* [PATCH v6 00/26] MADVISE FOR XE
From: Himal Prasad Ghimiray @ 2025-08-07 16:43 UTC
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
-v7
Change gpuvm layering on gpuvm_map_req struct
Fix EAGAIN return on garbage collector splitting vma
-v6
Rebase on gpuvm patches
Address review comments
-v5
Restore attributes to default after free from userspace
Add defragment worker to merge CPU mirror vma with default attributes
Avoid using VMA in uapi
Address review comments
-v4
Fix atomic policies
Fix attribute copy
Address review comments
This series provides a user API to assign attributes such as pat_index,
atomic operation type, and preferred location to SVM ranges.
The Kernel Mode Driver (KMD) may split existing VMAs to cover input
ranges, assign user-provided attributes, and invalidate existing PTEs so
that the next page fault/prefetch can use the new attributes.
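As a rough sketch of the intended userspace flow (the uapi is added in
patch 05; fd, vm_id, start and range below are placeholders):

    struct drm_xe_madvise madvise = {
            .vm_id = vm_id,
            .start = start,
            .range = range,
            .type = DRM_XE_MEM_RANGE_ATTR_ATOMIC,
            .atomic.val = DRM_XE_ATOMIC_DEVICE,
    };

    /* Hint the KMD that this range should use device atomics. */
    ioctl(fd, DRM_IOCTL_XE_MADVISE, &madvise);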
Boris Brezillon (2):
drm/gpuvm: Pass map arguments through a struct
drm/gpuvm: Kill drm_gpuva_init()
Himal Prasad Ghimiray (23):
drm/gpuvm: Support flags in drm_gpuvm_map_req
drm/gpuvm: Introduce DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE flag
drm/xe/uapi: Add madvise interface
drm/xe/vm: Add attributes struct as member of vma
drm/xe/vma: Move pat_index to vma attributes
drm/xe/vma: Modify new_vma to accept struct xe_vma_mem_attr as
parameter
drm/gpusvm: Make drm_gpusvm_for_each_* macros public
drm/xe/svm: Split system allocator vma in case of madvise call
drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for
madvise
drm/xe/svm: Add xe_svm_ranges_zap_ptes_in_range() for PTE zapping
drm/xe: Implement madvise ioctl for xe
drm/xe/svm: Add svm ranges migration policy on atomic access
drm/xe/madvise: Update migration policy based on preferred location
drm/xe/svm: Support DRM_XE_SVM_MEM_RANGE_ATTR_PAT memory attribute
drm/xe/uapi: Add flag for consulting madvise hints on svm prefetch
drm/xe/svm: Consult madvise preferred location in prefetch
drm/xe/bo: Add attributes field to xe_bo
drm/xe/bo: Update atomic_access attribute on madvise
drm/xe/madvise: Skip vma invalidation if mem attr are unchanged
drm/xe/vm: Add helper to check for default VMA memory attributes
drm/xe: Reset VMA attributes to default in SVM garbage collector
drm/xe: Enable madvise ioctl for xe
drm/xe/uapi: Add UAPI for querying VMA count and memory attributes
Matthew Brost (1):
drm/xe/pat: Add helper for compression mode of pat index
drivers/gpu/drm/drm_gpusvm.c | 122 ++-----
drivers/gpu/drm/drm_gpuvm.c | 190 ++++++-----
drivers/gpu/drm/imagination/pvr_vm.c | 15 +-
drivers/gpu/drm/msm/msm_gem_vma.c | 33 +-
drivers/gpu/drm/nouveau/nouveau_uvmm.c | 11 +-
drivers/gpu/drm/panthor/panthor_mmu.c | 13 +-
drivers/gpu/drm/xe/Makefile | 1 +
drivers/gpu/drm/xe/xe_bo.c | 29 +-
drivers/gpu/drm/xe/xe_bo_types.h | 8 +
drivers/gpu/drm/xe/xe_device.c | 4 +
drivers/gpu/drm/xe/xe_gt_pagefault.c | 35 +-
drivers/gpu/drm/xe/xe_pat.c | 10 +
drivers/gpu/drm/xe/xe_pat.h | 10 +
drivers/gpu/drm/xe/xe_pt.c | 39 ++-
drivers/gpu/drm/xe/xe_svm.c | 256 ++++++++++++--
drivers/gpu/drm/xe/xe_svm.h | 23 ++
drivers/gpu/drm/xe/xe_vm.c | 419 +++++++++++++++++++++--
drivers/gpu/drm/xe/xe_vm.h | 10 +-
drivers/gpu/drm/xe/xe_vm_madvise.c | 447 +++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_vm_madvise.h | 15 +
drivers/gpu/drm/xe/xe_vm_types.h | 50 ++-
include/drm/drm_gpusvm.h | 70 ++++
include/drm/drm_gpuvm.h | 58 +++-
include/uapi/drm/xe_drm.h | 274 +++++++++++++++
24 files changed, 1853 insertions(+), 289 deletions(-)
create mode 100644 drivers/gpu/drm/xe/xe_vm_madvise.c
create mode 100644 drivers/gpu/drm/xe/xe_vm_madvise.h
--
2.34.1
* [PATCH v6 01/26] drm/gpuvm: Pass map arguments through a struct
From: Himal Prasad Ghimiray @ 2025-08-07 16:43 UTC
To: intel-xe
Cc: Matthew Brost, Thomas Hellström, Boris Brezillon,
Danilo Krummrich, Brendan King, Boris Brezillon, Caterina Shablia,
Rob Clark, dri-devel, Himal Prasad Ghimiray
From: Boris Brezillon <boris.brezillon@collabora.com>
We are about to pass more arguments to drm_gpuvm_sm_map[_ops_create](),
so, before we do that, let's pass arguments through a struct instead
of changing each call site every time a new optional argument is added.
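For drivers, the conversion is mechanical; a minimal sketch of the new
calling convention (using the struct introduced below):

    struct drm_gpuvm_map_req map_req = {
            .op_map.va.addr = addr,
            .op_map.va.range = range,
            .op_map.gem.obj = obj,
            .op_map.gem.offset = offset,
    };

    ops = drm_gpuvm_sm_map_ops_create(gpuvm, &map_req);
    if (IS_ERR(ops))
            return PTR_ERR(ops);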
v5
- Use drm_gpuva_op_map, same as drm_gpuvm_map_req
- Rebase changes for drm_gpuvm_sm_map_exec_lock()
- Fix kernel-docs
v6
- Use drm_gpuvm_map_req (Danilo/Matt)
Cc: Danilo Krummrich <dakr@kernel.org>
Cc: Brendan King <Brendan.King@imgtec.com>
Cc: Boris Brezillon <bbrezillon@kernel.org>
Cc: Caterina Shablia <caterina.shablia@collabora.com>
Cc: Rob Clark <robin.clark@oss.qualcomm.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: <dri-devel@lists.freedesktop.org>
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Caterina Shablia <caterina.shablia@collabora.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/drm_gpuvm.c | 105 ++++++++++---------------
drivers/gpu/drm/imagination/pvr_vm.c | 15 ++--
drivers/gpu/drm/msm/msm_gem_vma.c | 25 ++++--
drivers/gpu/drm/nouveau/nouveau_uvmm.c | 11 ++-
drivers/gpu/drm/panthor/panthor_mmu.c | 13 ++-
drivers/gpu/drm/xe/xe_vm.c | 13 ++-
include/drm/drm_gpuvm.h | 20 +++--
7 files changed, 114 insertions(+), 88 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index bbc7fecb6f4a..b3a01c40001b 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -486,13 +486,18 @@
* u64 addr, u64 range,
* struct drm_gem_object *obj, u64 offset)
* {
+ * struct drm_gpuvm_map_req map_req = {
+ * .op_map.va.addr = addr,
+ * .op_map.va.range = range,
+ * .op_map.gem.obj = obj,
+ * .op_map.gem.offset = offset,
+ * };
* struct drm_gpuva_ops *ops;
* struct drm_gpuva_op *op
* struct drm_gpuvm_bo *vm_bo;
*
* driver_lock_va_space();
- * ops = drm_gpuvm_sm_map_ops_create(gpuvm, addr, range,
- * obj, offset);
+ * ops = drm_gpuvm_sm_map_ops_create(gpuvm, &map_req);
* if (IS_ERR(ops))
* return PTR_ERR(ops);
*
@@ -2054,16 +2059,15 @@ EXPORT_SYMBOL_GPL(drm_gpuva_unmap);
static int
op_map_cb(const struct drm_gpuvm_ops *fn, void *priv,
- u64 addr, u64 range,
- struct drm_gem_object *obj, u64 offset)
+ const struct drm_gpuvm_map_req *req)
{
struct drm_gpuva_op op = {};
op.op = DRM_GPUVA_OP_MAP;
- op.map.va.addr = addr;
- op.map.va.range = range;
- op.map.gem.obj = obj;
- op.map.gem.offset = offset;
+ op.map.va.addr = req->op_map.va.addr;
+ op.map.va.range = req->op_map.va.range;
+ op.map.gem.obj = req->op_map.gem.obj;
+ op.map.gem.offset = req->op_map.gem.offset;
return fn->sm_step_map(&op, priv);
}
@@ -2102,17 +2106,16 @@ op_unmap_cb(const struct drm_gpuvm_ops *fn, void *priv,
static int
__drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
const struct drm_gpuvm_ops *ops, void *priv,
- u64 req_addr, u64 req_range,
- struct drm_gem_object *req_obj, u64 req_offset)
+ const struct drm_gpuvm_map_req *req)
{
struct drm_gpuva *va, *next;
- u64 req_end = req_addr + req_range;
+ u64 req_end = req->op_map.va.addr + req->op_map.va.range;
int ret;
- if (unlikely(!drm_gpuvm_range_valid(gpuvm, req_addr, req_range)))
+ if (unlikely(!drm_gpuvm_range_valid(gpuvm, req->op_map.va.addr, req->op_map.va.range)))
return -EINVAL;
- drm_gpuvm_for_each_va_range_safe(va, next, gpuvm, req_addr, req_end) {
+ drm_gpuvm_for_each_va_range_safe(va, next, gpuvm, req->op_map.va.addr, req_end) {
struct drm_gem_object *obj = va->gem.obj;
u64 offset = va->gem.offset;
u64 addr = va->va.addr;
@@ -2120,9 +2123,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
u64 end = addr + range;
bool merge = !!va->gem.obj;
- if (addr == req_addr) {
- merge &= obj == req_obj &&
- offset == req_offset;
+ if (addr == req->op_map.va.addr) {
+ merge &= obj == req->op_map.gem.obj &&
+ offset == req->op_map.gem.offset;
if (end == req_end) {
ret = op_unmap_cb(ops, priv, va, merge);
@@ -2141,9 +2144,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
if (end > req_end) {
struct drm_gpuva_op_map n = {
.va.addr = req_end,
- .va.range = range - req_range,
+ .va.range = range - req->op_map.va.range,
.gem.obj = obj,
- .gem.offset = offset + req_range,
+ .gem.offset = offset + req->op_map.va.range,
};
struct drm_gpuva_op_unmap u = {
.va = va,
@@ -2155,8 +2158,8 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
return ret;
break;
}
- } else if (addr < req_addr) {
- u64 ls_range = req_addr - addr;
+ } else if (addr < req->op_map.va.addr) {
+ u64 ls_range = req->op_map.va.addr - addr;
struct drm_gpuva_op_map p = {
.va.addr = addr,
.va.range = ls_range,
@@ -2165,8 +2168,8 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
};
struct drm_gpuva_op_unmap u = { .va = va };
- merge &= obj == req_obj &&
- offset + ls_range == req_offset;
+ merge &= obj == req->op_map.gem.obj &&
+ offset + ls_range == req->op_map.gem.offset;
u.keep = merge;
if (end == req_end) {
@@ -2189,7 +2192,7 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
.va.range = end - req_end,
.gem.obj = obj,
.gem.offset = offset + ls_range +
- req_range,
+ req->op_map.va.range,
};
ret = op_remap_cb(ops, priv, &p, &n, &u);
@@ -2197,10 +2200,10 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
return ret;
break;
}
- } else if (addr > req_addr) {
- merge &= obj == req_obj &&
- offset == req_offset +
- (addr - req_addr);
+ } else if (addr > req->op_map.va.addr) {
+ merge &= obj == req->op_map.gem.obj &&
+ offset == req->op_map.gem.offset +
+ (addr - req->op_map.va.addr);
if (end == req_end) {
ret = op_unmap_cb(ops, priv, va, merge);
@@ -2236,9 +2239,7 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
}
}
- return op_map_cb(ops, priv,
- req_addr, req_range,
- req_obj, req_offset);
+ return op_map_cb(ops, priv, req);
}
static int
@@ -2303,10 +2304,7 @@ __drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
* drm_gpuvm_sm_map() - calls the &drm_gpuva_op split/merge steps
* @gpuvm: the &drm_gpuvm representing the GPU VA space
* @priv: pointer to a driver private data structure
- * @req_addr: the start address of the new mapping
- * @req_range: the range of the new mapping
- * @req_obj: the &drm_gem_object to map
- * @req_offset: the offset within the &drm_gem_object
+ * @req: ptr to struct drm_gpuvm_map_req
*
* This function iterates the given range of the GPU VA space. It utilizes the
* &drm_gpuvm_ops to call back into the driver providing the split and merge
@@ -2333,8 +2331,7 @@ __drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
*/
int
drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
- u64 req_addr, u64 req_range,
- struct drm_gem_object *req_obj, u64 req_offset)
+ const struct drm_gpuvm_map_req *req)
{
const struct drm_gpuvm_ops *ops = gpuvm->ops;
@@ -2343,9 +2340,7 @@ drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
ops->sm_step_unmap)))
return -EINVAL;
- return __drm_gpuvm_sm_map(gpuvm, ops, priv,
- req_addr, req_range,
- req_obj, req_offset);
+ return __drm_gpuvm_sm_map(gpuvm, ops, priv, req);
}
EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map);
@@ -2421,10 +2416,7 @@ static const struct drm_gpuvm_ops lock_ops = {
* @gpuvm: the &drm_gpuvm representing the GPU VA space
* @exec: the &drm_exec locking context
* @num_fences: for newly mapped objects, the # of fences to reserve
- * @req_addr: the start address of the range to unmap
- * @req_range: the range of the mappings to unmap
- * @req_obj: the &drm_gem_object to map
- * @req_offset: the offset within the &drm_gem_object
+ * @req: ptr to drm_gpuvm_map_req struct
*
* This function locks (drm_exec_lock_obj()) objects that will be unmapped/
* remapped, and locks+prepares (drm_exec_prepare_object()) objects that
@@ -2445,9 +2437,7 @@ static const struct drm_gpuvm_ops lock_ops = {
* ret = drm_gpuvm_sm_unmap_exec_lock(gpuvm, &exec, op->addr, op->range);
* break;
* case DRIVER_OP_MAP:
- * ret = drm_gpuvm_sm_map_exec_lock(gpuvm, &exec, num_fences,
- * op->addr, op->range,
- * obj, op->obj_offset);
+ * ret = drm_gpuvm_sm_map_exec_lock(gpuvm, &exec, num_fences, &req);
* break;
* }
*
@@ -2478,18 +2468,15 @@ static const struct drm_gpuvm_ops lock_ops = {
int
drm_gpuvm_sm_map_exec_lock(struct drm_gpuvm *gpuvm,
struct drm_exec *exec, unsigned int num_fences,
- u64 req_addr, u64 req_range,
- struct drm_gem_object *req_obj, u64 req_offset)
+ struct drm_gpuvm_map_req *req)
{
- if (req_obj) {
- int ret = drm_exec_prepare_obj(exec, req_obj, num_fences);
+ if (req->op_map.gem.obj) {
+ int ret = drm_exec_prepare_obj(exec, req->op_map.gem.obj, num_fences);
if (ret)
return ret;
}
- return __drm_gpuvm_sm_map(gpuvm, &lock_ops, exec,
- req_addr, req_range,
- req_obj, req_offset);
+ return __drm_gpuvm_sm_map(gpuvm, &lock_ops, exec, req);
}
EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map_exec_lock);
@@ -2611,10 +2598,7 @@ static const struct drm_gpuvm_ops gpuvm_list_ops = {
/**
* drm_gpuvm_sm_map_ops_create() - creates the &drm_gpuva_ops to split and merge
* @gpuvm: the &drm_gpuvm representing the GPU VA space
- * @req_addr: the start address of the new mapping
- * @req_range: the range of the new mapping
- * @req_obj: the &drm_gem_object to map
- * @req_offset: the offset within the &drm_gem_object
+ * @req: map request arguments
*
* This function creates a list of operations to perform splitting and merging
* of existent mapping(s) with the newly requested one.
@@ -2642,8 +2626,7 @@ static const struct drm_gpuvm_ops gpuvm_list_ops = {
*/
struct drm_gpuva_ops *
drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
- u64 req_addr, u64 req_range,
- struct drm_gem_object *req_obj, u64 req_offset)
+ const struct drm_gpuvm_map_req *req)
{
struct drm_gpuva_ops *ops;
struct {
@@ -2661,9 +2644,7 @@ drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
args.vm = gpuvm;
args.ops = ops;
- ret = __drm_gpuvm_sm_map(gpuvm, &gpuvm_list_ops, &args,
- req_addr, req_range,
- req_obj, req_offset);
+ ret = __drm_gpuvm_sm_map(gpuvm, &gpuvm_list_ops, &args, req);
if (ret)
goto err_free_ops;
diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c
index 2896fa7501b1..0f6b4cdb5fd8 100644
--- a/drivers/gpu/drm/imagination/pvr_vm.c
+++ b/drivers/gpu/drm/imagination/pvr_vm.c
@@ -185,12 +185,17 @@ struct pvr_vm_bind_op {
static int pvr_vm_bind_op_exec(struct pvr_vm_bind_op *bind_op)
{
switch (bind_op->type) {
- case PVR_VM_BIND_TYPE_MAP:
+ case PVR_VM_BIND_TYPE_MAP: {
+ const struct drm_gpuvm_map_req map_req = {
+ .op_map.va.addr = bind_op->device_addr,
+ .op_map.va.range = bind_op->size,
+ .op_map.gem.obj = gem_from_pvr_gem(bind_op->pvr_obj),
+ .op_map.gem.offset = bind_op->offset,
+ };
+
return drm_gpuvm_sm_map(&bind_op->vm_ctx->gpuvm_mgr,
- bind_op, bind_op->device_addr,
- bind_op->size,
- gem_from_pvr_gem(bind_op->pvr_obj),
- bind_op->offset);
+ bind_op, &map_req);
+ }
case PVR_VM_BIND_TYPE_UNMAP:
return drm_gpuvm_sm_unmap(&bind_op->vm_ctx->gpuvm_mgr,
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 3cd8562a5109..2ca408c40369 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -1172,10 +1172,17 @@ vm_bind_job_lock_objects(struct msm_vm_bind_job *job, struct drm_exec *exec)
break;
case MSM_VM_BIND_OP_MAP:
case MSM_VM_BIND_OP_MAP_NULL:
- ret = drm_gpuvm_sm_map_exec_lock(job->vm, exec, 1,
- op->iova, op->range,
- op->obj, op->obj_offset);
+ {
+ struct drm_gpuvm_map_req map_req = {
+ .op_map.va.addr = op->iova,
+ .op_map.va.range = op->range,
+ .op_map.gem.obj = op->obj,
+ .op_map.gem.offset = op->obj_offset,
+ };
+
+ ret = drm_gpuvm_sm_map_exec_lock(job->vm, exec, 1, &map_req);
break;
+ }
default:
/*
* lookup_op() should have already thrown an error for
@@ -1283,9 +1290,17 @@ vm_bind_job_prepare(struct msm_vm_bind_job *job)
arg.flags |= MSM_VMA_DUMP;
fallthrough;
case MSM_VM_BIND_OP_MAP_NULL:
- ret = drm_gpuvm_sm_map(job->vm, &arg, op->iova,
- op->range, op->obj, op->obj_offset);
+ {
+ struct drm_gpuvm_map_req map_req = {
+ .op_map.va.addr = op->iova,
+ .op_map.va.range = op->range,
+ .op_map.gem.obj = op->obj,
+ .op_map.gem.offset = op->obj_offset,
+ };
+
+ ret = drm_gpuvm_sm_map(job->vm, &arg, &map_req);
break;
+ }
default:
/*
* lookup_op() should have already thrown an error for
diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
index ddfc46bc1b3e..92f87520eeb8 100644
--- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
@@ -1276,6 +1276,12 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job,
break;
case OP_MAP: {
struct nouveau_uvma_region *reg;
+ struct drm_gpuvm_map_req map_req = {
+ .op_map.va.addr = op->va.addr,
+ .op_map.va.range = op->va.range,
+ .op_map.gem.obj = op->gem.obj,
+ .op_map.gem.offset = op->gem.offset,
+ };
reg = nouveau_uvma_region_find_first(uvmm,
op->va.addr,
@@ -1301,10 +1307,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job,
}
op->ops = drm_gpuvm_sm_map_ops_create(&uvmm->base,
- op->va.addr,
- op->va.range,
- op->gem.obj,
- op->gem.offset);
+ &map_req);
if (IS_ERR(op->ops)) {
ret = PTR_ERR(op->ops);
goto unwind_continue;
diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index 4140f697ba5a..5ed4573b3a6b 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -2169,15 +2169,22 @@ panthor_vm_exec_op(struct panthor_vm *vm, struct panthor_vm_op_ctx *op,
mutex_lock(&vm->op_lock);
vm->op_ctx = op;
switch (op_type) {
- case DRM_PANTHOR_VM_BIND_OP_TYPE_MAP:
+ case DRM_PANTHOR_VM_BIND_OP_TYPE_MAP: {
+ const struct drm_gpuvm_map_req map_req = {
+ .op_map.va.addr = op->va.addr,
+ .op_map.va.range = op->va.range,
+ .op_map.gem.obj = op->map.vm_bo->obj,
+ .op_map.gem.offset = op->map.bo_offset,
+ };
+
if (vm->unusable) {
ret = -EINVAL;
break;
}
- ret = drm_gpuvm_sm_map(&vm->base, vm, op->va.addr, op->va.range,
- op->map.vm_bo->obj, op->map.bo_offset);
+ ret = drm_gpuvm_sm_map(&vm->base, vm, &map_req);
break;
+ }
case DRM_PANTHOR_VM_BIND_OP_TYPE_UNMAP:
ret = drm_gpuvm_sm_unmap(&vm->base, vm, op->va.addr, op->va.range);
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 432ea325677d..9fcc52032a1d 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2316,10 +2316,17 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
switch (operation) {
case DRM_XE_VM_BIND_OP_MAP:
- case DRM_XE_VM_BIND_OP_MAP_USERPTR:
- ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, addr, range,
- obj, bo_offset_or_userptr);
+ case DRM_XE_VM_BIND_OP_MAP_USERPTR: {
+ struct drm_gpuvm_map_req map_req = {
+ .op_map.va.addr = addr,
+ .op_map.va.range = range,
+ .op_map.gem.obj = obj,
+ .op_map.gem.offset = bo_offset_or_userptr,
+ };
+
+ ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, &map_req);
break;
+ }
case DRM_XE_VM_BIND_OP_UNMAP:
ops = drm_gpuvm_sm_unmap_ops_create(&vm->gpuvm, addr, range);
break;
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 274532facfd6..3cf0a84b8b08 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -1058,10 +1058,20 @@ struct drm_gpuva_ops {
*/
#define drm_gpuva_next_op(op) list_next_entry(op, entry)
+/**
+ * struct drm_gpuvm_map_req - arguments passed to drm_gpuvm_sm_map[_ops_create]()
+ */
+struct drm_gpuvm_map_req {
+ /**
+ * @op_map: struct drm_gpuva_op_map
+ */
+ struct drm_gpuva_op_map op_map;
+};
+
struct drm_gpuva_ops *
drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
- u64 addr, u64 range,
- struct drm_gem_object *obj, u64 offset);
+ const struct drm_gpuvm_map_req *req);
+
struct drm_gpuva_ops *
drm_gpuvm_sm_unmap_ops_create(struct drm_gpuvm *gpuvm,
u64 addr, u64 range);
@@ -1205,16 +1215,14 @@ struct drm_gpuvm_ops {
};
int drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
- u64 addr, u64 range,
- struct drm_gem_object *obj, u64 offset);
+ const struct drm_gpuvm_map_req *req);
int drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm, void *priv,
u64 addr, u64 range);
int drm_gpuvm_sm_map_exec_lock(struct drm_gpuvm *gpuvm,
struct drm_exec *exec, unsigned int num_fences,
- u64 req_addr, u64 req_range,
- struct drm_gem_object *obj, u64 offset);
+ struct drm_gpuvm_map_req *req);
int drm_gpuvm_sm_unmap_exec_lock(struct drm_gpuvm *gpuvm, struct drm_exec *exec,
u64 req_addr, u64 req_range);
--
2.34.1
* [PATCH v6 02/26] drm/gpuvm: Kill drm_gpuva_init()
From: Himal Prasad Ghimiray @ 2025-08-07 16:43 UTC
To: intel-xe
Cc: Matthew Brost, Thomas Hellström, Boris Brezillon,
Danilo Krummrich, Rob Clark, Caterina Shablia,
Himal Prasad Ghimiray
From: Boris Brezillon <boris.brezillon@collabora.com>
drm_gpuva_init() only has one internal user, and given we are about to
add new optional fields, it only adds maintenance burden for no real
benefit, so let's kill the thing now.
v2
- Remove usage from msm
Cc: Danilo Krummrich <dakr@kernel.org>
Cc: Rob Clark <robin.clark@oss.qualcomm.com>
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Caterina Shablia <caterina.shablia@collabora.com>
Acked-by: Danilo Krummrich <dakr@kernel.org>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/msm/msm_gem_vma.c | 8 +++++++-
include/drm/drm_gpuvm.h | 15 ++++-----------
2 files changed, 11 insertions(+), 12 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
index 2ca408c40369..9d833b7ae13c 100644
--- a/drivers/gpu/drm/msm/msm_gem_vma.c
+++ b/drivers/gpu/drm/msm/msm_gem_vma.c
@@ -371,6 +371,12 @@ struct drm_gpuva *
msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
u64 offset, u64 range_start, u64 range_end)
{
+ struct drm_gpuva_op_map op_map = {
+ .va.addr = range_start,
+ .va.range = range_end - range_start,
+ .gem.obj = obj,
+ .gem.offset = offset,
+ };
struct msm_gem_vm *vm = to_msm_vm(gpuvm);
struct drm_gpuvm_bo *vm_bo;
struct msm_gem_vma *vma;
@@ -399,7 +405,7 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj,
if (obj)
GEM_WARN_ON((range_end - range_start) > obj->size);
- drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, offset);
+ drm_gpuva_init_from_op(&vma->base, &op_map);
vma->mapped = false;
ret = drm_gpuva_insert(&vm->base, &vma->base);
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 3cf0a84b8b08..cbb9b6519462 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -160,15 +160,6 @@ struct drm_gpuva *drm_gpuva_find_first(struct drm_gpuvm *gpuvm,
struct drm_gpuva *drm_gpuva_find_prev(struct drm_gpuvm *gpuvm, u64 start);
struct drm_gpuva *drm_gpuva_find_next(struct drm_gpuvm *gpuvm, u64 end);
-static inline void drm_gpuva_init(struct drm_gpuva *va, u64 addr, u64 range,
- struct drm_gem_object *obj, u64 offset)
-{
- va->va.addr = addr;
- va->va.range = range;
- va->gem.obj = obj;
- va->gem.offset = offset;
-}
-
/**
* drm_gpuva_invalidate() - sets whether the backing GEM of this &drm_gpuva is
* invalidated
@@ -1089,8 +1080,10 @@ void drm_gpuva_ops_free(struct drm_gpuvm *gpuvm,
static inline void drm_gpuva_init_from_op(struct drm_gpuva *va,
struct drm_gpuva_op_map *op)
{
- drm_gpuva_init(va, op->va.addr, op->va.range,
- op->gem.obj, op->gem.offset);
+ va->va.addr = op->va.addr;
+ va->va.range = op->va.range;
+ va->gem.obj = op->gem.obj;
+ va->gem.offset = op->gem.offset;
}
/**
--
2.34.1
* [PATCH v6 03/26] drm/gpuvm: Support flags in drm_gpuvm_map_req
From: Himal Prasad Ghimiray @ 2025-08-07 16:43 UTC
To: intel-xe
Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray,
Danilo Krummrich, Boris Brezillon, Caterina Shablia, dri-devel
This change adds support for passing flags to drm_gpuvm_sm_map() and
sm_map_ops_create(), enabling future extensions that affect split/merge
logic in drm_gpuvm.
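A minimal sketch of a request carrying flags (only
DRM_GPUVM_SM_MAP_OPS_FLAG_NONE exists after this patch; further flags
arrive in the next patch):

    struct drm_gpuvm_map_req map_req = {
            .op_map.va.addr = addr,
            .op_map.va.range = range,
            .op_map.gem.obj = obj,
            .op_map.gem.offset = offset,
            .flags = DRM_GPUVM_SM_MAP_OPS_FLAG_NONE,
    };

    ret = drm_gpuvm_sm_map(gpuvm, priv, &map_req);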
v2
- Move flag to drm_gpuvm_map_req
Cc: Danilo Krummrich <dakr@kernel.org>
Cc: Boris Brezillon <bbrezillon@kernel.org>
Cc: Caterina Shablia <caterina.shablia@collabora.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: <dri-devel@lists.freedesktop.org>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
include/drm/drm_gpuvm.h | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index cbb9b6519462..116f77abd570 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -1049,6 +1049,13 @@ struct drm_gpuva_ops {
*/
#define drm_gpuva_next_op(op) list_next_entry(op, entry)
+enum drm_gpuvm_sm_map_ops_flags {
+ /**
+ * %DRM_GPUVM_SM_MAP_OPS_FLAG_NONE: DEFAULT sm_map ops
+ */
+ DRM_GPUVM_SM_MAP_OPS_FLAG_NONE = 0,
+};
+
/**
* struct drm_gpuvm_map_req - arguments passed to drm_gpuvm_sm_map[_ops_create]()
*/
@@ -1057,6 +1064,11 @@ struct drm_gpuvm_map_req {
* @op_map: struct drm_gpuva_op_map
*/
struct drm_gpuva_op_map op_map;
+
+ /**
+ * @flags: drm_gpuvm_sm_map_ops_flags for this mapping request
+ */
+ enum drm_gpuvm_sm_map_ops_flags flags;
};
struct drm_gpuva_ops *
--
2.34.1
* [PATCH v6 04/26] drm/gpuvm: Introduce DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE flag
From: Himal Prasad Ghimiray @ 2025-08-07 16:43 UTC
To: intel-xe
Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray,
Danilo Krummrich, Boris Brezillon, dri-devel
- DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE: This flag is used by
drm_gpuvm_sm_map_ops_create() to iterate over GPU VMAs in the
user-provided range and split an existing non-GEM-object VMA if the
start or end of the input range lies within it. The operations can
create up to 2 REMAPs and 2 MAPs (see the sketch below). Its purpose is
to let the Xe driver assign attributes to GPU VMAs within the
user-defined range. Unlike the default mode, operations with this flag
never produce UNMAPs or merges, and may produce no final MAP operation
at all.
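To make the split semantics concrete, a sketch with arbitrary addresses,
assuming two adjacent CPU-mirror (non-GEM) VMAs and a madvise range
whose endpoints fall inside each of them:

    Existing CPU-mirror VMAs:  |------ A ------|------ B ------|
                               0x0000          0x4000          0x8000
    madvise range:                      |===========|
                                        0x2000      0x6000

    Resulting ops (no UNMAPs, no merges):
      REMAP(A): prev = [0x0000, 0x2000)
      MAP            = [0x2000, 0x4000)   <- tail of A
      REMAP(B): next = [0x6000, 0x8000)
      MAP            = [0x4000, 0x6000)   <- head of B

Afterwards the madvise range is covered by two freshly mapped VMAs whose
attributes the driver can set, while the regions outside the range keep
their original VMAs.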
v2
- use drm_gpuvm_sm_map_ops_create with flags instead of defining new
ops_create (Danilo)
- Add doc (Danilo)
v3
- Fix doc
- Fix unmapping check
v4
- Fix mapping for non madvise ops
v5
- Fix mapping (Matthew Brost)
- Rebase on top of struct changes
v6
- flag moved to map_req
Cc: Danilo Krummrich <dakr@kernel.org>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Boris Brezillon <bbrezillon@kernel.org>
Cc: <dri-devel@lists.freedesktop.org>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/drm_gpuvm.c | 87 +++++++++++++++++++++++++++++++------
include/drm/drm_gpuvm.h | 11 +++++
2 files changed, 84 insertions(+), 14 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index b3a01c40001b..d8f5f594a415 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -2110,6 +2110,8 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
{
struct drm_gpuva *va, *next;
u64 req_end = req->op_map.va.addr + req->op_map.va.range;
+ bool is_madvise_ops = (req->flags & DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE);
+ bool needs_map = !is_madvise_ops;
int ret;
if (unlikely(!drm_gpuvm_range_valid(gpuvm, req->op_map.va.addr, req->op_map.va.range)))
@@ -2122,26 +2124,35 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
u64 range = va->va.range;
u64 end = addr + range;
bool merge = !!va->gem.obj;
+ bool skip_madvise_ops = is_madvise_ops && merge;
+ needs_map = !is_madvise_ops;
if (addr == req->op_map.va.addr) {
merge &= obj == req->op_map.gem.obj &&
offset == req->op_map.gem.offset;
if (end == req_end) {
- ret = op_unmap_cb(ops, priv, va, merge);
- if (ret)
- return ret;
+ if (!is_madvise_ops) {
+ ret = op_unmap_cb(ops, priv, va, merge);
+ if (ret)
+ return ret;
+ }
break;
}
if (end < req_end) {
- ret = op_unmap_cb(ops, priv, va, merge);
- if (ret)
- return ret;
+ if (!is_madvise_ops) {
+ ret = op_unmap_cb(ops, priv, va, merge);
+ if (ret)
+ return ret;
+ }
continue;
}
if (end > req_end) {
+ if (skip_madvise_ops)
+ break;
+
struct drm_gpuva_op_map n = {
.va.addr = req_end,
.va.range = range - req->op_map.va.range,
@@ -2156,6 +2167,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
ret = op_remap_cb(ops, priv, NULL, &n, &u);
if (ret)
return ret;
+
+ if (is_madvise_ops)
+ needs_map = true;
break;
}
} else if (addr < req->op_map.va.addr) {
@@ -2173,20 +2187,45 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
u.keep = merge;
if (end == req_end) {
+ if (skip_madvise_ops)
+ break;
+
ret = op_remap_cb(ops, priv, &p, NULL, &u);
if (ret)
return ret;
+
+ if (is_madvise_ops)
+ needs_map = true;
+
break;
}
if (end < req_end) {
+ if (skip_madvise_ops)
+ continue;
+
ret = op_remap_cb(ops, priv, &p, NULL, &u);
if (ret)
return ret;
+
+ if (is_madvise_ops) {
+ struct drm_gpuvm_map_req map_req = {
+ .op_map.va.addr = req->op_map.va.addr,
+ .op_map.va.range = end - req->op_map.va.addr,
+ };
+
+ ret = op_map_cb(ops, priv, &map_req);
+ if (ret)
+ return ret;
+ }
+
continue;
}
if (end > req_end) {
+ if (skip_madvise_ops)
+ break;
+
struct drm_gpuva_op_map n = {
.va.addr = req_end,
.va.range = end - req_end,
@@ -2198,6 +2237,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
ret = op_remap_cb(ops, priv, &p, &n, &u);
if (ret)
return ret;
+
+ if (is_madvise_ops)
+ needs_map = true;
break;
}
} else if (addr > req->op_map.va.addr) {
@@ -2206,20 +2248,29 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
(addr - req->op_map.va.addr);
if (end == req_end) {
- ret = op_unmap_cb(ops, priv, va, merge);
- if (ret)
- return ret;
+ if (!is_madvise_ops) {
+ ret = op_unmap_cb(ops, priv, va, merge);
+ if (ret)
+ return ret;
+ }
+
break;
}
if (end < req_end) {
- ret = op_unmap_cb(ops, priv, va, merge);
- if (ret)
- return ret;
+ if (!is_madvise_ops) {
+ ret = op_unmap_cb(ops, priv, va, merge);
+ if (ret)
+ return ret;
+ }
+
continue;
}
if (end > req_end) {
+ if (skip_madvise_ops)
+ break;
+
struct drm_gpuva_op_map n = {
.va.addr = req_end,
.va.range = end - req_end,
@@ -2234,12 +2285,20 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
ret = op_remap_cb(ops, priv, NULL, &n, &u);
if (ret)
return ret;
+
+ if (is_madvise_ops) {
+ struct drm_gpuvm_map_req map_req = {
+ .op_map.va.addr = addr,
+ .op_map.va.range = req_end - addr,
+ };
+
+ return op_map_cb(ops, priv, &map_req);
+ }
break;
}
}
}
-
- return op_map_cb(ops, priv, req);
+ return needs_map ? op_map_cb(ops, priv, req) : 0;
}
static int
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 116f77abd570..fa2b74a54534 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -1054,6 +1054,17 @@ enum drm_gpuvm_sm_map_ops_flags {
* %DRM_GPUVM_SM_MAP_OPS_FLAG_NONE: DEFAULT sm_map ops
*/
DRM_GPUVM_SM_MAP_OPS_FLAG_NONE = 0,
+
+ /**
+ * %DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE: This flag is used by
+ * drm_gpuvm_sm_map_ops_create() to iterate over GPU VMAs in the
+ * user-provided range and split an existing non-GEM-object VMA if the
+ * start or end of the input range lies within it. The operations can
+ * create up to 2 REMAPs and 2 MAPs. Unlike the default mode, operations
+ * with this flag never produce UNMAPs or merges, and may produce no
+ * final MAP operation at all.
+ */
+ DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE = BIT(0),
};
/**
--
2.34.1
* [PATCH v6 05/26] drm/xe/uapi: Add madvise interface
From: Himal Prasad Ghimiray @ 2025-08-07 16:43 UTC
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
This commit introduces a new madvise interface to support
driver-specific ioctl operations. The madvise interface allows for more
efficient memory management by providing hints to the driver about the
expected memory usage and the PTE update policy for a gpuvma.
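For example, a preferred-location hint might look like this (a sketch
against the uapi added below; fd, vm_id, start and range are
placeholders):

    struct drm_xe_madvise madvise = {
            .vm_id = vm_id,
            .start = start,
            .range = range,
            .type = DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC,
            .preferred_mem_loc = {
                    .devmem_fd = DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE,
                    .migration_policy = DRM_XE_MIGRATE_ALL_PAGES,
            },
    };

    ioctl(fd, DRM_IOCTL_XE_MADVISE, &madvise);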
v2 (Matthew/Thomas)
- Drop num_ops support
- Drop purgeable support
- Add kernel-docs
- IOWR/IOW
v3 (Matthew/Thomas)
- Reorder attributes
- use __u16 for migration_policy
- use __u64 for reserved in unions
- Avoid usage of vma
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
include/uapi/drm/xe_drm.h | 130 ++++++++++++++++++++++++++++++++++++++
1 file changed, 130 insertions(+)
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index c721e130c1d2..4e6e9a9164ee 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -81,6 +81,7 @@ extern "C" {
* - &DRM_IOCTL_XE_EXEC
* - &DRM_IOCTL_XE_WAIT_USER_FENCE
* - &DRM_IOCTL_XE_OBSERVATION
+ * - &DRM_IOCTL_XE_MADVISE
*/
/*
@@ -102,6 +103,7 @@ extern "C" {
#define DRM_XE_EXEC 0x09
#define DRM_XE_WAIT_USER_FENCE 0x0a
#define DRM_XE_OBSERVATION 0x0b
+#define DRM_XE_MADVISE 0x0c
/* Must be kept compact -- no holes */
@@ -117,6 +119,7 @@ extern "C" {
#define DRM_IOCTL_XE_EXEC DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC, struct drm_xe_exec)
#define DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
#define DRM_IOCTL_XE_OBSERVATION DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
+#define DRM_IOCTL_XE_MADVISE DRM_IOW(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
/**
* DOC: Xe IOCTL Extensions
@@ -1978,6 +1981,133 @@ struct drm_xe_query_eu_stall {
__u64 sampling_rates[];
};
+/**
+ * struct drm_xe_madvise - Input of &DRM_IOCTL_XE_MADVISE
+ *
+ * This structure is used to set memory attributes for a virtual address range
+ * in a VM. The type of attribute is specified by @type, and the corresponding
+ * union member is used to provide additional parameters for @type.
+ *
+ * Supported attribute types:
+ * - DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC: Set preferred memory location.
+ * - DRM_XE_MEM_RANGE_ATTR_ATOMIC: Set atomic access policy.
+ * - DRM_XE_MEM_RANGE_ATTR_PAT: Set page attribute table index.
+ *
+ * Example:
+ *
+ * .. code-block:: C
+ *
+ * struct drm_xe_madvise madvise = {
+ * .vm_id = vm_id,
+ * .start = 0x100000,
+ * .range = 0x2000,
+ * .type = DRM_XE_MEM_RANGE_ATTR_ATOMIC,
+ * .atomic.val = DRM_XE_ATOMIC_DEVICE,
+ * };
+ *
+ * ioctl(fd, DRM_IOCTL_XE_MADVISE, &madvise);
+ *
+ */
+struct drm_xe_madvise {
+ /** @extensions: Pointer to the first extension struct, if any */
+ __u64 extensions;
+
+ /** @start: start of the virtual address range */
+ __u64 start;
+
+ /** @range: size of the virtual address range */
+ __u64 range;
+
+ /** @vm_id: vm_id of the virtual range */
+ __u32 vm_id;
+
+#define DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC 0
+#define DRM_XE_MEM_RANGE_ATTR_ATOMIC 1
+#define DRM_XE_MEM_RANGE_ATTR_PAT 2
+ /** @type: type of attribute */
+ __u32 type;
+
+ union {
+ /**
+ * @preferred_mem_loc: preferred memory location
+ *
+ * Used when @type == DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC
+ *
+ * Supported values for @preferred_mem_loc.devmem_fd:
+ * - DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE: set vram of faulting tile as preferred loc
+ * - DRM_XE_PREFERRED_LOC_DEFAULT_SYSTEM: set smem as preferred loc
+ *
+ * Supported values for @preferred_mem_loc.migration_policy:
+ * - DRM_XE_MIGRATE_ALL_PAGES
+ * - DRM_XE_MIGRATE_ONLY_SYSTEM_PAGES
+ */
+ struct {
+#define DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE 0
+#define DRM_XE_PREFERRED_LOC_DEFAULT_SYSTEM -1
+ /** @preferred_mem_loc.devmem_fd: fd for preferred loc */
+ __u32 devmem_fd;
+
+#define DRM_XE_MIGRATE_ALL_PAGES 0
+#define DRM_XE_MIGRATE_ONLY_SYSTEM_PAGES 1
+ /** @preferred_mem_loc.migration_policy: Page migration policy */
+ __u16 migration_policy;
+
+ /** @preferred_mem_loc.pad: MBZ */
+ __u16 pad;
+
+ /** @preferred_mem_loc.reserved: Reserved */
+ __u64 reserved;
+ } preferred_mem_loc;
+
+ /**
+ * @atomic: Atomic access policy
+ *
+ * Used when @type == DRM_XE_MEM_RANGE_ATTR_ATOMIC.
+ *
+ * Supported values for @atomic.val:
+ * - DRM_XE_ATOMIC_UNDEFINED: Undefined or default behaviour
+ * Support both GPU and CPU atomic operations for system allocator
+ * Support GPU atomic operations for normal (BO) allocator
+ * - DRM_XE_ATOMIC_DEVICE: Support GPU atomic operations
+ * - DRM_XE_ATOMIC_GLOBAL: Support both GPU and CPU atomic operations
+ * - DRM_XE_ATOMIC_CPU: Support CPU atomic operations
+ */
+ struct {
+#define DRM_XE_ATOMIC_UNDEFINED 0
+#define DRM_XE_ATOMIC_DEVICE 1
+#define DRM_XE_ATOMIC_GLOBAL 2
+#define DRM_XE_ATOMIC_CPU 3
+ /** @atomic.val: value of atomic operation */
+ __u32 val;
+
+ /** @atomic.pad: MBZ */
+ __u32 pad;
+
+ /** @atomic.reserved: Reserved */
+ __u64 reserved;
+ } atomic;
+
+ /**
+ * @pat_index: Page attribute table index
+ *
+ * Used when @type == DRM_XE_MEM_RANGE_ATTR_PAT.
+ */
+ struct {
+ /** @pat_index.val: PAT index value */
+ __u32 val;
+
+ /** @pat_index.pad: MBZ */
+ __u32 pad;
+
+ /** @pat_index.reserved: Reserved */
+ __u64 reserved;
+ } pat_index;
+ };
+
+ /** @reserved: Reserved */
+ __u64 reserved[2];
+};
+
#if defined(__cplusplus)
}
#endif
--
2.34.1
* [PATCH v6 06/26] drm/xe/vm: Add attributes struct as member of vma
From: Himal Prasad Ghimiray @ 2025-08-07 16:43 UTC
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
The attributes of an xe_vma determine the migration policy and the
encoding of the page table entries (PTEs) for that vma.
These attributes govern how memory pages are moved and how their
addresses are translated. madvise will use them to set the
behavior of the vma.
v2 (Matthew Brost)
- Add docs
v3 (Matthew Brost)
- Add uapi references
- 80 characters line wrap
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
drivers/gpu/drm/xe/xe_vm_types.h | 33 ++++++++++++++++++++++++++++++++
1 file changed, 33 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index bed6088e1bb3..5777b0e0c6a9 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -77,6 +77,33 @@ struct xe_userptr {
#endif
};
+/**
+ * struct xe_vma_mem_attr - memory attributes associated with vma
+ */
+struct xe_vma_mem_attr {
+ /** @preferred_loc: preferred memory location */
+ struct {
+ /** @preferred_loc.migration_policy: Pages migration policy */
+ u32 migration_policy;
+
+ /**
+ * @preferred_loc.devmem_fd: used for determining the pagemap_fd
+ * requested by the user. DRM_XE_PREFERRED_LOC_DEFAULT_SYSTEM and
+ * DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE mean system memory and
+ * closest device memory, respectively.
+ */
+ u32 devmem_fd;
+ } preferred_loc;
+
+ /**
+ * @atomic_access: The atomic access type for the vma
+ * See %DRM_XE_ATOMIC_UNDEFINED, %DRM_XE_ATOMIC_DEVICE,
+ * %DRM_XE_ATOMIC_GLOBAL, and %DRM_XE_ATOMIC_CPU for possible
+ * values. These are defined in uapi/drm/xe_drm.h.
+ */
+ u32 atomic_access;
+};
+
struct xe_vma {
/** @gpuva: Base GPUVA object */
struct drm_gpuva gpuva;
@@ -135,6 +162,12 @@ struct xe_vma {
* Needs to be signalled before UNMAP can be processed.
*/
struct xe_user_fence *ufence;
+
+ /**
+ * @attr: The attributes of vma which determines the migration policy
+ * and encoding of the PTEs for this vma.
+ */
+ struct xe_vma_mem_attr attr;
};
/**
--
2.34.1
* [PATCH v6 07/26] drm/xe/vma: Move pat_index to vma attributes
From: Himal Prasad Ghimiray @ 2025-08-07 16:43 UTC
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
The PAT index determines how PTEs are encoded and can be modified by
madvise. Therefore, it is now part of the vma attributes.
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_pt.c | 2 +-
drivers/gpu/drm/xe/xe_vm.c | 6 +++---
drivers/gpu/drm/xe/xe_vm_types.h | 10 +++++-----
3 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index 330cc0f54a3f..9128b35ccb3b 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -518,7 +518,7 @@ xe_pt_stage_bind_entry(struct xe_ptw *parent, pgoff_t offset,
{
struct xe_pt_stage_bind_walk *xe_walk =
container_of(walk, typeof(*xe_walk), base);
- u16 pat_index = xe_walk->vma->pat_index;
+ u16 pat_index = xe_walk->vma->attr.pat_index;
struct xe_pt *xe_parent = container_of(parent, typeof(*xe_parent), base);
struct xe_vm *vm = xe_walk->vm;
struct xe_pt *xe_child;
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 9fcc52032a1d..eba334920247 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -1223,7 +1223,7 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
if (vm->xe->info.has_atomic_enable_pte_bit)
vma->gpuva.flags |= XE_VMA_ATOMIC_PTE_BIT;
- vma->pat_index = pat_index;
+ vma->attr.pat_index = pat_index;
if (bo) {
struct drm_gpuvm_bo *vm_bo;
@@ -2679,7 +2679,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
if (op->base.remap.prev) {
vma = new_vma(vm, op->base.remap.prev,
- old->pat_index, flags);
+ old->attr.pat_index, flags);
if (IS_ERR(vma))
return PTR_ERR(vma);
@@ -2709,7 +2709,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
if (op->base.remap.next) {
vma = new_vma(vm, op->base.remap.next,
- old->pat_index, flags);
+ old->attr.pat_index, flags);
if (IS_ERR(vma))
return PTR_ERR(vma);
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 5777b0e0c6a9..c30f404a00e3 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -102,6 +102,11 @@ struct xe_vma_mem_attr {
* values. These are defined in uapi/drm/xe_drm.h.
*/
u32 atomic_access;
+
+ /**
+ * @pat_index: The pat index to use when encoding the PTEs for this vma.
+ */
+ u16 pat_index;
};
struct xe_vma {
@@ -152,11 +157,6 @@ struct xe_vma {
/** @tile_staged: bind is staged for this VMA */
u8 tile_staged;
- /**
- * @pat_index: The pat index to use when encoding the PTEs for this vma.
- */
- u16 pat_index;
-
/**
* @ufence: The user fence that was provided with MAP.
* Needs to be signalled before UNMAP can be processed.
--
2.34.1
* [PATCH v6 08/26] drm/xe/vma: Modify new_vma to accept struct xe_vma_mem_attr as parameter
From: Himal Prasad Ghimiray @ 2025-08-07 16:43 UTC
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
This change simplifies the logic by ensuring that remapped previous or
next VMAs are created with the same memory attributes as the original VMA.
By passing struct xe_vma_mem_attr as a parameter, we maintain consistency
in memory attributes.
-v2
*dst = *src (Matthew Brost)
-v3 (Matthew Brost)
Drop unnecessary helper
Pass attr ptr as input to new_vma and vma_create
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_vm.c | 24 +++++++++++++++++-------
1 file changed, 17 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index eba334920247..c5fe475b1fcc 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -1168,7 +1168,8 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
struct xe_bo *bo,
u64 bo_offset_or_userptr,
u64 start, u64 end,
- u16 pat_index, unsigned int flags)
+ struct xe_vma_mem_attr *attr,
+ unsigned int flags)
{
struct xe_vma *vma;
struct xe_tile *tile;
@@ -1223,7 +1224,7 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
if (vm->xe->info.has_atomic_enable_pte_bit)
vma->gpuva.flags |= XE_VMA_ATOMIC_PTE_BIT;
- vma->attr.pat_index = pat_index;
+ vma->attr = *attr;
if (bo) {
struct drm_gpuvm_bo *vm_bo;
@@ -2450,7 +2451,7 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
ALLOW_ERROR_INJECTION(vm_bind_ioctl_ops_create, ERRNO);
static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
- u16 pat_index, unsigned int flags)
+ struct xe_vma_mem_attr *attr, unsigned int flags)
{
struct xe_bo *bo = op->gem.obj ? gem_to_xe_bo(op->gem.obj) : NULL;
struct drm_exec exec;
@@ -2479,7 +2480,7 @@ static struct xe_vma *new_vma(struct xe_vm *vm, struct drm_gpuva_op_map *op,
}
vma = xe_vma_create(vm, bo, op->gem.offset,
op->va.addr, op->va.addr +
- op->va.range - 1, pat_index, flags);
+ op->va.range - 1, attr, flags);
if (IS_ERR(vma))
goto err_unlock;
@@ -2622,6 +2623,15 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
switch (op->base.op) {
case DRM_GPUVA_OP_MAP:
{
+ struct xe_vma_mem_attr default_attr = {
+ .preferred_loc = {
+ .devmem_fd = DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE,
+ .migration_policy = DRM_XE_MIGRATE_ALL_PAGES,
+ },
+ .atomic_access = DRM_XE_ATOMIC_UNDEFINED,
+ .pat_index = op->map.pat_index,
+ };
+
flags |= op->map.read_only ?
VMA_CREATE_FLAG_READ_ONLY : 0;
flags |= op->map.is_null ?
@@ -2631,7 +2641,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
flags |= op->map.is_cpu_addr_mirror ?
VMA_CREATE_FLAG_IS_SYSTEM_ALLOCATOR : 0;
- vma = new_vma(vm, &op->base.map, op->map.pat_index,
+ vma = new_vma(vm, &op->base.map, &default_attr,
flags);
if (IS_ERR(vma))
return PTR_ERR(vma);
@@ -2679,7 +2689,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
if (op->base.remap.prev) {
vma = new_vma(vm, op->base.remap.prev,
- old->attr.pat_index, flags);
+ &old->attr, flags);
if (IS_ERR(vma))
return PTR_ERR(vma);
@@ -2709,7 +2719,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
if (op->base.remap.next) {
vma = new_vma(vm, op->base.remap.next,
- old->attr.pat_index, flags);
+ &old->attr, flags);
if (IS_ERR(vma))
return PTR_ERR(vma);
--
2.34.1
* [PATCH v6 09/26] drm/gpusvm: Make drm_gpusvm_for_each_* macros public
From: Himal Prasad Ghimiray @ 2025-08-07 16:43 UTC
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
The drm_gpusvm_for_each_notifier, drm_gpusvm_for_each_notifier_safe and
drm_gpusvm_for_each_range_safe macros are useful for locating notifiers
and ranges within a user-specified range. Making these macros public
lets drivers use them in their own implementations, as sketched below.
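A sketch of how a driver might use the now-public iterators to visit
every range overlapping [start, end) (the Xe users arrive later in this
series):

    struct drm_gpusvm_notifier *notifier;
    struct drm_gpusvm_range *range, *next;

    drm_gpusvm_for_each_notifier(notifier, gpusvm, start, end) {
            drm_gpusvm_for_each_range_safe(range, next, notifier,
                                           start, end) {
                    /* e.g. zap PTEs or remove the range */
            }
    }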
v2 (Matthew Brost)
- drop inline __drm_gpusvm_range_find
- /s/notifier_iter_first/drm_gpusvm_notifier_find
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/drm_gpusvm.c | 122 +++++++----------------------------
include/drm/drm_gpusvm.h | 70 ++++++++++++++++++++
2 files changed, 95 insertions(+), 97 deletions(-)
diff --git a/drivers/gpu/drm/drm_gpusvm.c b/drivers/gpu/drm/drm_gpusvm.c
index 661306da6b2d..e2a9a6ae1d54 100644
--- a/drivers/gpu/drm/drm_gpusvm.c
+++ b/drivers/gpu/drm/drm_gpusvm.c
@@ -271,107 +271,50 @@ npages_in_range(unsigned long start, unsigned long end)
}
/**
- * drm_gpusvm_range_find() - Find GPU SVM range from GPU SVM notifier
- * @notifier: Pointer to the GPU SVM notifier structure.
- * @start: Start address of the range
- * @end: End address of the range
+ * drm_gpusvm_notifier_find() - Find GPU SVM notifier from GPU SVM
+ * @gpusvm: Pointer to the GPU SVM structure.
+ * @start: Start address of the notifier
+ * @end: End address of the notifier
*
- * Return: A pointer to the drm_gpusvm_range if found or NULL
+ * Return: A pointer to the drm_gpusvm_notifier if found or NULL
*/
-struct drm_gpusvm_range *
-drm_gpusvm_range_find(struct drm_gpusvm_notifier *notifier, unsigned long start,
- unsigned long end)
+struct drm_gpusvm_notifier *
+drm_gpusvm_notifier_find(struct drm_gpusvm *gpusvm, unsigned long start,
+ unsigned long end)
{
struct interval_tree_node *itree;
- itree = interval_tree_iter_first(¬ifier->root, start, end - 1);
+ itree = interval_tree_iter_first(&gpusvm->root, start, end - 1);
if (itree)
- return container_of(itree, struct drm_gpusvm_range, itree);
+ return container_of(itree, struct drm_gpusvm_notifier, itree);
else
return NULL;
}
-EXPORT_SYMBOL_GPL(drm_gpusvm_range_find);
+EXPORT_SYMBOL_GPL(drm_gpusvm_notifier_find);
/**
- * drm_gpusvm_for_each_range_safe() - Safely iterate over GPU SVM ranges in a notifier
- * @range__: Iterator variable for the ranges
- * @next__: Iterator variable for the ranges temporay storage
- * @notifier__: Pointer to the GPU SVM notifier
- * @start__: Start address of the range
- * @end__: End address of the range
- *
- * This macro is used to iterate over GPU SVM ranges in a notifier while
- * removing ranges from it.
- */
-#define drm_gpusvm_for_each_range_safe(range__, next__, notifier__, start__, end__) \
- for ((range__) = drm_gpusvm_range_find((notifier__), (start__), (end__)), \
- (next__) = __drm_gpusvm_range_next(range__); \
- (range__) && (drm_gpusvm_range_start(range__) < (end__)); \
- (range__) = (next__), (next__) = __drm_gpusvm_range_next(range__))
-
-/**
- * __drm_gpusvm_notifier_next() - get the next drm_gpusvm_notifier in the list
- * @notifier: a pointer to the current drm_gpusvm_notifier
+ * drm_gpusvm_range_find() - Find GPU SVM range from GPU SVM notifier
+ * @notifier: Pointer to the GPU SVM notifier structure.
+ * @start: Start address of the range
+ * @end: End address of the range
*
- * Return: A pointer to the next drm_gpusvm_notifier if available, or NULL if
- * the current notifier is the last one or if the input notifier is
- * NULL.
+ * Return: A pointer to the drm_gpusvm_range if found or NULL
*/
-static struct drm_gpusvm_notifier *
-__drm_gpusvm_notifier_next(struct drm_gpusvm_notifier *notifier)
-{
- if (notifier && !list_is_last(¬ifier->entry,
- ¬ifier->gpusvm->notifier_list))
- return list_next_entry(notifier, entry);
-
- return NULL;
-}
-
-static struct drm_gpusvm_notifier *
-notifier_iter_first(struct rb_root_cached *root, unsigned long start,
- unsigned long last)
+struct drm_gpusvm_range *
+drm_gpusvm_range_find(struct drm_gpusvm_notifier *notifier, unsigned long start,
+ unsigned long end)
{
struct interval_tree_node *itree;
- itree = interval_tree_iter_first(root, start, last);
+ itree = interval_tree_iter_first(¬ifier->root, start, end - 1);
if (itree)
- return container_of(itree, struct drm_gpusvm_notifier, itree);
+ return container_of(itree, struct drm_gpusvm_range, itree);
else
return NULL;
}
-
-/**
- * drm_gpusvm_for_each_notifier() - Iterate over GPU SVM notifiers in a gpusvm
- * @notifier__: Iterator variable for the notifiers
- * @notifier__: Pointer to the GPU SVM notifier
- * @start__: Start address of the notifier
- * @end__: End address of the notifier
- *
- * This macro is used to iterate over GPU SVM notifiers in a gpusvm.
- */
-#define drm_gpusvm_for_each_notifier(notifier__, gpusvm__, start__, end__) \
- for ((notifier__) = notifier_iter_first(&(gpusvm__)->root, (start__), (end__) - 1); \
- (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
- (notifier__) = __drm_gpusvm_notifier_next(notifier__))
-
-/**
- * drm_gpusvm_for_each_notifier_safe() - Safely iterate over GPU SVM notifiers in a gpusvm
- * @notifier__: Iterator variable for the notifiers
- * @next__: Iterator variable for the notifiers temporay storage
- * @notifier__: Pointer to the GPU SVM notifier
- * @start__: Start address of the notifier
- * @end__: End address of the notifier
- *
- * This macro is used to iterate over GPU SVM notifiers in a gpusvm while
- * removing notifiers from it.
- */
-#define drm_gpusvm_for_each_notifier_safe(notifier__, next__, gpusvm__, start__, end__) \
- for ((notifier__) = notifier_iter_first(&(gpusvm__)->root, (start__), (end__) - 1), \
- (next__) = __drm_gpusvm_notifier_next(notifier__); \
- (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
- (notifier__) = (next__), (next__) = __drm_gpusvm_notifier_next(notifier__))
+EXPORT_SYMBOL_GPL(drm_gpusvm_range_find);
/**
* drm_gpusvm_notifier_invalidate() - Invalidate a GPU SVM notifier.
@@ -472,22 +415,6 @@ int drm_gpusvm_init(struct drm_gpusvm *gpusvm,
}
EXPORT_SYMBOL_GPL(drm_gpusvm_init);
-/**
- * drm_gpusvm_notifier_find() - Find GPU SVM notifier
- * @gpusvm: Pointer to the GPU SVM structure
- * @fault_addr: Fault address
- *
- * This function finds the GPU SVM notifier associated with the fault address.
- *
- * Return: Pointer to the GPU SVM notifier on success, NULL otherwise.
- */
-static struct drm_gpusvm_notifier *
-drm_gpusvm_notifier_find(struct drm_gpusvm *gpusvm,
- unsigned long fault_addr)
-{
- return notifier_iter_first(&gpusvm->root, fault_addr, fault_addr + 1);
-}
-
/**
* to_drm_gpusvm_notifier() - retrieve the container struct for a given rbtree node
* @node: a pointer to the rbtree node embedded within a drm_gpusvm_notifier struct
@@ -943,7 +870,7 @@ drm_gpusvm_range_find_or_insert(struct drm_gpusvm *gpusvm,
if (!mmget_not_zero(mm))
return ERR_PTR(-EFAULT);
- notifier = drm_gpusvm_notifier_find(gpusvm, fault_addr);
+ notifier = drm_gpusvm_notifier_find(gpusvm, fault_addr, fault_addr + 1);
if (!notifier) {
notifier = drm_gpusvm_notifier_alloc(gpusvm, fault_addr);
if (IS_ERR(notifier)) {
@@ -1107,7 +1034,8 @@ void drm_gpusvm_range_remove(struct drm_gpusvm *gpusvm,
drm_gpusvm_driver_lock_held(gpusvm);
notifier = drm_gpusvm_notifier_find(gpusvm,
- drm_gpusvm_range_start(range));
+ drm_gpusvm_range_start(range),
+ drm_gpusvm_range_start(range) + 1);
if (WARN_ON_ONCE(!notifier))
return;
diff --git a/include/drm/drm_gpusvm.h b/include/drm/drm_gpusvm.h
index 8d613e9b2690..0e336148309d 100644
--- a/include/drm/drm_gpusvm.h
+++ b/include/drm/drm_gpusvm.h
@@ -282,6 +282,10 @@ void drm_gpusvm_range_unmap_pages(struct drm_gpusvm *gpusvm,
bool drm_gpusvm_has_mapping(struct drm_gpusvm *gpusvm, unsigned long start,
unsigned long end);
+struct drm_gpusvm_notifier *
+drm_gpusvm_notifier_find(struct drm_gpusvm *gpusvm, unsigned long start,
+ unsigned long end);
+
struct drm_gpusvm_range *
drm_gpusvm_range_find(struct drm_gpusvm_notifier *notifier, unsigned long start,
unsigned long end);
@@ -434,4 +438,70 @@ __drm_gpusvm_range_next(struct drm_gpusvm_range *range)
(range__) && (drm_gpusvm_range_start(range__) < (end__)); \
(range__) = __drm_gpusvm_range_next(range__))
+/**
+ * drm_gpusvm_for_each_range_safe() - Safely iterate over GPU SVM ranges in a notifier
+ * @range__: Iterator variable for the ranges
+ * @next__: Iterator variable for the ranges temporary storage
+ * @notifier__: Pointer to the GPU SVM notifier
+ * @start__: Start address of the range
+ * @end__: End address of the range
+ *
+ * This macro is used to iterate over GPU SVM ranges in a notifier while
+ * removing ranges from it.
+ */
+#define drm_gpusvm_for_each_range_safe(range__, next__, notifier__, start__, end__) \
+ for ((range__) = drm_gpusvm_range_find((notifier__), (start__), (end__)), \
+ (next__) = __drm_gpusvm_range_next(range__); \
+ (range__) && (drm_gpusvm_range_start(range__) < (end__)); \
+ (range__) = (next__), (next__) = __drm_gpusvm_range_next(range__))
+
+/**
+ * __drm_gpusvm_notifier_next() - get the next drm_gpusvm_notifier in the list
+ * @notifier: a pointer to the current drm_gpusvm_notifier
+ *
+ * Return: A pointer to the next drm_gpusvm_notifier if available, or NULL if
+ * the current notifier is the last one or if the input notifier is
+ * NULL.
+ */
+static inline struct drm_gpusvm_notifier *
+__drm_gpusvm_notifier_next(struct drm_gpusvm_notifier *notifier)
+{
+ if (notifier && !list_is_last(&notifier->entry,
+ &notifier->gpusvm->notifier_list))
+ return list_next_entry(notifier, entry);
+
+ return NULL;
+}
+
+/**
+ * drm_gpusvm_for_each_notifier() - Iterate over GPU SVM notifiers in a gpusvm
+ * @notifier__: Iterator variable for the notifiers
+ * @gpusvm__: Pointer to the GPU SVM structure
+ * @start__: Start address of the notifier
+ * @end__: End address of the notifier
+ *
+ * This macro is used to iterate over GPU SVM notifiers in a gpusvm.
+ */
+#define drm_gpusvm_for_each_notifier(notifier__, gpusvm__, start__, end__) \
+ for ((notifier__) = drm_gpusvm_notifier_find((gpusvm__), (start__), (end__)); \
+ (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
+ (notifier__) = __drm_gpusvm_notifier_next(notifier__))
+
+/**
+ * drm_gpusvm_for_each_notifier_safe() - Safely iterate over GPU SVM notifiers in a gpusvm
+ * @notifier__: Iterator variable for the notifiers
+ * @next__: Iterator variable for the notifiers temporary storage
+ * @gpusvm__: Pointer to the GPU SVM structure
+ * @start__: Start address of the notifier
+ * @end__: End address of the notifier
+ *
+ * This macro is used to iterate over GPU SVM notifiers in a gpusvm while
+ * removing notifiers from it.
+ */
+#define drm_gpusvm_for_each_notifier_safe(notifier__, next__, gpusvm__, start__, end__) \
+ for ((notifier__) = drm_gpusvm_notifier_find((gpusvm__), (start__), (end__)), \
+ (next__) = __drm_gpusvm_notifier_next(notifier__); \
+ (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
+ (notifier__) = (next__), (next__) = __drm_gpusvm_notifier_next(notifier__))
+
#endif /* __DRM_GPUSVM_H__ */
--
2.34.1
^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH v6 10/26] drm/xe/svm: Split system allocator vma incase of madvise call
2025-08-07 16:43 [PATCH v6 00/26] MADVISE FOR XE Himal Prasad Ghimiray
` (8 preceding siblings ...)
2025-08-07 16:43 ` [PATCH v6 09/26] drm/gpusvm: Make drm_gpusvm_for_each_* macros public Himal Prasad Ghimiray
@ 2025-08-07 16:43 ` Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 11/26] drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for madvise Himal Prasad Ghimiray
` (20 subsequent siblings)
30 siblings, 0 replies; 51+ messages in thread
From: Himal Prasad Ghimiray @ 2025-08-07 16:43 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
If the start or end of the input address range lies within a system
allocator VMA, split the VMA to create new VMAs matching the input range.
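For illustration, a minimal standalone sketch of the split arithmetic,
assuming a madvise range strictly inside one system allocator VMA
(addresses are made up; the real split is carried out through the
drm_gpuvm REMAP/MAP ops shown below):

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
	uint64_t vma_start = 0x00000, vma_end = 0x30000; /* existing VMA */
	uint64_t start = 0x10000, end = 0x20000;         /* madvise range */

	if (start > vma_start)	/* left piece keeps the old attributes */
		printf("keep  [%#" PRIx64 ", %#" PRIx64 ")\n", vma_start, start);
	/* middle piece is re-created and gets the madvise attributes */
	printf("remap [%#" PRIx64 ", %#" PRIx64 ")\n", start, end);
	if (end < vma_end)	/* right piece keeps the old attributes */
		printf("keep  [%#" PRIx64 ", %#" PRIx64 ")\n", end, vma_end);
	return 0;
}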
v2 (Matthew Brost)
- Add lockdep_assert_write for vm->lock
- Remove unnecessary page aligned checks
- Add kernel-doc and comments
- Remove unnecessary unwind_ops and return
v3
- Fix copying of attributes
v4
- Nit fixes
v5
- Squash identifier for madvise in xe_vma_ops to this patch
v6/v7
- Rebase on drm_gpuvm changes
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_vm.c | 109 +++++++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_vm.h | 2 +
drivers/gpu/drm/xe/xe_vm_types.h | 1 +
3 files changed, 112 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index c5fe475b1fcc..d43de3101d4f 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -4178,3 +4178,112 @@ void xe_vm_snapshot_free(struct xe_vm_snapshot *snap)
}
kvfree(snap);
}
+
+/**
+ * xe_vm_alloc_madvise_vma - Allocate VMAs with madvise ops
+ * @vm: Pointer to the xe_vm structure
+ * @start: Starting input address
+ * @range: Size of the input range
+ *
+ * This function splits existing VMAs to create new VMAs for the
+ * user-provided input range
+ *
+ * Return: 0 on success, negative error code on failure
+ */
+int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
+{
+ struct drm_gpuvm_map_req map_req = {
+ .op_map.va.addr = start,
+ .op_map.va.range = range,
+ .flags = DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE,
+ };
+
+ struct xe_vma_ops vops;
+ struct drm_gpuva_ops *ops = NULL;
+ struct drm_gpuva_op *__op;
+ bool is_cpu_addr_mirror = false;
+ bool remap_op = false;
+ struct xe_vma_mem_attr tmp_attr;
+ int err;
+
+ lockdep_assert_held_write(&vm->lock);
+
+ vm_dbg(&vm->xe->drm, "MADVISE_OPS_CREATE: addr=0x%016llx, size=0x%016llx", start, range);
+ ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, &map_req);
+ if (IS_ERR(ops))
+ return PTR_ERR(ops);
+
+ if (list_empty(&ops->list)) {
+ err = 0;
+ goto free_ops;
+ }
+
+ drm_gpuva_for_each_op(__op, ops) {
+ struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
+
+ if (__op->op == DRM_GPUVA_OP_REMAP) {
+ xe_assert(vm->xe, !remap_op);
+ remap_op = true;
+
+ if (xe_vma_is_cpu_addr_mirror(gpuva_to_vma(op->base.remap.unmap->va)))
+ is_cpu_addr_mirror = true;
+ else
+ is_cpu_addr_mirror = false;
+ }
+
+ if (__op->op == DRM_GPUVA_OP_MAP) {
+ xe_assert(vm->xe, remap_op);
+ remap_op = false;
+
+ /* For madvise ops DRM_GPUVA_OP_MAP always comes after
+ * DRM_GPUVA_OP_REMAP, so set op->map.is_cpu_addr_mirror to true
+ * if the REMAP is for an xe_vma_is_cpu_addr_mirror() vma.
+ */
+ op->map.is_cpu_addr_mirror = is_cpu_addr_mirror;
+ }
+
+ print_op(vm->xe, __op);
+ }
+
+ xe_vma_ops_init(&vops, vm, NULL, NULL, 0);
+ vops.flags |= XE_VMA_OPS_FLAG_MADVISE;
+ err = vm_bind_ioctl_ops_parse(vm, ops, &vops);
+ if (err)
+ goto unwind_ops;
+
+ xe_vm_lock(vm, false);
+
+ drm_gpuva_for_each_op(__op, ops) {
+ struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
+ struct xe_vma *vma;
+
+ if (__op->op == DRM_GPUVA_OP_UNMAP) {
+ /* There should be no unmap */
+ XE_WARN_ON("UNEXPECTED UNMAP");
+ xe_vma_destroy(gpuva_to_vma(op->base.unmap.va), NULL);
+ } else if (__op->op == DRM_GPUVA_OP_REMAP) {
+ vma = gpuva_to_vma(op->base.remap.unmap->va);
+ /* Store attributes of the VMA being unmapped by REMAP so they
+ * can be assigned to the VMA newly created by MAP.
+ */
+ tmp_attr = vma->attr;
+ xe_vma_destroy(gpuva_to_vma(op->base.remap.unmap->va), NULL);
+ } else if (__op->op == DRM_GPUVA_OP_MAP) {
+ vma = op->map.vma;
+ /* In a madvise call, MAP always follows REMAP, so tmp_attr
+ * always holds sane values, making it safe to copy them to
+ * the new vma.
+ */
+ vma->attr = tmp_attr;
+ }
+ }
+
+ xe_vm_unlock(vm);
+ drm_gpuva_ops_free(&vm->gpuvm, ops);
+ return 0;
+
+unwind_ops:
+ vm_bind_ioctl_ops_unwind(vm, &ops, 1);
+free_ops:
+ drm_gpuva_ops_free(&vm->gpuvm, ops);
+ return err;
+}
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 3475a118f666..0d6b08cc4163 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -171,6 +171,8 @@ static inline bool xe_vma_is_userptr(struct xe_vma *vma)
struct xe_vma *xe_vm_find_vma_by_addr(struct xe_vm *vm, u64 page_addr);
+int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t addr, uint64_t size);
+
/**
* to_userptr_vma() - Return a pointer to an embedding userptr vma
* @vma: Pointer to the embedded struct xe_vma
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index c30f404a00e3..cd94d8b5819d 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -495,6 +495,7 @@ struct xe_vma_ops {
struct xe_vm_pgtable_update_ops pt_update_ops[XE_MAX_TILES_PER_DEVICE];
/** @flag: signify the properties within xe_vma_ops*/
#define XE_VMA_OPS_FLAG_HAS_SVM_PREFETCH BIT(0)
+#define XE_VMA_OPS_FLAG_MADVISE BIT(1)
u32 flags;
#ifdef TEST_VM_OPS_ERROR
/** @inject_error: inject error to test error handling */
--
2.34.1
^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH v6 11/26] drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for madvise
2025-08-07 16:43 [PATCH v6 00/26] MADVISE FOR XE Himal Prasad Ghimiray
` (9 preceding siblings ...)
2025-08-07 16:43 ` [PATCH v6 10/26] drm/xe/svm: Split system allocator vma incase of madvise call Himal Prasad Ghimiray
@ 2025-08-07 16:43 ` Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 12/26] drm/xe/svm: Add xe_svm_ranges_zap_ptes_in_range() for PTE zapping Himal Prasad Ghimiray
` (19 subsequent siblings)
30 siblings, 0 replies; 51+ messages in thread
From: Himal Prasad Ghimiray @ 2025-08-07 16:43 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
In the case of the MADVISE ioctl, if the start or end addresses fall
within a VMA and existing SVM ranges are present, remove the existing
SVM mappings. Then continue with ops_parse, which creates new VMAs by
REMAP-unmapping the old one.
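A minimal sketch of the teardown criterion used by this patch, assuming
half-open [start, end) semantics: a range is torn down only if a madvise
boundary cuts through it; ranges fully inside the input range are left
alone:

/* Sketch: true if the input range [start, end) only partially covers
 * the SVM range [r_start, r_end), i.e. a boundary cuts through it.
 */
static bool range_partially_covered(u64 start, u64 end,
				    u64 r_start, u64 r_end)
{
	return start > r_start || end < r_end;
}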
v2 (Matthew Brost)
- Use vops flag to call unmapping of ranges in vm_bind_ioctl_ops_parse
- Rename the function
v3
- Fix doc
v4
- check if range is already in garbage collector (Matthew Brost)
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_svm.c | 35 +++++++++++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_svm.h | 7 +++++++
drivers/gpu/drm/xe/xe_vm.c | 8 ++++++--
3 files changed, 48 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index e35c6d4def20..ce42100cb753 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -932,6 +932,41 @@ bool xe_svm_has_mapping(struct xe_vm *vm, u64 start, u64 end)
return drm_gpusvm_has_mapping(&vm->svm.gpusvm, start, end);
}
+/**
+ * xe_svm_unmap_address_range - UNMAP SVM mappings and ranges
+ * @vm: The VM
+ * @start: start address
+ * @end: end address
+ *
+ * This function unmaps SVM ranges if the start or end address lies inside them.
+ */
+void xe_svm_unmap_address_range(struct xe_vm *vm, u64 start, u64 end)
+{
+ struct drm_gpusvm_notifier *notifier, *next;
+
+ lockdep_assert_held_write(&vm->lock);
+
+ drm_gpusvm_for_each_notifier_safe(notifier, next, &vm->svm.gpusvm, start, end) {
+ struct drm_gpusvm_range *range, *__next;
+
+ drm_gpusvm_for_each_range_safe(range, __next, notifier, start, end) {
+ if (start > drm_gpusvm_range_start(range) ||
+ end < drm_gpusvm_range_end(range)) {
+ if (IS_DGFX(vm->xe) && xe_svm_range_in_vram(to_xe_range(range)))
+ drm_gpusvm_range_evict(&vm->svm.gpusvm, range);
+ drm_gpusvm_range_get(range);
+ __xe_svm_garbage_collector(vm, to_xe_range(range));
+ if (!list_empty(&to_xe_range(range)->garbage_collector_link)) {
+ spin_lock(&vm->svm.garbage_collector.lock);
+ list_del(&to_xe_range(range)->garbage_collector_link);
+ spin_unlock(&vm->svm.garbage_collector.lock);
+ }
+ drm_gpusvm_range_put(range);
+ }
+ }
+ }
+}
+
/**
* xe_svm_bo_evict() - SVM evict BO to system memory
* @bo: BO to evict
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index 4bdccb56d25f..184b3f4f0b2a 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -90,6 +90,8 @@ bool xe_svm_range_validate(struct xe_vm *vm,
u64 xe_svm_find_vma_start(struct xe_vm *vm, u64 addr, u64 end, struct xe_vma *vma);
+void xe_svm_unmap_address_range(struct xe_vm *vm, u64 start, u64 end);
+
/**
* xe_svm_range_has_dma_mapping() - SVM range has DMA mapping
* @range: SVM range
@@ -303,6 +305,11 @@ u64 xe_svm_find_vma_start(struct xe_vm *vm, u64 addr, u64 end, struct xe_vma *vm
return ULONG_MAX;
}
+static inline
+void xe_svm_unmap_address_range(struct xe_vm *vm, u64 start, u64 end)
+{
+}
+
#define xe_svm_assert_in_notifier(...) do {} while (0)
#define xe_svm_range_has_dma_mapping(...) false
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index d43de3101d4f..376850a22be2 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2669,8 +2669,12 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
end = op->base.remap.next->va.addr;
if (xe_vma_is_cpu_addr_mirror(old) &&
- xe_svm_has_mapping(vm, start, end))
- return -EBUSY;
+ xe_svm_has_mapping(vm, start, end)) {
+ if (vops->flags & XE_VMA_OPS_FLAG_MADVISE)
+ xe_svm_unmap_address_range(vm, start, end);
+ else
+ return -EBUSY;
+ }
op->remap.start = xe_vma_start(old);
op->remap.range = xe_vma_size(old);
--
2.34.1
^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH v6 12/26] drm/xe/svm: Add xe_svm_ranges_zap_ptes_in_range() for PTE zapping
2025-08-07 16:43 [PATCH v6 00/26] MADVISE FOR XE Himal Prasad Ghimiray
` (10 preceding siblings ...)
2025-08-07 16:43 ` [PATCH v6 11/26] drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for madvise Himal Prasad Ghimiray
@ 2025-08-07 16:43 ` Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 13/26] drm/xe: Implement madvise ioctl for xe Himal Prasad Ghimiray
` (18 subsequent siblings)
30 siblings, 0 replies; 51+ messages in thread
From: Himal Prasad Ghimiray @ 2025-08-07 16:43 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
Introduce xe_svm_ranges_zap_ptes_in_range(), a function to zap page table
entries (PTEs) for all SVM ranges within a user-specified address range.
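The intended call pattern is zap first, then a ranged TLB invalidation
driven by the returned tile mask (a sketch; the actual consumer is the
madvise ioctl added later in this series):

u8 tile_mask = xe_svm_ranges_zap_ptes_in_range(vm, start, end);

if (tile_mask) {
	xe_device_wmb(vm->xe);
	xe_vm_range_tilemask_tlb_invalidation(vm, start, end, tile_mask);
}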
-v2 (Matthew Brost)
Lock should be called even for tlb_invalidation
v3(Matthew Brost)
- Update comment
- s/notifier->itree.start/drm_gpusvm_notifier_start
- s/notifier->itree.last + 1/drm_gpusvm_notifier_end
- use WRITE_ONCE
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_pt.c | 14 ++++++++++-
drivers/gpu/drm/xe/xe_svm.c | 50 +++++++++++++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_svm.h | 8 ++++++
3 files changed, 71 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index 9128b35ccb3b..593fef438cd8 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -950,7 +950,19 @@ bool xe_pt_zap_ptes_range(struct xe_tile *tile, struct xe_vm *vm,
struct xe_pt *pt = vm->pt_root[tile->id];
u8 pt_mask = (range->tile_present & ~range->tile_invalidated);
- xe_svm_assert_in_notifier(vm);
+ /*
+ * Locking rules:
+ *
+ * - notifier_lock (write): full protection against page table changes
+ * and MMU notifier invalidations.
+ *
+ * - notifier_lock (read) + vm_lock (write): combined protection against
+ * invalidations and concurrent page table modifications. (e.g., madvise)
+ *
+ */
+ lockdep_assert(lockdep_is_held_type(&vm->svm.gpusvm.notifier_lock, 0) ||
+ (lockdep_is_held_type(&vm->svm.gpusvm.notifier_lock, 1) &&
+ lockdep_is_held_type(&vm->lock, 0)));
if (!(pt_mask & BIT(tile->id)))
return false;
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index ce42100cb753..c2306000f15e 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -1031,6 +1031,56 @@ int xe_svm_range_get_pages(struct xe_vm *vm, struct xe_svm_range *range,
return err;
}
+/**
+ * xe_svm_ranges_zap_ptes_in_range - clear ptes of svm ranges in input range
+ * @vm: Pointer to the xe_vm structure
+ * @start: Start of the input range
+ * @end: End of the input range
+ *
+ * This function removes the page table entries (PTEs) associated
+ * with the svm ranges within the given input start and end
+ *
+ * Return: tile_mask of tiles that need TLB invalidation.
+ */
+u8 xe_svm_ranges_zap_ptes_in_range(struct xe_vm *vm, u64 start, u64 end)
+{
+ struct drm_gpusvm_notifier *notifier;
+ struct xe_svm_range *range;
+ u64 adj_start, adj_end;
+ struct xe_tile *tile;
+ u8 tile_mask = 0;
+ u8 id;
+
+ lockdep_assert(lockdep_is_held_type(&vm->svm.gpusvm.notifier_lock, 1) &&
+ lockdep_is_held_type(&vm->lock, 0));
+
+ drm_gpusvm_for_each_notifier(notifier, &vm->svm.gpusvm, start, end) {
+ struct drm_gpusvm_range *r = NULL;
+
+ adj_start = max(start, drm_gpusvm_notifier_start(notifier));
+ adj_end = min(end, drm_gpusvm_notifier_end(notifier));
+ drm_gpusvm_for_each_range(r, notifier, adj_start, adj_end) {
+ range = to_xe_range(r);
+ for_each_tile(tile, vm->xe, id) {
+ if (xe_pt_zap_ptes_range(tile, vm, range)) {
+ tile_mask |= BIT(id);
+ /*
+ * WRITE_ONCE pairs with READ_ONCE in
+ * xe_vm_has_valid_gpu_mapping().
+ * Must not fail after setting
+ * tile_invalidated and before
+ * TLB invalidation.
+ */
+ WRITE_ONCE(range->tile_invalidated,
+ range->tile_invalidated | BIT(id));
+ }
+ }
+ }
+ }
+
+ return tile_mask;
+}
+
#if IS_ENABLED(CONFIG_DRM_XE_PAGEMAP)
static struct drm_pagemap *tile_local_pagemap(struct xe_tile *tile)
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index 184b3f4f0b2a..046a9c4e95c2 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -92,6 +92,8 @@ u64 xe_svm_find_vma_start(struct xe_vm *vm, u64 addr, u64 end, struct xe_vma *v
void xe_svm_unmap_address_range(struct xe_vm *vm, u64 start, u64 end);
+u8 xe_svm_ranges_zap_ptes_in_range(struct xe_vm *vm, u64 start, u64 end);
+
/**
* xe_svm_range_has_dma_mapping() - SVM range has DMA mapping
* @range: SVM range
@@ -310,6 +312,12 @@ void xe_svm_unmap_address_range(struct xe_vm *vm, u64 start, u64 end)
{
}
+static inline
+u8 xe_svm_ranges_zap_ptes_in_range(struct xe_vm *vm, u64 start, u64 end)
+{
+ return 0;
+}
+
#define xe_svm_assert_in_notifier(...) do {} while (0)
#define xe_svm_range_has_dma_mapping(...) false
--
2.34.1
^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH v6 13/26] drm/xe: Implement madvise ioctl for xe
2025-08-07 16:43 [PATCH v6 00/26] MADVISE FOR XE Himal Prasad Ghimiray
` (11 preceding siblings ...)
2025-08-07 16:43 ` [PATCH v6 12/26] drm/xe/svm: Add xe_svm_ranges_zap_ptes_in_range() for PTE zapping Himal Prasad Ghimiray
@ 2025-08-07 16:43 ` Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 14/26] drm/xe/svm : Add svm ranges migration policy on atomic access Himal Prasad Ghimiray
` (17 subsequent siblings)
30 siblings, 0 replies; 51+ messages in thread
From: Himal Prasad Ghimiray @ 2025-08-07 16:43 UTC (permalink / raw)
To: intel-xe
Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray,
Shuicheng Lin
This driver-specific ioctl enables UMDs to control the memory attributes
for GPU VMAs within a specified input range. If the start or end
addresses fall within an existing VMA, the VMA is split accordingly. The
attributes of the VMA are modified as provided by the user. The old
mappings of the VMAs are invalidated, and TLB invalidation is performed
if necessary.
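A hypothetical userspace sketch of the call; the ioctl name and exact
struct layout are assumed from this series' uapi patches, not verified
here (fd and vm_id come from device open and vm create respectively):

#include <sys/ioctl.h>
#include <drm/xe_drm.h>

struct drm_xe_madvise args = {
	.vm_id = vm_id,
	.start = 0x180000,	/* must be 4K aligned */
	.range = 0x40000,	/* must be 4K aligned and >= 4K */
	.type  = DRM_XE_MEM_RANGE_ATTR_ATOMIC,
	.atomic.val = DRM_XE_ATOMIC_DEVICE,
};

if (ioctl(fd, DRM_IOCTL_XE_MADVISE, &args))
	perror("DRM_IOCTL_XE_MADVISE");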
v2 (Matthew Brost)
- xe_vm_in_fault_mode can't be enabled by Mesa, hence allow ioctl in non
fault mode too
- fix tlb invalidation skip for same ranges in multiple op
- use helper for tlb invalidation
- use xe_svm_notifier_lock/unlock helper
- s/lockdep_assert_held/lockdep_assert_held_write
- Add kernel-doc
v3(Matthew Brost)
- make vfunc fail safe
- Add sanitizing input args before vfunc
v4(Matthew Brost/Shuicheng)
- Make locks interruptable
- Error handling fixes
- vm_put fixes
v5(Matthew Brost)
- Flush garbage collector before any locking.
- Add check for null vma
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Shuicheng Lin <shuicheng.lin@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/Makefile | 1 +
drivers/gpu/drm/xe/xe_vm_madvise.c | 308 +++++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_vm_madvise.h | 15 ++
3 files changed, 324 insertions(+)
create mode 100644 drivers/gpu/drm/xe/xe_vm_madvise.c
create mode 100644 drivers/gpu/drm/xe/xe_vm_madvise.h
diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index 8e0c3412a757..d0ea869fcd24 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -128,6 +128,7 @@ xe-y += xe_bb.o \
xe_uc.o \
xe_uc_fw.o \
xe_vm.o \
+ xe_vm_madvise.o \
xe_vram.o \
xe_vram_freq.o \
xe_vsec.o \
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
new file mode 100644
index 000000000000..b861c3349b0a
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -0,0 +1,308 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#include "xe_vm_madvise.h"
+
+#include <linux/nospec.h>
+#include <drm/xe_drm.h>
+
+#include "xe_bo.h"
+#include "xe_pt.h"
+#include "xe_svm.h"
+
+struct xe_vmas_in_madvise_range {
+ u64 addr;
+ u64 range;
+ struct xe_vma **vmas;
+ int num_vmas;
+ bool has_svm_vmas;
+ bool has_bo_vmas;
+ bool has_userptr_vmas;
+};
+
+static int get_vmas(struct xe_vm *vm, struct xe_vmas_in_madvise_range *madvise_range)
+{
+ u64 addr = madvise_range->addr;
+ u64 range = madvise_range->range;
+
+ struct xe_vma **__vmas;
+ struct drm_gpuva *gpuva;
+ int max_vmas = 8;
+
+ lockdep_assert_held(&vm->lock);
+
+ madvise_range->num_vmas = 0;
+ madvise_range->vmas = kmalloc_array(max_vmas, sizeof(*madvise_range->vmas), GFP_KERNEL);
+ if (!madvise_range->vmas)
+ return -ENOMEM;
+
+ vm_dbg(&vm->xe->drm, "VMA's in range: start=0x%016llx, end=0x%016llx", addr, addr + range);
+
+ drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, addr, addr + range) {
+ struct xe_vma *vma = gpuva_to_vma(gpuva);
+
+ if (xe_vma_bo(vma))
+ madvise_range->has_bo_vmas = true;
+ else if (xe_vma_is_cpu_addr_mirror(vma))
+ madvise_range->has_svm_vmas = true;
+ else if (xe_vma_is_userptr(vma))
+ madvise_range->has_userptr_vmas = true;
+
+ if (madvise_range->num_vmas == max_vmas) {
+ max_vmas <<= 1;
+ __vmas = krealloc(madvise_range->vmas,
+ max_vmas * sizeof(*madvise_range->vmas),
+ GFP_KERNEL);
+ if (!__vmas) {
+ kfree(madvise_range->vmas);
+ return -ENOMEM;
+ }
+ madvise_range->vmas = __vmas;
+ }
+
+ madvise_range->vmas[madvise_range->num_vmas] = vma;
+ (madvise_range->num_vmas)++;
+ }
+
+ if (!madvise_range->num_vmas)
+ kfree(madvise_range->vmas);
+
+ vm_dbg(&vm->xe->drm, "madvise_range-num_vmas = %d\n", madvise_range->num_vmas);
+
+ return 0;
+}
+
+static void madvise_preferred_mem_loc(struct xe_device *xe, struct xe_vm *vm,
+ struct xe_vma **vmas, int num_vmas,
+ struct drm_xe_madvise *op)
+{
+ /* Implementation pending */
+}
+
+static void madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
+ struct xe_vma **vmas, int num_vmas,
+ struct drm_xe_madvise *op)
+{
+ /* Implementation pending */
+}
+
+static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
+ struct xe_vma **vmas, int num_vmas,
+ struct drm_xe_madvise *op)
+{
+ /* Implementation pending */
+}
+
+typedef void (*madvise_func)(struct xe_device *xe, struct xe_vm *vm,
+ struct xe_vma **vmas, int num_vmas,
+ struct drm_xe_madvise *op);
+
+static const madvise_func madvise_funcs[] = {
+ [DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC] = madvise_preferred_mem_loc,
+ [DRM_XE_MEM_RANGE_ATTR_ATOMIC] = madvise_atomic,
+ [DRM_XE_MEM_RANGE_ATTR_PAT] = madvise_pat_index,
+};
+
+static u8 xe_zap_ptes_in_madvise_range(struct xe_vm *vm, u64 start, u64 end)
+{
+ struct drm_gpuva *gpuva;
+ struct xe_tile *tile;
+ u8 id, tile_mask;
+
+ lockdep_assert_held_write(&vm->lock);
+
+ /* Wait for pending binds */
+ if (dma_resv_wait_timeout(xe_vm_resv(vm), DMA_RESV_USAGE_BOOKKEEP,
+ false, MAX_SCHEDULE_TIMEOUT) <= 0)
+ XE_WARN_ON(1);
+
+ tile_mask = xe_svm_ranges_zap_ptes_in_range(vm, start, end);
+
+ drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, start, end) {
+ struct xe_vma *vma = gpuva_to_vma(gpuva);
+
+ if (xe_vma_is_cpu_addr_mirror(vma) || xe_vma_is_null(vma))
+ continue;
+
+ for_each_tile(tile, vm->xe, id) {
+ if (xe_pt_zap_ptes(tile, vma)) {
+ tile_mask |= BIT(id);
+
+ /*
+ * WRITE_ONCE pairs with READ_ONCE
+ * in xe_vm_has_valid_gpu_mapping()
+ */
+ WRITE_ONCE(vma->tile_invalidated,
+ vma->tile_invalidated | BIT(id));
+ }
+ }
+ }
+
+ return tile_mask;
+}
+
+static int xe_vm_invalidate_madvise_range(struct xe_vm *vm, u64 start, u64 end)
+{
+ u8 tile_mask = xe_zap_ptes_in_madvise_range(vm, start, end);
+
+ if (!tile_mask)
+ return 0;
+
+ xe_device_wmb(vm->xe);
+
+ return xe_vm_range_tilemask_tlb_invalidation(vm, start, end, tile_mask);
+}
+
+static bool madvise_args_are_sane(struct xe_device *xe, const struct drm_xe_madvise *args)
+{
+ if (XE_IOCTL_DBG(xe, !args))
+ return false;
+
+ if (XE_IOCTL_DBG(xe, !IS_ALIGNED(args->start, SZ_4K)))
+ return false;
+
+ if (XE_IOCTL_DBG(xe, !IS_ALIGNED(args->range, SZ_4K)))
+ return false;
+
+ if (XE_IOCTL_DBG(xe, args->range < SZ_4K))
+ return false;
+
+ switch (args->type) {
+ case DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC:
+ if (XE_IOCTL_DBG(xe, args->preferred_mem_loc.migration_policy >
+ DRM_XE_MIGRATE_ONLY_SYSTEM_PAGES))
+ return false;
+
+ if (XE_IOCTL_DBG(xe, args->preferred_mem_loc.pad))
+ return false;
+
+ if (XE_IOCTL_DBG(xe, args->atomic.reserved))
+ return false;
+ break;
+ case DRM_XE_MEM_RANGE_ATTR_ATOMIC:
+ if (XE_IOCTL_DBG(xe, args->atomic.val > DRM_XE_ATOMIC_CPU))
+ return false;
+
+ if (XE_IOCTL_DBG(xe, args->atomic.pad))
+ return false;
+
+ if (XE_IOCTL_DBG(xe, args->atomic.reserved))
+ return false;
+
+ break;
+ case DRM_XE_MEM_RANGE_ATTR_PAT:
+ /*TODO: Add valid pat check */
+ break;
+ default:
+ if (XE_IOCTL_DBG(xe, 1))
+ return false;
+ }
+
+ if (XE_IOCTL_DBG(xe, args->reserved[0] || args->reserved[1]))
+ return false;
+
+ return true;
+}
+
+/**
+ * xe_vm_madvise_ioctl - Handle madvise ioctl for a VM
+ * @dev: DRM device pointer
+ * @data: Pointer to ioctl data (drm_xe_madvise*)
+ * @file: DRM file pointer
+ *
+ * Handles the madvise ioctl to provide memory advice for VMAs within
+ * input range.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+{
+ struct xe_device *xe = to_xe_device(dev);
+ struct xe_file *xef = to_xe_file(file);
+ struct drm_xe_madvise *args = data;
+ struct xe_vmas_in_madvise_range madvise_range = {.addr = args->start,
+ .range = args->range, };
+ struct xe_vm *vm;
+ struct drm_exec exec;
+ int err, attr_type;
+
+ vm = xe_vm_lookup(xef, args->vm_id);
+ if (XE_IOCTL_DBG(xe, !vm))
+ return -EINVAL;
+
+ if (!madvise_args_are_sane(vm->xe, args)) {
+ err = -EINVAL;
+ goto put_vm;
+ }
+
+ xe_svm_flush(vm);
+
+ err = down_write_killable(&vm->lock);
+ if (err)
+ goto put_vm;
+
+ if (XE_IOCTL_DBG(xe, xe_vm_is_closed_or_banned(vm))) {
+ err = -ENOENT;
+ goto unlock_vm;
+ }
+
+ err = xe_vm_alloc_madvise_vma(vm, args->start, args->range);
+ if (err)
+ goto unlock_vm;
+
+ err = get_vmas(vm, &madvise_range);
+ if (err || !madvise_range.num_vmas)
+ goto unlock_vm;
+
+ if (madvise_range.has_bo_vmas) {
+ drm_exec_init(&exec, DRM_EXEC_IGNORE_DUPLICATES | DRM_EXEC_INTERRUPTIBLE_WAIT, 0);
+ drm_exec_until_all_locked(&exec) {
+ for (int i = 0; i < madvise_range.num_vmas; i++) {
+ struct xe_bo *bo = xe_vma_bo(madvise_range.vmas[i]);
+
+ if (!bo)
+ continue;
+ err = drm_exec_lock_obj(&exec, &bo->ttm.base);
+ drm_exec_retry_on_contention(&exec);
+ if (err)
+ goto err_fini;
+ }
+ }
+ }
+
+ if (madvise_range.has_userptr_vmas) {
+ err = down_read_interruptible(&vm->userptr.notifier_lock);
+ if (err)
+ goto err_fini;
+ }
+
+ if (madvise_range.has_svm_vmas) {
+ err = down_read_interruptible(&vm->svm.gpusvm.notifier_lock);
+ if (err)
+ goto unlock_userptr;
+ }
+
+ attr_type = array_index_nospec(args->type, ARRAY_SIZE(madvise_funcs));
+ madvise_funcs[attr_type](xe, vm, madvise_range.vmas, madvise_range.num_vmas, args);
+
+ err = xe_vm_invalidate_madvise_range(vm, args->start, args->start + args->range);
+
+ if (madvise_range.has_svm_vmas)
+ xe_svm_notifier_unlock(vm);
+
+unlock_userptr:
+ if (madvise_range.has_userptr_vmas)
+ up_read(&vm->userptr.notifier_lock);
+err_fini:
+ if (madvise_range.has_bo_vmas)
+ drm_exec_fini(&exec);
+ kfree(madvise_range.vmas);
+ madvise_range.vmas = NULL;
+unlock_vm:
+ up_write(&vm->lock);
+put_vm:
+ xe_vm_put(vm);
+ return err;
+}
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.h b/drivers/gpu/drm/xe/xe_vm_madvise.h
new file mode 100644
index 000000000000..b0e1fc445f23
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2025 Intel Corporation
+ */
+
+#ifndef _XE_VM_MADVISE_H_
+#define _XE_VM_MADVISE_H_
+
+struct drm_device;
+struct drm_file;
+
+int xe_vm_madvise_ioctl(struct drm_device *dev, void *data,
+ struct drm_file *file);
+
+#endif
--
2.34.1
^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH v6 14/26] drm/xe/svm : Add svm ranges migration policy on atomic access
2025-08-07 16:43 [PATCH v6 00/26] MADVISE FOR XE Himal Prasad Ghimiray
` (12 preceding siblings ...)
2025-08-07 16:43 ` [PATCH v6 13/26] drm/xe: Implement madvise ioctl for xe Himal Prasad Ghimiray
@ 2025-08-07 16:43 ` Himal Prasad Ghimiray
2025-08-08 5:42 ` Matthew Brost
2025-08-07 16:43 ` [PATCH v6 15/26] drm/xe/madvise: Update migration policy based on preferred location Himal Prasad Ghimiray
` (16 subsequent siblings)
30 siblings, 1 reply; 51+ messages in thread
From: Himal Prasad Ghimiray @ 2025-08-07 16:43 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
If the platform does not support atomic access on system memory, and the
ranges are in system memory, but the user requires atomic accesses on
the VMA, then migrate the ranges to VRAM. Apply this policy for prefetch
operations as well.
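Condensed, the policy implemented by the new xe_vma_need_vram_for_atomic()
helper (see below) for a GPU atomic fault on a dGPU amounts to the
following; non-atomic faults and iGPUs return 0 early:

switch (vma->attr.atomic_access) {
case DRM_XE_ATOMIC_DEVICE:
	/* VRAM needed only if the device lacks atomics on smem */
	return !xe->info.has_device_atomics_on_smem;
case DRM_XE_ATOMIC_CPU:
	return -EACCES;		/* GPU atomics are disallowed entirely */
case DRM_XE_ATOMIC_UNDEFINED:
case DRM_XE_ATOMIC_GLOBAL:
default:
	return 1;		/* migrate to VRAM */
}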
v2
- Drop unnecessary vm_dbg
v3 (Matthew Brost)
- fix atomic policy
- prefetch shouldn't have any impact of atomic
- bo can be accessed from vma, avoid duplicate parameter
v4 (Matthew Brost)
- Remove TODO comment
- Fix comment
- Dont allow gpu atomic ops when user is setting atomic attr as CPU
v5 (Matthew Brost)
- Fix atomic checks
- Add userptr checks
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_pt.c | 23 ++++++++------
drivers/gpu/drm/xe/xe_svm.c | 50 ++++++++++++++++++------------
drivers/gpu/drm/xe/xe_vm.c | 39 +++++++++++++++++++++++
drivers/gpu/drm/xe/xe_vm.h | 2 ++
drivers/gpu/drm/xe/xe_vm_madvise.c | 15 ++++++++-
5 files changed, 99 insertions(+), 30 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
index 593fef438cd8..6f5b384991cd 100644
--- a/drivers/gpu/drm/xe/xe_pt.c
+++ b/drivers/gpu/drm/xe/xe_pt.c
@@ -640,28 +640,31 @@ static const struct xe_pt_walk_ops xe_pt_stage_bind_ops = {
* - In all other cases device atomics will be disabled with AE=0 until an application
* request differently using a ioctl like madvise.
*/
-static bool xe_atomic_for_vram(struct xe_vm *vm)
+static bool xe_atomic_for_vram(struct xe_vm *vm, struct xe_vma *vma)
{
+ if (vma->attr.atomic_access == DRM_XE_ATOMIC_CPU)
+ return false;
+
return true;
}
-static bool xe_atomic_for_system(struct xe_vm *vm, struct xe_bo *bo)
+static bool xe_atomic_for_system(struct xe_vm *vm, struct xe_vma *vma)
{
struct xe_device *xe = vm->xe;
+ struct xe_bo *bo = xe_vma_bo(vma);
- if (!xe->info.has_device_atomics_on_smem)
+ if (!xe->info.has_device_atomics_on_smem ||
+ vma->attr.atomic_access == DRM_XE_ATOMIC_CPU)
return false;
+ if (vma->attr.atomic_access == DRM_XE_ATOMIC_DEVICE)
+ return true;
+
/*
* If a SMEM+LMEM allocation is backed by SMEM, a device
* atomics will cause a gpu page fault and which then
* gets migrated to LMEM, bind such allocations with
* device atomics enabled.
- *
- * TODO: Revisit this. Perhaps add something like a
- * fault_on_atomics_in_system UAPI flag.
- * Note that this also prohibits GPU atomics in LR mode for
- * userptr and system memory on DGFX.
*/
return (!IS_DGFX(xe) || (!xe_vm_in_lr_mode(vm) ||
(bo && xe_bo_has_single_placement(bo))));
@@ -744,8 +747,8 @@ xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma *vma,
goto walk_pt;
if (vma->gpuva.flags & XE_VMA_ATOMIC_PTE_BIT) {
- xe_walk.default_vram_pte = xe_atomic_for_vram(vm) ? XE_USM_PPGTT_PTE_AE : 0;
- xe_walk.default_system_pte = xe_atomic_for_system(vm, bo) ?
+ xe_walk.default_vram_pte = xe_atomic_for_vram(vm, vma) ? XE_USM_PPGTT_PTE_AE : 0;
+ xe_walk.default_system_pte = xe_atomic_for_system(vm, vma) ?
XE_USM_PPGTT_PTE_AE : 0;
}
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index c2306000f15e..c660ccb21945 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -789,22 +789,9 @@ bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vm
return true;
}
-/**
- * xe_svm_handle_pagefault() - SVM handle page fault
- * @vm: The VM.
- * @vma: The CPU address mirror VMA.
- * @gt: The gt upon the fault occurred.
- * @fault_addr: The GPU fault address.
- * @atomic: The fault atomic access bit.
- *
- * Create GPU bindings for a SVM page fault. Optionally migrate to device
- * memory.
- *
- * Return: 0 on success, negative error code on error.
- */
-int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
- struct xe_gt *gt, u64 fault_addr,
- bool atomic)
+static int __xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
+ struct xe_gt *gt, u64 fault_addr,
+ bool need_vram)
{
struct drm_gpusvm_ctx ctx = {
.read_only = xe_vma_read_only(vma),
@@ -812,9 +799,8 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
.check_pages_threshold = IS_DGFX(vm->xe) &&
IS_ENABLED(CONFIG_DRM_XE_PAGEMAP) ? SZ_64K : 0,
- .devmem_only = atomic && IS_DGFX(vm->xe) &&
- IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
- .timeslice_ms = atomic && IS_DGFX(vm->xe) &&
+ .devmem_only = need_vram && IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
+ .timeslice_ms = need_vram && IS_DGFX(vm->xe) &&
IS_ENABLED(CONFIG_DRM_XE_PAGEMAP) ?
vm->xe->atomic_svm_timeslice_ms : 0,
};
@@ -917,6 +903,32 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
return err;
}
+/**
+ * xe_svm_handle_pagefault() - SVM handle page fault
+ * @vm: The VM.
+ * @vma: The CPU address mirror VMA.
+ * @gt: The gt upon the fault occurred.
+ * @fault_addr: The GPU fault address.
+ * @atomic: The fault atomic access bit.
+ *
+ * Create GPU bindings for a SVM page fault. Optionally migrate to device
+ * memory.
+ *
+ * Return: 0 on success, negative error code on error.
+ */
+int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
+ struct xe_gt *gt, u64 fault_addr,
+ bool atomic)
+{
+ int need_vram;
+
+ need_vram = xe_vma_need_vram_for_atomic(vm->xe, vma, atomic);
+ if (need_vram < 0)
+ return need_vram;
+
+ return __xe_svm_handle_pagefault(vm, vma, gt, fault_addr, need_vram ? true : false);
+}
+
/**
* xe_svm_has_mapping() - SVM has mappings
* @vm: The VM.
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 376850a22be2..aa8d4c4fe0f0 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -4183,6 +4183,45 @@ void xe_vm_snapshot_free(struct xe_vm_snapshot *snap)
kvfree(snap);
}
+/**
+ * xe_vma_need_vram_for_atomic - Check if VMA needs VRAM migration for atomic operations
+ * @xe: Pointer to the XE device structure
+ * @vma: Pointer to the virtual memory area (VMA) structure
+ * @is_atomic: In pagefault path and atomic operation
+ *
+ * This function determines whether the given VMA needs to be migrated to
+ * VRAM in order to do atomic GPU operation.
+ *
+ * Return:
+ * 1 - Migration to VRAM is required
+ * 0 - Migration is not required
+ * -EACCES - Invalid access for atomic memory attr
+ *
+ */
+int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool is_atomic)
+{
+ if (!IS_DGFX(xe) || !is_atomic)
+ return 0;
+
+ /*
+ * NOTE: The checks implemented here are platform-specific. For
+ * instance, on a device supporting CXL atomics, these would ideally
+ * work universally without additional handling.
+ */
+ switch (vma->attr.atomic_access) {
+ case DRM_XE_ATOMIC_DEVICE:
+ return !xe->info.has_device_atomics_on_smem;
+
+ case DRM_XE_ATOMIC_CPU:
+ return -EACCES;
+
+ case DRM_XE_ATOMIC_UNDEFINED:
+ case DRM_XE_ATOMIC_GLOBAL:
+ default:
+ return 1;
+ }
+}
+
/**
* xe_vm_alloc_madvise_vma - Allocate VMA's with madvise ops
* @vm: Pointer to the xe_vm structure
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 0d6b08cc4163..05ac3118d9f4 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -171,6 +171,8 @@ static inline bool xe_vma_is_userptr(struct xe_vma *vma)
struct xe_vma *xe_vm_find_vma_by_addr(struct xe_vm *vm, u64 page_addr);
+int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool is_atomic);
+
int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t addr, uint64_t size);
/**
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index b861c3349b0a..95258bb6a8ee 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -85,7 +85,20 @@ static void madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
struct xe_vma **vmas, int num_vmas,
struct drm_xe_madvise *op)
{
- /* Implementation pending */
+ int i;
+
+ xe_assert(vm->xe, op->type == DRM_XE_MEM_RANGE_ATTR_ATOMIC);
+ xe_assert(vm->xe, op->atomic.val <= DRM_XE_ATOMIC_CPU);
+
+ for (i = 0; i < num_vmas; i++) {
+ if ((xe_vma_is_userptr(vmas[i]) &&
+ !(op->atomic.val == DRM_XE_ATOMIC_DEVICE &&
+ xe->info.has_device_atomics_on_smem)))
+ continue;
+
+ vmas[i]->attr.atomic_access = op->atomic.val;
+ /*TODO: handle bo backed vmas */
+ }
}
static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
--
2.34.1
^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH v6 15/26] drm/xe/madvise: Update migration policy based on preferred location
2025-08-07 16:43 [PATCH v6 00/26] MADVISE FOR XE Himal Prasad Ghimiray
` (13 preceding siblings ...)
2025-08-07 16:43 ` [PATCH v6 14/26] drm/xe/svm : Add svm ranges migration policy on atomic access Himal Prasad Ghimiray
@ 2025-08-07 16:43 ` Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 16/26] drm/xe/pat: Add helper for compression mode of pat index Himal Prasad Ghimiray
` (15 subsequent siblings)
30 siblings, 0 replies; 51+ messages in thread
From: Himal Prasad Ghimiray @ 2025-08-07 16:43 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
When the user sets a valid devmem_fd as the preferred location, a GPU
fault will trigger migration to the tile of the device associated with
that devmem_fd.
If the user sets an invalid devmem_fd, the preferred location is the
current placement (smem) only.
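A sketch of how callers act on the resolved pagemap, mirroring the
prefetch path later in this series (error handling omitted):

struct drm_pagemap *dpagemap = xe_vma_resolve_pagemap(vma, tile);

if (!dpagemap)
	/* Preferred location is smem: evict any VRAM backing. */
	xe_svm_range_migrate_to_smem(vm, range);
else
	/* Preferred location is the tile's VRAM. */
	err = xe_svm_alloc_vram(tile, range, &ctx);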
v2(Matthew Brost)
- Default should be faulting tile
- remove devmem_fd used as region
v3 (Matthew Brost)
- Add migration_policy
- Fix return condition
- fix migrate condition
v4
-Rebase
v5
- Add check for userptr and bo based vmas
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_svm.c | 45 +++++++++++++++++++++++++++++-
drivers/gpu/drm/xe/xe_svm.h | 8 ++++++
drivers/gpu/drm/xe/xe_vm_madvise.c | 25 ++++++++++++++++-
3 files changed, 76 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index c660ccb21945..19585a3d9f69 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -806,6 +806,7 @@ static int __xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
};
struct xe_svm_range *range;
struct dma_fence *fence;
+ struct drm_pagemap *dpagemap;
struct xe_tile *tile = gt_to_tile(gt);
int migrate_try_count = ctx.devmem_only ? 3 : 1;
ktime_t end = 0;
@@ -835,8 +836,14 @@ static int __xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
range_debug(range, "PAGE FAULT");
+ dpagemap = xe_vma_resolve_pagemap(vma, tile);
if (--migrate_try_count >= 0 &&
- xe_svm_range_needs_migrate_to_vram(range, vma, IS_DGFX(vm->xe))) {
+ xe_svm_range_needs_migrate_to_vram(range, vma, !!dpagemap || ctx.devmem_only)) {
+ /* TODO : For multi-device dpagemap will be used to find the
+ * remote tile and remote device. Will need to modify
+ * xe_svm_alloc_vram to use dpagemap for future multi-device
+ * support.
+ */
err = xe_svm_alloc_vram(tile, range, &ctx);
ctx.timeslice_ms <<= 1; /* Double timeslice if we have to retry */
if (err) {
@@ -1100,6 +1107,37 @@ static struct drm_pagemap *tile_local_pagemap(struct xe_tile *tile)
return &tile->mem.vram->dpagemap;
}
+/**
+ * xe_vma_resolve_pagemap - Resolve the appropriate DRM pagemap for a VMA
+ * @vma: Pointer to the xe_vma structure containing memory attributes
+ * @tile: Pointer to the xe_tile structure used as fallback for VRAM mapping
+ *
+ * This function determines the correct DRM pagemap to use for a given VMA.
+ * It first checks if a valid devmem_fd is provided in the VMA's preferred
+ * location. If the devmem_fd is negative, it returns NULL, indicating no
+ * pagemap is available and smem is to be used as the preferred location.
+ * If the devmem_fd equals the default faulting GT identifier, it returns
+ * the VRAM pagemap associated with the tile.
+ *
+ * Future support for multi-device configurations may use drm_pagemap_from_fd()
+ * to resolve pagemaps from arbitrary file descriptors.
+ *
+ * Return: A pointer to the resolved drm_pagemap, or NULL if none is applicable.
+ */
+struct drm_pagemap *xe_vma_resolve_pagemap(struct xe_vma *vma, struct xe_tile *tile)
+{
+ s32 fd = (s32)vma->attr.preferred_loc.devmem_fd;
+
+ if (fd == DRM_XE_PREFERRED_LOC_DEFAULT_SYSTEM)
+ return NULL;
+
+ if (fd == DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE)
+ return IS_DGFX(tile_to_xe(tile)) ? tile_local_pagemap(tile) : NULL;
+
+ /* TODO: Support multi-device with drm_pagemap_from_fd(fd) */
+ return NULL;
+}
+
/**
* xe_svm_alloc_vram()- Allocate device memory pages for range,
* migrating existing data.
@@ -1212,6 +1250,11 @@ int xe_devm_add(struct xe_tile *tile, struct xe_vram_region *vr)
{
return 0;
}
+
+struct drm_pagemap *xe_vma_resolve_pagemap(struct xe_vma *vma, struct xe_tile *tile)
+{
+ return NULL;
+}
#endif
/**
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index 046a9c4e95c2..9d6a8840a8b7 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -94,6 +94,8 @@ void xe_svm_unmap_address_range(struct xe_vm *vm, u64 start, u64 end);
u8 xe_svm_ranges_zap_ptes_in_range(struct xe_vm *vm, u64 start, u64 end);
+struct drm_pagemap *xe_vma_resolve_pagemap(struct xe_vma *vma, struct xe_tile *tile);
+
/**
* xe_svm_range_has_dma_mapping() - SVM range has DMA mapping
* @range: SVM range
@@ -318,6 +320,12 @@ u8 xe_svm_ranges_zap_ptes_in_range(struct xe_vm *vm, u64 start, u64 end)
return 0;
}
+static inline
+struct drm_pagemap *xe_vma_resolve_pagemap(struct xe_vma *vma, struct xe_tile *tile)
+{
+ return NULL;
+}
+
#define xe_svm_assert_in_notifier(...) do {} while (0)
#define xe_svm_range_has_dma_mapping(...) false
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index 95258bb6a8ee..b5fc1eedf095 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -78,7 +78,23 @@ static void madvise_preferred_mem_loc(struct xe_device *xe, struct xe_vm *vm,
struct xe_vma **vmas, int num_vmas,
struct drm_xe_madvise *op)
{
- /* Implementation pending */
+ int i;
+
+ xe_assert(vm->xe, op->type == DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC);
+
+ for (i = 0; i < num_vmas; i++) {
+ /*TODO: Extend attributes to bo based vmas */
+ if (!xe_vma_is_cpu_addr_mirror(vmas[i]))
+ continue;
+
+ vmas[i]->attr.preferred_loc.devmem_fd = op->preferred_mem_loc.devmem_fd;
+
+ /* Until multi-device support is added, migration_policy
+ * has no effect and can be ignored.
+ */
+ vmas[i]->attr.preferred_loc.migration_policy =
+ op->preferred_mem_loc.migration_policy;
+ }
}
static void madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
@@ -184,6 +200,12 @@ static bool madvise_args_are_sane(struct xe_device *xe, const struct drm_xe_madv
switch (args->type) {
case DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC:
+ {
+ s32 fd = (s32)args->preferred_mem_loc.devmem_fd;
+
+ if (XE_IOCTL_DBG(xe, fd < DRM_XE_PREFERRED_LOC_DEFAULT_SYSTEM))
+ return false;
+
if (XE_IOCTL_DBG(xe, args->preferred_mem_loc.migration_policy >
DRM_XE_MIGRATE_ONLY_SYSTEM_PAGES))
return false;
@@ -194,6 +216,7 @@ static bool madvise_args_are_sane(struct xe_device *xe, const struct drm_xe_madv
if (XE_IOCTL_DBG(xe, args->atomic.reserved))
return false;
break;
+ }
case DRM_XE_MEM_RANGE_ATTR_ATOMIC:
if (XE_IOCTL_DBG(xe, args->atomic.val > DRM_XE_ATOMIC_CPU))
return false;
--
2.34.1
^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH v6 16/26] drm/xe/pat: Add helper for compression mode of pat index
2025-08-07 16:43 [PATCH v6 00/26] MADVISE FOR XE Himal Prasad Ghimiray
` (14 preceding siblings ...)
2025-08-07 16:43 ` [PATCH v6 15/26] drm/xe/madvise: Update migration policy based on preferred location Himal Prasad Ghimiray
@ 2025-08-07 16:43 ` Himal Prasad Ghimiray
2025-08-08 5:47 ` Matthew Brost
2025-08-07 16:43 ` [PATCH v6 17/26] drm/xe/svm: Support DRM_XE_SVM_MEM_RANGE_ATTR_PAT memory attribute Himal Prasad Ghimiray
` (14 subsequent siblings)
30 siblings, 1 reply; 51+ messages in thread
From: Himal Prasad Ghimiray @ 2025-08-07 16:43 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
From: Matthew Brost <matthew.brost@intel.com>
Add `xe_pat_index_get_comp_mode()` to extract compression mode from a
PAT index.
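Intended usage, as in the PAT madvise patch that follows (a sketch of
the per-VMA loop body): compression only makes sense with a BO backing,
so SVM/userptr VMAs skip compressed PAT indexes:

if (!xe_vma_bo(vma) && xe_pat_index_get_comp_mode(xe, pat_index))
	continue;	/* no BO backing: compressed PAT not applicable */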
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_pat.c | 10 ++++++++++
drivers/gpu/drm/xe/xe_pat.h | 10 ++++++++++
2 files changed, 20 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_pat.c b/drivers/gpu/drm/xe/xe_pat.c
index 2e7cb99ae87a..ac1767c812aa 100644
--- a/drivers/gpu/drm/xe/xe_pat.c
+++ b/drivers/gpu/drm/xe/xe_pat.c
@@ -154,6 +154,16 @@ static const struct xe_pat_table_entry xe2_pat_table[] = {
static const struct xe_pat_table_entry xe2_pat_ats = XE2_PAT( 0, 0, 0, 0, 3, 3 );
static const struct xe_pat_table_entry xe2_pat_pta = XE2_PAT( 0, 0, 0, 0, 3, 0 );
+bool xe_pat_index_get_comp_mode(struct xe_device *xe, u16 pat_index)
+{
+ WARN_ON(pat_index >= xe->pat.n_entries);
+
+ if (xe->pat.table != xe2_pat_table)
+ return false;
+
+ return xe->pat.table[pat_index].value & XE2_COMP_EN;
+}
+
u16 xe_pat_index_get_coh_mode(struct xe_device *xe, u16 pat_index)
{
WARN_ON(pat_index >= xe->pat.n_entries);
diff --git a/drivers/gpu/drm/xe/xe_pat.h b/drivers/gpu/drm/xe/xe_pat.h
index fa0dfbe525cd..8be2856a73af 100644
--- a/drivers/gpu/drm/xe/xe_pat.h
+++ b/drivers/gpu/drm/xe/xe_pat.h
@@ -58,4 +58,14 @@ void xe_pat_dump(struct xe_gt *gt, struct drm_printer *p);
*/
u16 xe_pat_index_get_coh_mode(struct xe_device *xe, u16 pat_index);
+/**
+ * xe_pat_index_get_comp_mode() - Extract the compression mode for the given
+ * pat_index.
+ * @xe: xe device
+ * @pat_index: The pat_index to query
+ *
+ * Return: True if pat_index is compressed, False otherwise
+ */
+bool xe_pat_index_get_comp_mode(struct xe_device *xe, u16 pat_index);
+
#endif
--
2.34.1
^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH v6 17/26] drm/xe/svm: Support DRM_XE_SVM_MEM_RANGE_ATTR_PAT memory attribute
2025-08-07 16:43 [PATCH v6 00/26] MADVISE FOR XE Himal Prasad Ghimiray
` (15 preceding siblings ...)
2025-08-07 16:43 ` [PATCH v6 16/26] drm/xe/pat: Add helper for compression mode of pat index Himal Prasad Ghimiray
@ 2025-08-07 16:43 ` Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 18/26] drm/xe/uapi: Add flag for consulting madvise hints on svm prefetch Himal Prasad Ghimiray
` (13 subsequent siblings)
30 siblings, 0 replies; 51+ messages in thread
From: Himal Prasad Ghimiray @ 2025-08-07 16:43 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
This attribute sets the pat_index for the VMA ranges used by SVM, which
is used to determine coherence.
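A hypothetical userspace sketch (ioctl name assumed, as before; the
pat_index value itself is platform-specific):

struct drm_xe_madvise args = {
	.vm_id = vm_id,
	.start = addr,
	.range = size,
	.type  = DRM_XE_MEM_RANGE_ATTR_PAT,
	.pat_index.val = pat_index,	/* must map to a valid coherency mode */
};

ioctl(fd, DRM_IOCTL_XE_MADVISE, &args);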
v2 (Matthew Brost)
- Pat index sanity check
v3
- Add check for compression pat
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com> #v2
---
drivers/gpu/drm/xe/xe_vm_madvise.c | 29 +++++++++++++++++++++++++++--
1 file changed, 27 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index b5fc1eedf095..675a05a24e3b 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -9,6 +9,7 @@
#include <drm/xe_drm.h>
#include "xe_bo.h"
+#include "xe_pat.h"
#include "xe_pt.h"
#include "xe_svm.h"
@@ -121,7 +122,17 @@ static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
struct xe_vma **vmas, int num_vmas,
struct drm_xe_madvise *op)
{
- /* Implementation pending */
+ int i;
+
+ xe_assert(vm->xe, op->type == DRM_XE_MEM_RANGE_ATTR_PAT);
+
+ for (i = 0; i < num_vmas; i++) {
+ if (!xe_vma_bo(vmas[i]) &&
+ xe_pat_index_get_comp_mode(xe, op->pat_index.val))
+ continue;
+
+ vmas[i]->attr.pat_index = op->pat_index.val;
+ }
}
typedef void (*madvise_func)(struct xe_device *xe, struct xe_vm *vm,
@@ -229,8 +240,22 @@ static bool madvise_args_are_sane(struct xe_device *xe, const struct drm_xe_madv
break;
case DRM_XE_MEM_RANGE_ATTR_PAT:
- /*TODO: Add valid pat check */
+ {
+ u16 coh_mode = xe_pat_index_get_coh_mode(xe, args->pat_index.val);
+
+ if (XE_IOCTL_DBG(xe, !coh_mode))
+ return false;
+
+ if (XE_WARN_ON(coh_mode > XE_COH_AT_LEAST_1WAY))
+ return false;
+
+ if (XE_IOCTL_DBG(xe, args->pat_index.pad))
+ return false;
+
+ if (XE_IOCTL_DBG(xe, args->pat_index.reserved))
+ return false;
break;
+ }
default:
if (XE_IOCTL_DBG(xe, 1))
return false;
--
2.34.1
^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH v6 18/26] drm/xe/uapi: Add flag for consulting madvise hints on svm prefetch
2025-08-07 16:43 [PATCH v6 00/26] MADVISE FOR XE Himal Prasad Ghimiray
` (16 preceding siblings ...)
2025-08-07 16:43 ` [PATCH v6 17/26] drm/xe/svm: Support DRM_XE_SVM_MEM_RANGE_ATTR_PAT memory attribute Himal Prasad Ghimiray
@ 2025-08-07 16:43 ` Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 19/26] drm/xe/svm: Consult madvise preferred location in prefetch Himal Prasad Ghimiray
` (12 subsequent siblings)
30 siblings, 0 replies; 51+ messages in thread
From: Himal Prasad Ghimiray @ 2025-08-07 16:43 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
Introduce the flag DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC to make SVM
prefetch target the memory region advised by madvise.
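A hypothetical userspace sketch of a prefetch bind op using the new
value (field names from this uapi patch; the surrounding vm_bind
plumbing is assumed):

struct drm_xe_vm_bind_op op = {
	.op    = DRM_XE_VM_BIND_OP_PREFETCH,
	.addr  = addr,
	.range = size,
	.prefetch_mem_region_instance = DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC,
};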
v2 (Matthew Brost)
- Add kernel-doc
v3 (Matthew Brost)
- Fix kernel-doc
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
include/uapi/drm/xe_drm.h | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index 4e6e9a9164ee..115b9bca2a25 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -1010,6 +1010,10 @@ struct drm_xe_vm_destroy {
* valid on VMs with DRM_XE_VM_CREATE_FLAG_FAULT_MODE set. The CPU address
* mirror flag are only valid for DRM_XE_VM_BIND_OP_MAP operations, the BO
* handle MBZ, and the BO offset MBZ.
+ *
+ * The @prefetch_mem_region_instance for %DRM_XE_VM_BIND_OP_PREFETCH can also be:
+ * - %DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC, which ensures prefetching occurs in
+ * the memory region advised by madvise.
*/
struct drm_xe_vm_bind_op {
/** @extensions: Pointer to the first extension struct, if any */
@@ -1115,6 +1119,7 @@ struct drm_xe_vm_bind_op {
/** @flags: Bind flags */
__u32 flags;
+#define DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC -1
/**
* @prefetch_mem_region_instance: Memory region to prefetch VMA to.
* It is a region instance, not a mask.
--
2.34.1
^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH v6 19/26] drm/xe/svm: Consult madvise preferred location in prefetch
2025-08-07 16:43 [PATCH v6 00/26] MADVISE FOR XE Himal Prasad Ghimiray
` (17 preceding siblings ...)
2025-08-07 16:43 ` [PATCH v6 18/26] drm/xe/uapi: Add flag for consulting madvise hints on svm prefetch Himal Prasad Ghimiray
@ 2025-08-07 16:43 ` Himal Prasad Ghimiray
2025-08-08 0:30 ` Matthew Brost
2025-08-07 16:43 ` [PATCH v6 20/26] drm/xe/bo: Add attributes field to xe_bo Himal Prasad Ghimiray
` (11 subsequent siblings)
30 siblings, 1 reply; 51+ messages in thread
From: Himal Prasad Ghimiray @ 2025-08-07 16:43 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
When the prefetch region is DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC, prefetch
SVM ranges to the preferred location provided by madvise.
v2 (Matthew Brost)
- Fix region, devmem_fd usages
- consult madvise is applicable for other vma's too.
v3
- Fix atomic handling
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_vm.c | 30 ++++++++++++++++++++++--------
1 file changed, 22 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index aa8d4c4fe0f0..ae966755255d 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -38,6 +38,7 @@
#include "xe_res_cursor.h"
#include "xe_svm.h"
#include "xe_sync.h"
+#include "xe_tile.h"
#include "xe_trace_bo.h"
#include "xe_wa.h"
#include "xe_hmm.h"
@@ -2913,15 +2914,28 @@ static int prefetch_ranges(struct xe_vm *vm, struct xe_vma_op *op)
int err = 0;
struct xe_svm_range *svm_range;
+ struct drm_pagemap *dpagemap;
struct drm_gpusvm_ctx ctx = {};
- struct xe_tile *tile;
+ struct xe_tile *tile = NULL;
unsigned long i;
u32 region;
if (!xe_vma_is_cpu_addr_mirror(vma))
return 0;
- region = op->prefetch_range.region;
+ if (op->prefetch_range.region == DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC) {
+ dpagemap = xe_vma_resolve_pagemap(vma, xe_device_get_root_tile(vm->xe));
+ /*
+ * TODO: Once multigpu support is enabled will need
+ * something to dereference tile from dpagemap.
+ */
+ if (dpagemap)
+ tile = xe_device_get_root_tile(vm->xe);
+ } else {
+ region = op->prefetch_range.region;
+ if (region)
+ tile = &vm->xe->tiles[region_to_mem_type[region] - XE_PL_VRAM0];
+ }
ctx.read_only = xe_vma_read_only(vma);
ctx.devmem_possible = devmem_possible;
@@ -2929,11 +2943,10 @@ static int prefetch_ranges(struct xe_vm *vm, struct xe_vma_op *op)
/* TODO: Threading the migration */
xa_for_each(&op->prefetch_range.range, i, svm_range) {
- if (!region)
+ if (!tile)
xe_svm_range_migrate_to_smem(vm, svm_range);
- if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, region)) {
- tile = &vm->xe->tiles[region_to_mem_type[region] - XE_PL_VRAM0];
+ if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, !!tile)) {
err = xe_svm_alloc_vram(tile, svm_range, &ctx);
if (err) {
drm_dbg(&vm->xe->drm, "VRAM allocation failed, retry from userspace, asid=%u, gpusvm=%p, errno=%pe\n",
@@ -3001,7 +3014,8 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
else
region = op->prefetch.region;
- xe_assert(vm->xe, region <= ARRAY_SIZE(region_to_mem_type));
+ xe_assert(vm->xe, region == DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC ||
+ region <= ARRAY_SIZE(region_to_mem_type));
err = vma_lock_and_validate(exec,
gpuva_to_vma(op->base.prefetch.va),
@@ -3419,8 +3433,8 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
op == DRM_XE_VM_BIND_OP_PREFETCH) ||
XE_IOCTL_DBG(xe, prefetch_region &&
op != DRM_XE_VM_BIND_OP_PREFETCH) ||
- XE_IOCTL_DBG(xe, !(BIT(prefetch_region) &
- xe->info.mem_region_mask)) ||
+ XE_IOCTL_DBG(xe, (prefetch_region != DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC &&
+ !(BIT(prefetch_region) & xe->info.mem_region_mask))) ||
XE_IOCTL_DBG(xe, obj &&
op == DRM_XE_VM_BIND_OP_UNMAP)) {
err = -EINVAL;
--
2.34.1
^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH v6 20/26] drm/xe/bo: Add attributes field to xe_bo
2025-08-07 16:43 [PATCH v6 00/26] MADVISE FOR XE Himal Prasad Ghimiray
` (18 preceding siblings ...)
2025-08-07 16:43 ` [PATCH v6 19/26] drm/xe/svm: Consult madvise preferred location in prefetch Himal Prasad Ghimiray
@ 2025-08-07 16:43 ` Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 21/26] drm/xe/bo: Update atomic_access attribute on madvise Himal Prasad Ghimiray
` (10 subsequent siblings)
30 siblings, 0 replies; 51+ messages in thread
From: Himal Prasad Ghimiray @ 2025-08-07 16:43 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
A single BO can be linked to multiple VMAs, making VMA attributes
insufficient for determining the placement and PTE update attributes
of the BO. To address this, an attributes field has been added to the
BO.
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_bo_types.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_bo_types.h b/drivers/gpu/drm/xe/xe_bo_types.h
index cf604adc13a3..314652afdca7 100644
--- a/drivers/gpu/drm/xe/xe_bo_types.h
+++ b/drivers/gpu/drm/xe/xe_bo_types.h
@@ -61,6 +61,14 @@ struct xe_bo {
*/
struct list_head client_link;
#endif
+ /** @attr: User controlled attributes for bo */
+ struct {
+ /**
+ * @atomic_access: type of atomic access bo needs
+ * protected by bo dma-resv lock
+ */
+ u32 atomic_access;
+ } attr;
/**
* @pxp_key_instance: PXP key instance this BO was created against. A
* 0 in this variable indicates that the BO does not use PXP encryption.
--
2.34.1
^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH v6 21/26] drm/xe/bo: Update atomic_access attribute on madvise
2025-08-07 16:43 [PATCH v6 00/26] MADVISE FOR XE Himal Prasad Ghimiray
` (19 preceding siblings ...)
2025-08-07 16:43 ` [PATCH v6 20/26] drm/xe/bo: Add attributes field to xe_bo Himal Prasad Ghimiray
@ 2025-08-07 16:43 ` Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 22/26] drm/xe/madvise: Skip vma invalidation if mem attr are unchanged Himal Prasad Ghimiray
` (9 subsequent siblings)
30 siblings, 0 replies; 51+ messages in thread
From: Himal Prasad Ghimiray @ 2025-08-07 16:43 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
Update the BO's atomic_access attribute based on user-provided input and
use it to determine migration to smem during a CPU fault.
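For illustration, a minimal userspace sketch requesting global atomics
on a range (the attribute names are from this series; the
vm_id/start/range fields of struct drm_xe_madvise are assumed from its
uAPI definition):

	struct drm_xe_madvise madv = {
		.vm_id = vm_id,		/* assumed valid VM handle */
		.start = 0x100000,
		.range = 0x2000,
		.type = DRM_XE_MEM_RANGE_ATTR_ATOMIC,
		.atomic.val = DRM_XE_ATOMIC_GLOBAL,
	};
	ioctl(fd, DRM_IOCTL_XE_MADVISE, &madv);

A subsequent CPU fault on a VRAM-resident BO in this range then triggers
the smem migration added below.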
v2 (Matthew Brost)
- Avoid cpu unmapping if bo is already in smem
- check atomics on smem too for ioctl
- Add comments
v3
- Avoid migration in prefetch
v4 (Matthew Brost)
- make sanity check function bool
- add assert for smem placement
- fix doc
v5 (Matthew Brost)
- NACK atomic fault with DRM_XE_ATOMIC_CPU
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_bo.c | 29 ++++++++++++--
drivers/gpu/drm/xe/xe_gt_pagefault.c | 35 ++++++-----------
drivers/gpu/drm/xe/xe_vm.c | 7 +++-
drivers/gpu/drm/xe/xe_vm_madvise.c | 59 +++++++++++++++++++++++++++-
4 files changed, 102 insertions(+), 28 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index 6fea39842e1e..72396d358a00 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -1711,6 +1711,18 @@ static void xe_gem_object_close(struct drm_gem_object *obj,
}
}
+static bool should_migrate_to_smem(struct xe_bo *bo)
+{
+ /*
+ * NOTE: The following atomic checks are platform-specific. For example,
+ * if a device supports CXL atomics, these may not be necessary or
+ * may behave differently.
+ */
+
+ return bo->attr.atomic_access == DRM_XE_ATOMIC_GLOBAL ||
+ bo->attr.atomic_access == DRM_XE_ATOMIC_CPU;
+}
+
static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
{
struct ttm_buffer_object *tbo = vmf->vma->vm_private_data;
@@ -1719,7 +1731,7 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
struct xe_bo *bo = ttm_to_xe_bo(tbo);
bool needs_rpm = bo->flags & XE_BO_FLAG_VRAM_MASK;
vm_fault_t ret;
- int idx;
+ int idx, r = 0;
if (needs_rpm)
xe_pm_runtime_get(xe);
@@ -1731,8 +1743,19 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
if (drm_dev_enter(ddev, &idx)) {
trace_xe_bo_cpu_fault(bo);
- ret = ttm_bo_vm_fault_reserved(vmf, vmf->vma->vm_page_prot,
- TTM_BO_VM_NUM_PREFAULT);
+ if (should_migrate_to_smem(bo)) {
+ xe_assert(xe, bo->flags & XE_BO_FLAG_SYSTEM);
+
+ r = xe_bo_migrate(bo, XE_PL_TT);
+ if (r == -EBUSY || r == -ERESTARTSYS || r == -EINTR)
+ ret = VM_FAULT_NOPAGE;
+ else if (r)
+ ret = VM_FAULT_SIGBUS;
+ }
+ if (!ret)
+ ret = ttm_bo_vm_fault_reserved(vmf,
+ vmf->vma->vm_page_prot,
+ TTM_BO_VM_NUM_PREFAULT);
drm_dev_exit(idx);
if (ret == VM_FAULT_RETRY &&
diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c
index ab43dec52776..4ea30fbce9bd 100644
--- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
@@ -75,7 +75,7 @@ static bool vma_is_valid(struct xe_tile *tile, struct xe_vma *vma)
}
static int xe_pf_begin(struct drm_exec *exec, struct xe_vma *vma,
- bool atomic, struct xe_vram_region *vram)
+ bool need_vram_move, struct xe_vram_region *vram)
{
struct xe_bo *bo = xe_vma_bo(vma);
struct xe_vm *vm = xe_vma_vm(vma);
@@ -85,26 +85,13 @@ static int xe_pf_begin(struct drm_exec *exec, struct xe_vma *vma,
if (err)
return err;
- if (atomic && vram) {
- xe_assert(vm->xe, IS_DGFX(vm->xe));
+ if (!bo)
+ return 0;
- if (xe_vma_is_userptr(vma)) {
- err = -EACCES;
- return err;
- }
+ err = need_vram_move ? xe_bo_migrate(bo, vram->placement) :
+ xe_bo_validate(bo, vm, true);
- /* Migrate to VRAM, move should invalidate the VMA first */
- err = xe_bo_migrate(bo, vram->placement);
- if (err)
- return err;
- } else if (bo) {
- /* Create backing store if needed */
- err = xe_bo_validate(bo, vm, true);
- if (err)
- return err;
- }
-
- return 0;
+ return err;
}
static int handle_vma_pagefault(struct xe_gt *gt, struct xe_vma *vma,
@@ -115,10 +102,14 @@ static int handle_vma_pagefault(struct xe_gt *gt, struct xe_vma *vma,
struct drm_exec exec;
struct dma_fence *fence;
ktime_t end = 0;
- int err;
+ int err, needs_vram;
lockdep_assert_held_write(&vm->lock);
+ needs_vram = xe_vma_need_vram_for_atomic(vm->xe, vma, atomic);
+ if (needs_vram < 0 || (needs_vram && xe_vma_is_userptr(vma)))
+ return needs_vram < 0 ? needs_vram : -EACCES;
+
xe_gt_stats_incr(gt, XE_GT_STATS_ID_VMA_PAGEFAULT_COUNT, 1);
xe_gt_stats_incr(gt, XE_GT_STATS_ID_VMA_PAGEFAULT_KB, xe_vma_size(vma) / 1024);
@@ -141,7 +132,7 @@ static int handle_vma_pagefault(struct xe_gt *gt, struct xe_vma *vma,
/* Lock VM and BOs dma-resv */
drm_exec_init(&exec, 0, 0);
drm_exec_until_all_locked(&exec) {
- err = xe_pf_begin(&exec, vma, atomic, tile->mem.vram);
+ err = xe_pf_begin(&exec, vma, needs_vram == 1, tile->mem.vram);
drm_exec_retry_on_contention(&exec);
if (xe_vm_validate_should_retry(&exec, err, &end))
err = -EAGAIN;
@@ -576,7 +567,7 @@ static int handle_acc(struct xe_gt *gt, struct acc *acc)
/* Lock VM and BOs dma-resv */
drm_exec_init(&exec, 0, 0);
drm_exec_until_all_locked(&exec) {
- ret = xe_pf_begin(&exec, vma, true, tile->mem.vram);
+ ret = xe_pf_begin(&exec, vma, IS_DGFX(vm->xe), tile->mem.vram);
drm_exec_retry_on_contention(&exec);
if (ret)
break;
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index ae966755255d..8eac7a4ab22f 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -4214,15 +4214,18 @@ void xe_vm_snapshot_free(struct xe_vm_snapshot *snap)
*/
int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool is_atomic)
{
+ u32 atomic_access = xe_vma_bo(vma) ? xe_vma_bo(vma)->attr.atomic_access :
+ vma->attr.atomic_access;
+
if (!IS_DGFX(xe) || !is_atomic)
- return 0;
+ return false;
/*
* NOTE: The checks implemented here are platform-specific. For
* instance, on a device supporting CXL atomics, these would ideally
* work universally without additional handling.
*/
- switch (vma->attr.atomic_access) {
+ switch (atomic_access) {
case DRM_XE_ATOMIC_DEVICE:
return !xe->info.has_device_atomics_on_smem;
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index 675a05a24e3b..b2a78774ca83 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -102,6 +102,7 @@ static void madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
struct xe_vma **vmas, int num_vmas,
struct drm_xe_madvise *op)
{
+ struct xe_bo *bo;
int i;
xe_assert(vm->xe, op->type == DRM_XE_MEM_RANGE_ATTR_ATOMIC);
@@ -114,7 +115,19 @@ static void madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
continue;
vmas[i]->attr.atomic_access = op->atomic.val;
- /*TODO: handle bo backed vmas */
+
+ bo = xe_vma_bo(vmas[i]);
+ if (!bo)
+ continue;
+
+ xe_bo_assert_held(bo);
+ bo->attr.atomic_access = op->atomic.val;
+
+ /* Invalidate cpu page table, so bo can migrate to smem in next access */
+ if (xe_bo_is_vram(bo) &&
+ (bo->attr.atomic_access == DRM_XE_ATOMIC_CPU ||
+ bo->attr.atomic_access == DRM_XE_ATOMIC_GLOBAL))
+ ttm_bo_unmap_virtual(&bo->ttm);
}
}
@@ -267,6 +280,41 @@ static bool madvise_args_are_sane(struct xe_device *xe, const struct drm_xe_madv
return true;
}
+static bool check_bo_args_are_sane(struct xe_vm *vm, struct xe_vma **vmas,
+ int num_vmas, u32 atomic_val)
+{
+ struct xe_device *xe = vm->xe;
+ struct xe_bo *bo;
+ int i;
+
+ for (i = 0; i < num_vmas; i++) {
+ bo = xe_vma_bo(vmas[i]);
+ if (!bo)
+ continue;
+ /*
+ * NOTE: The following atomic checks are platform-specific. For example,
+ * if a device supports CXL atomics, these may not be necessary or
+ * may behave differently.
+ */
+ if (XE_IOCTL_DBG(xe, atomic_val == DRM_XE_ATOMIC_CPU &&
+ !(bo->flags & XE_BO_FLAG_SYSTEM)))
+ return false;
+
+ if (XE_IOCTL_DBG(xe, atomic_val == DRM_XE_ATOMIC_DEVICE &&
+ !(bo->flags & XE_BO_FLAG_VRAM0) &&
+ !(bo->flags & XE_BO_FLAG_VRAM1) &&
+ !(bo->flags & XE_BO_FLAG_SYSTEM &&
+ xe->info.has_device_atomics_on_smem)))
+ return false;
+
+ if (XE_IOCTL_DBG(xe, atomic_val == DRM_XE_ATOMIC_GLOBAL &&
+ (!(bo->flags & XE_BO_FLAG_SYSTEM) ||
+ (!(bo->flags & XE_BO_FLAG_VRAM0) &&
+ !(bo->flags & XE_BO_FLAG_VRAM1)))))
+ return false;
+ }
+ return true;
+}
/**
* xe_vm_madvise_ioctl - Handle MADVise ioctl for a VM
* @dev: DRM device pointer
@@ -318,6 +366,15 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
goto unlock_vm;
if (madvise_range.has_bo_vmas) {
+ if (args->type == DRM_XE_MEM_RANGE_ATTR_ATOMIC) {
+ if (!check_bo_args_are_sane(vm, madvise_range.vmas,
+ madvise_range.num_vmas,
+ args->atomic.val)) {
+ err = -EINVAL;
+ goto unlock_vm;
+ }
+ }
+
drm_exec_init(&exec, DRM_EXEC_IGNORE_DUPLICATES | DRM_EXEC_INTERRUPTIBLE_WAIT, 0);
drm_exec_until_all_locked(&exec) {
for (int i = 0; i < madvise_range.num_vmas; i++) {
--
2.34.1
^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH v6 22/26] drm/xe/madvise: Skip vma invalidation if mem attr are unchanged
2025-08-07 16:43 [PATCH v6 00/26] MADVISE FOR XE Himal Prasad Ghimiray
` (20 preceding siblings ...)
2025-08-07 16:43 ` [PATCH v6 21/26] drm/xe/bo: Update atomic_access attribute on madvise Himal Prasad Ghimiray
@ 2025-08-07 16:43 ` Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 23/26] drm/xe/vm: Add helper to check for default VMA memory attributes Himal Prasad Ghimiray
` (8 subsequent siblings)
30 siblings, 0 replies; 51+ messages in thread
From: Himal Prasad Ghimiray @ 2025-08-07 16:43 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
If a VMA within the madvise input range already has the same memory
attribute as the one requested by the user, skip PTE zapping for that
VMA to avoid unnecessary invalidation.
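Condensed, the rule the handlers below implement is: zap PTEs only when
the requested attribute actually differs. A hypothetical helper, for
illustration only (the real checks live in madvise_atomic(),
madvise_pat_index() and madvise_preferred_mem_loc(), each with extra
per-type special cases):

	static bool madvise_changes_vma(struct xe_vma *vma,
					const struct drm_xe_madvise *op)
	{
		switch (op->type) {
		case DRM_XE_MEM_RANGE_ATTR_ATOMIC:
			return vma->attr.atomic_access != op->atomic.val;
		case DRM_XE_MEM_RANGE_ATTR_PAT:
			return vma->attr.pat_index != op->pat_index.val;
		default: /* preferred location */
			return vma->attr.preferred_loc.devmem_fd !=
			       op->preferred_mem_loc.devmem_fd ||
			       vma->attr.preferred_loc.migration_policy !=
			       op->preferred_mem_loc.migration_policy;
		}
	}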
v2 (Matthew Brost)
- fix skip_invalidation for new attributes
- s/u32/bool
- Remove unnecessary assignment for kzalloc'ed
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_vm_madvise.c | 71 +++++++++++++++++++-----------
drivers/gpu/drm/xe/xe_vm_types.h | 6 +++
2 files changed, 52 insertions(+), 25 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index b2a78774ca83..8a3b4cb59593 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -85,16 +85,20 @@ static void madvise_preferred_mem_loc(struct xe_device *xe, struct xe_vm *vm,
for (i = 0; i < num_vmas; i++) {
/*TODO: Extend attributes to bo based vmas */
- if (!xe_vma_is_cpu_addr_mirror(vmas[i]))
- continue;
-
- vmas[i]->attr.preferred_loc.devmem_fd = op->preferred_mem_loc.devmem_fd;
-
- /* Till multi-device support is not added migration_policy
- * is of no use and can be ignored.
- */
- vmas[i]->attr.preferred_loc.migration_policy =
+ if ((vmas[i]->attr.preferred_loc.devmem_fd == op->preferred_mem_loc.devmem_fd &&
+ vmas[i]->attr.preferred_loc.migration_policy ==
+ op->preferred_mem_loc.migration_policy) ||
+ !xe_vma_is_cpu_addr_mirror(vmas[i])) {
+ vmas[i]->skip_invalidation = true;
+ } else {
+ vmas[i]->skip_invalidation = false;
+ vmas[i]->attr.preferred_loc.devmem_fd = op->preferred_mem_loc.devmem_fd;
+ /* Till multi-device support is not added migration_policy
+ * is of no use and can be ignored.
+ */
+ vmas[i]->attr.preferred_loc.migration_policy =
op->preferred_mem_loc.migration_policy;
+ }
}
}
@@ -109,17 +113,27 @@ static void madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
xe_assert(vm->xe, op->atomic.val <= DRM_XE_ATOMIC_CPU);
for (i = 0; i < num_vmas; i++) {
- if ((xe_vma_is_userptr(vmas[i]) &&
- !(op->atomic.val == DRM_XE_ATOMIC_DEVICE &&
- xe->info.has_device_atomics_on_smem)))
+ if (xe_vma_is_userptr(vmas[i]) &&
+ !(op->atomic.val == DRM_XE_ATOMIC_DEVICE &&
+ xe->info.has_device_atomics_on_smem)) {
+ vmas[i]->skip_invalidation = true;
continue;
+ }
+
+ if (vmas[i]->attr.atomic_access == op->atomic.val) {
+ vmas[i]->skip_invalidation = true;
+ } else {
+ vmas[i]->skip_invalidation = false;
+ vmas[i]->attr.atomic_access = op->atomic.val;
+ }
vmas[i]->attr.atomic_access = op->atomic.val;
bo = xe_vma_bo(vmas[i]);
- if (!bo)
+ if (!bo || bo->attr.atomic_access == op->atomic.val)
continue;
+ vmas[i]->skip_invalidation = false;
xe_bo_assert_held(bo);
bo->attr.atomic_access = op->atomic.val;
@@ -140,11 +154,14 @@ static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
xe_assert(vm->xe, op->type == DRM_XE_MEM_RANGE_ATTR_PAT);
for (i = 0; i < num_vmas; i++) {
- if (!xe_vma_bo(vmas[i]) &&
- xe_pat_index_get_comp_mode(xe, op->pat_index.val))
- continue;
-
- vmas[i]->attr.pat_index = op->pat_index.val;
+ if ((!xe_vma_bo(vmas[i]) &&
+ xe_pat_index_get_comp_mode(xe, op->pat_index.val)) ||
+ vmas[i]->attr.pat_index == op->pat_index.val) {
+ vmas[i]->skip_invalidation = true;
+ } else {
+ vmas[i]->skip_invalidation = false;
+ vmas[i]->attr.pat_index = op->pat_index.val;
+ }
}
}
@@ -162,7 +179,7 @@ static u8 xe_zap_ptes_in_madvise_range(struct xe_vm *vm, u64 start, u64 end)
{
struct drm_gpuva *gpuva;
struct xe_tile *tile;
- u8 id, tile_mask;
+ u8 id, tile_mask = 0;
lockdep_assert_held_write(&vm->lock);
@@ -171,17 +188,20 @@ static u8 xe_zap_ptes_in_madvise_range(struct xe_vm *vm, u64 start, u64 end)
false, MAX_SCHEDULE_TIMEOUT) <= 0)
XE_WARN_ON(1);
- tile_mask = xe_svm_ranges_zap_ptes_in_range(vm, start, end);
-
drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, start, end) {
struct xe_vma *vma = gpuva_to_vma(gpuva);
- if (xe_vma_is_cpu_addr_mirror(vma) || xe_vma_is_null(vma))
+ if (vma->skip_invalidation || xe_vma_is_null(vma))
continue;
- for_each_tile(tile, vm->xe, id) {
- if (xe_pt_zap_ptes(tile, vma)) {
- tile_mask |= BIT(id);
+ if (xe_vma_is_cpu_addr_mirror(vma)) {
+ tile_mask |= xe_svm_ranges_zap_ptes_in_range(vm,
+ xe_vma_start(vma),
+ xe_vma_end(vma));
+ } else {
+ for_each_tile(tile, vm->xe, id) {
+ if (xe_pt_zap_ptes(tile, vma)) {
+ tile_mask |= BIT(id);
/*
* WRITE_ONCE pairs with READ_ONCE
@@ -189,6 +209,7 @@ static u8 xe_zap_ptes_in_madvise_range(struct xe_vm *vm, u64 start, u64 end)
*/
WRITE_ONCE(vma->tile_invalidated,
vma->tile_invalidated | BIT(id));
+ }
}
}
}
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index cd94d8b5819d..81d92d886578 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -157,6 +157,12 @@ struct xe_vma {
/** @tile_staged: bind is staged for this VMA */
u8 tile_staged;
+ /**
+ * @skip_invalidation: Used in madvise to avoid invalidation
+ * if mem attributes doesn't change
+ */
+ bool skip_invalidation;
+
/**
* @ufence: The user fence that was provided with MAP.
* Needs to be signalled before UNMAP can be processed.
--
2.34.1
^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH v6 23/26] drm/xe/vm: Add helper to check for default VMA memory attributes
2025-08-07 16:43 [PATCH v6 00/26] MADVISE FOR XE Himal Prasad Ghimiray
` (21 preceding siblings ...)
2025-08-07 16:43 ` [PATCH v6 22/26] drm/xe/madvise: Skip vma invalidation if mem attr are unchanged Himal Prasad Ghimiray
@ 2025-08-07 16:43 ` Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 24/26] drm/xe: Reset VMA attributes to default in SVM garbage collector Himal Prasad Ghimiray
` (7 subsequent siblings)
30 siblings, 0 replies; 51+ messages in thread
From: Himal Prasad Ghimiray @ 2025-08-07 16:43 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
Introduce a new helper function `xe_vma_has_default_mem_attrs()` to
determine whether a VMA's memory attributes are set to their default
values. This includes checks for atomic access, PAT index, and preferred
location.
Also, add a new field `default_pat_index` to `struct xe_vma_mem_attr`
to track the initial PAT index set during the first bind. This helps
distinguish between the default and a user-modified PAT index, such as
one changed via madvise.
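For illustration, the intended usage pattern (the next patch applies it
in the SVM garbage collector; default_attr here is a hypothetical,
locally built struct xe_vma_mem_attr holding the default values):

	/* Restore defaults only when madvise actually changed something */
	if (!xe_vma_has_default_mem_attrs(vma)) {
		default_attr.pat_index = vma->attr.default_pat_index;
		default_attr.default_pat_index = vma->attr.default_pat_index;
		vma->attr = default_attr;
	}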
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_vm.c | 24 ++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_vm.h | 2 ++
drivers/gpu/drm/xe/xe_vm_types.h | 6 ++++++
3 files changed, 32 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 8eac7a4ab22f..c4d1c3e1c974 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2598,6 +2598,29 @@ static int xe_vma_op_commit(struct xe_vm *vm, struct xe_vma_op *op)
return err;
}
+/**
+ * xe_vma_has_default_mem_attrs - Check if a VMA has default memory attributes
+ * @vma: Pointer to the xe_vma structure to check
+ *
+ * This function determines whether the given VMA (Virtual Memory Area)
+ * has its memory attributes set to their default values. Specifically,
+ * it checks the following conditions:
+ *
+ * - `atomic_access` is `DRM_XE_VMA_ATOMIC_UNDEFINED`
+ * - `pat_index` is equal to `default_pat_index`
+ * - `preferred_loc.devmem_fd` is `DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE`
+ * - `preferred_loc.migration_policy` is `DRM_XE_MIGRATE_ALL_PAGES`
+ *
+ * Return: true if all attributes are at their default values, false otherwise.
+ */
+bool xe_vma_has_default_mem_attrs(struct xe_vma *vma)
+{
+ return (vma->attr.atomic_access == DRM_XE_ATOMIC_UNDEFINED &&
+ vma->attr.pat_index == vma->attr.default_pat_index &&
+ vma->attr.preferred_loc.devmem_fd == DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE &&
+ vma->attr.preferred_loc.migration_policy == DRM_XE_MIGRATE_ALL_PAGES);
+}
+
static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
struct xe_vma_ops *vops)
{
@@ -2630,6 +2653,7 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
.migration_policy = DRM_XE_MIGRATE_ALL_PAGES,
},
.atomic_access = DRM_XE_ATOMIC_UNDEFINED,
+ .default_pat_index = op->map.pat_index,
.pat_index = op->map.pat_index,
};
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 05ac3118d9f4..f735d994806d 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -66,6 +66,8 @@ static inline bool xe_vm_is_closed_or_banned(struct xe_vm *vm)
struct xe_vma *
xe_vm_find_overlapping_vma(struct xe_vm *vm, u64 start, u64 range);
+bool xe_vma_has_default_mem_attrs(struct xe_vma *vma);
+
/**
* xe_vm_has_scratch() - Whether the vm is configured for scratch PTEs
* @vm: The vm
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 81d92d886578..351242c92c12 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -103,8 +103,14 @@ struct xe_vma_mem_attr {
*/
u32 atomic_access;
+ /**
+ * @default_pat_index: The pat index for VMA set during first bind by user.
+ */
+ u16 default_pat_index;
+
/**
* @pat_index: The pat index to use when encoding the PTEs for this vma.
+ * same as default_pat_index unless overwritten by madvise.
*/
u16 pat_index;
};
--
2.34.1
^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH v6 24/26] drm/xe: Reset VMA attributes to default in SVM garbage collector
2025-08-07 16:43 [PATCH v6 00/26] MADVISE FOR XE Himal Prasad Ghimiray
` (22 preceding siblings ...)
2025-08-07 16:43 ` [PATCH v6 23/26] drm/xe/vm: Add helper to check for default VMA memory attributes Himal Prasad Ghimiray
@ 2025-08-07 16:43 ` Himal Prasad Ghimiray
2025-08-07 17:13 ` Matthew Brost
2025-08-07 16:43 ` [PATCH v6 25/26] drm/xe: Enable madvise ioctl for xe Himal Prasad Ghimiray
` (6 subsequent siblings)
30 siblings, 1 reply; 51+ messages in thread
From: Himal Prasad Ghimiray @ 2025-08-07 16:43 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
Restore default memory attributes for VMAs during garbage collection
if they were modified by madvise. Reuse the existing VMA if it fully
overlaps the range; otherwise, allocate a new mirror VMA.
v2 (Matthew Brost)
- Add helper for vma split
- Add retry to get updated vma
v3
- Rebase on gpuvm layer
Suggested-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
drivers/gpu/drm/xe/xe_svm.c | 82 +++++++++++++++++--
drivers/gpu/drm/xe/xe_vm.c | 155 ++++++++++++++++++++++++++----------
drivers/gpu/drm/xe/xe_vm.h | 2 +
3 files changed, 190 insertions(+), 49 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 19585a3d9f69..5163634d003e 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -253,10 +253,56 @@ static int __xe_svm_garbage_collector(struct xe_vm *vm,
return 0;
}
+static int xe_svm_range_set_default_attr(struct xe_vm *vm, u64 range_start, u64 range_end)
+{
+ struct xe_vma *vma;
+ struct xe_vma_mem_attr default_attr = {
+ .preferred_loc = {
+ .devmem_fd = DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE,
+ .migration_policy = DRM_XE_MIGRATE_ALL_PAGES,
+ },
+ .atomic_access = DRM_XE_ATOMIC_UNDEFINED,
+ };
+ int err = 0;
+
+ vma = xe_vm_find_vma_by_addr(vm, range_start);
+ if (!vma)
+ return -EINVAL;
+
+ if (xe_vma_has_default_mem_attrs(vma))
+ return 0;
+
+ vm_dbg(&vm->xe->drm, "Existing VMA start=0x%016llx, vma_end=0x%016llx",
+ xe_vma_start(vma), xe_vma_end(vma));
+
+ if (xe_vma_start(vma) == range_start && xe_vma_end(vma) == range_end) {
+ default_attr.pat_index = vma->attr.default_pat_index;
+ default_attr.default_pat_index = vma->attr.default_pat_index;
+ vma->attr = default_attr;
+ } else {
+ vm_dbg(&vm->xe->drm, "Split VMA start=0x%016llx, vma_end=0x%016llx",
+ range_start, range_end);
+ err = xe_vm_alloc_cpu_addr_mirror_vma(vm, range_start, range_end - range_start);
+ if (err) {
+ drm_warn(&vm->xe->drm, "VMA SPLIT failed: %pe\n", ERR_PTR(err));
+ xe_vm_kill(vm, true);
+ return err;
+ }
+ }
+
+ /*
+ * On call from xe_svm_handle_pagefault original VMA might be changed
+ * signal this to lookup for VMA again.
+ */
+ return -EAGAIN;
+}
+
static int xe_svm_garbage_collector(struct xe_vm *vm)
{
struct xe_svm_range *range;
- int err;
+ u64 range_start;
+ u64 range_end;
+ int err, ret = 0;
lockdep_assert_held_write(&vm->lock);
@@ -271,6 +317,9 @@ static int xe_svm_garbage_collector(struct xe_vm *vm)
if (!range)
break;
+ range_start = xe_svm_range_start(range);
+ range_end = xe_svm_range_end(range);
+
list_del(&range->garbage_collector_link);
spin_unlock(&vm->svm.garbage_collector.lock);
@@ -283,11 +332,19 @@ static int xe_svm_garbage_collector(struct xe_vm *vm)
return err;
}
+ err = xe_svm_range_set_default_attr(vm, range_start, range_end);
+ if (err) {
+ if (err == -EAGAIN)
+ ret = -EAGAIN;
+ else
+ return err;
+ }
+
spin_lock(&vm->svm.garbage_collector.lock);
}
spin_unlock(&vm->svm.garbage_collector.lock);
- return 0;
+ return ret;
}
static void xe_svm_garbage_collector_work_func(struct work_struct *w)
@@ -927,13 +984,28 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
struct xe_gt *gt, u64 fault_addr,
bool atomic)
{
- int need_vram;
-
+ bool is_retry = false;
+ int need_vram, ret;
+retry:
need_vram = xe_vma_need_vram_for_atomic(vm->xe, vma, atomic);
if (need_vram < 0)
return need_vram;
- return __xe_svm_handle_pagefault(vm, vma, gt, fault_addr, need_vram ? true : false);
+ ret = __xe_svm_handle_pagefault(vm, vma, gt, fault_addr,
+ need_vram ? true : false);
+ if (ret == -EAGAIN && !is_retry) {
+ /*
+ * Retry once on -EAGAIN to re-lookup the VMA, as the original VMA
+ * may have been split by xe_svm_range_set_default_attr.
+ */
+ vma = xe_vm_find_vma_by_addr(vm, fault_addr);
+ if (!vma)
+ return -EINVAL;
+
+ is_retry = true;
+ goto retry;
+ }
+ return ret;
}
/**
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index c4d1c3e1c974..6d1ae88313e9 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -4263,36 +4263,24 @@ int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool i
}
}
-/**
- * xe_vm_alloc_madvise_vma - Allocate VMA's with madvise ops
- * @vm: Pointer to the xe_vm structure
- * @start: Starting input address
- * @range: Size of the input range
- *
- * This function splits existing vma to create new vma for user provided input range
- *
- * Return: 0 if success
- */
-int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
+static int xe_vm_alloc_vma(struct xe_vm *vm, struct drm_gpuvm_map_req *map_req)
{
- struct drm_gpuvm_map_req map_req = {
- .op_map.va.addr = start,
- .op_map.va.range = range,
- .flags = DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE,
- };
-
struct xe_vma_ops vops;
struct drm_gpuva_ops *ops = NULL;
struct drm_gpuva_op *__op;
bool is_cpu_addr_mirror = false;
bool remap_op = false;
+ bool is_madvise = (map_req->flags & DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE);
struct xe_vma_mem_attr tmp_attr;
+ u16 default_pat;
int err;
lockdep_assert_held_write(&vm->lock);
- vm_dbg(&vm->xe->drm, "MADVISE_OPS_CREATE: addr=0x%016llx, size=0x%016llx", start, range);
- ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, &map_req);
+ vm_dbg(&vm->xe->drm, "MADVISE_OPS_CREATE: addr=0x%016llx, size=0x%016llx",
+ map_req->op_map.va.addr, map_req->op_map.va.range);
+
+ ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, map_req);
if (IS_ERR(ops))
return PTR_ERR(ops);
@@ -4303,33 +4291,56 @@ int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
drm_gpuva_for_each_op(__op, ops) {
struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
+ struct xe_vma *vma = NULL;
- if (__op->op == DRM_GPUVA_OP_REMAP) {
- xe_assert(vm->xe, !remap_op);
- remap_op = true;
+ if (!is_madvise) {
+ if (__op->op == DRM_GPUVA_OP_UNMAP) {
+ vma = gpuva_to_vma(op->base.unmap.va);
+ XE_WARN_ON(!xe_vma_has_default_mem_attrs(vma));
+ default_pat = vma->attr.default_pat_index;
+ }
- if (xe_vma_is_cpu_addr_mirror(gpuva_to_vma(op->base.remap.unmap->va)))
- is_cpu_addr_mirror = true;
- else
- is_cpu_addr_mirror = false;
- }
+ if (__op->op == DRM_GPUVA_OP_REMAP) {
+ vma = gpuva_to_vma(op->base.remap.unmap->va);
+ default_pat = vma->attr.default_pat_index;
+ }
- if (__op->op == DRM_GPUVA_OP_MAP) {
- xe_assert(vm->xe, remap_op);
- remap_op = false;
+ if (__op->op == DRM_GPUVA_OP_MAP) {
+ op->map.is_cpu_addr_mirror = true;
+ op->map.pat_index = default_pat;
+ }
+ } else {
+ if (__op->op == DRM_GPUVA_OP_REMAP) {
+ vma = gpuva_to_vma(op->base.remap.unmap->va);
+ xe_assert(vm->xe, !remap_op);
+ remap_op = true;
- /* In case of madvise ops DRM_GPUVA_OP_MAP is always after
- * DRM_GPUVA_OP_REMAP, so ensure we assign op->map.is_cpu_addr_mirror true
- * if REMAP is for xe_vma_is_cpu_addr_mirror vma
- */
- op->map.is_cpu_addr_mirror = is_cpu_addr_mirror;
- }
+ if (xe_vma_is_cpu_addr_mirror(vma))
+ is_cpu_addr_mirror = true;
+ else
+ is_cpu_addr_mirror = false;
+ }
+ if (__op->op == DRM_GPUVA_OP_MAP) {
+ xe_assert(vm->xe, remap_op);
+ remap_op = false;
+ /*
+ * In case of madvise ops DRM_GPUVA_OP_MAP is
+ * always after DRM_GPUVA_OP_REMAP, so ensure
+ * we assign op->map.is_cpu_addr_mirror true
+ * if REMAP is for xe_vma_is_cpu_addr_mirror vma
+ */
+ op->map.is_cpu_addr_mirror = is_cpu_addr_mirror;
+ }
+ }
print_op(vm->xe, __op);
}
xe_vma_ops_init(&vops, vm, NULL, NULL, 0);
- vops.flags |= XE_VMA_OPS_FLAG_MADVISE;
+
+ if (is_madvise)
+ vops.flags |= XE_VMA_OPS_FLAG_MADVISE;
+
err = vm_bind_ioctl_ops_parse(vm, ops, &vops);
if (err)
goto unwind_ops;
@@ -4341,15 +4352,20 @@ int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
struct xe_vma *vma;
if (__op->op == DRM_GPUVA_OP_UNMAP) {
- /* There should be no unmap */
- XE_WARN_ON("UNEXPECTED UNMAP");
- xe_vma_destroy(gpuva_to_vma(op->base.unmap.va), NULL);
+ vma = gpuva_to_vma(op->base.unmap.va);
+ /* There should be no unmap for madvise */
+ if (is_madvise)
+ XE_WARN_ON("UNEXPECTED UNMAP");
+
+ xe_vma_destroy(vma, NULL);
} else if (__op->op == DRM_GPUVA_OP_REMAP) {
vma = gpuva_to_vma(op->base.remap.unmap->va);
- /* Store attributes for REMAP UNMAPPED VMA, so they can be assigned
- * to newly MAP created vma.
+ /* In case of madvise ops Store attributes for REMAP UNMAPPED
+ * VMA, so they can be assigned to newly MAP created vma.
*/
- tmp_attr = vma->attr;
+ if (is_madvise)
+ tmp_attr = vma->attr;
+
xe_vma_destroy(gpuva_to_vma(op->base.remap.unmap->va), NULL);
} else if (__op->op == DRM_GPUVA_OP_MAP) {
vma = op->map.vma;
@@ -4357,7 +4373,8 @@ int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
* Therefore temp_attr will always have sane values, making it safe to
* copy them to new vma.
*/
- vma->attr = tmp_attr;
+ if (is_madvise)
+ vma->attr = tmp_attr;
}
}
@@ -4371,3 +4388,53 @@ int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
drm_gpuva_ops_free(&vm->gpuvm, ops);
return err;
}
+
+/**
+ * xe_vm_alloc_madvise_vma - Allocate VMA's with madvise ops
+ * @vm: Pointer to the xe_vm structure
+ * @start: Starting input address
+ * @range: Size of the input range
+ *
+ * This function splits existing vma to create new vma for user provided input range
+ *
+ * Return: 0 if success
+ */
+int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
+{
+ struct drm_gpuvm_map_req map_req = {
+ .op_map.va.addr = start,
+ .op_map.va.range = range,
+ .flags = DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE,
+ };
+
+ lockdep_assert_held_write(&vm->lock);
+
+ vm_dbg(&vm->xe->drm, "MADVISE_OPS_CREATE: addr=0x%016llx, size=0x%016llx", start, range);
+
+ return xe_vm_alloc_vma(vm, &map_req);
+}
+
+/**
+ * xe_vm_alloc_cpu_addr_mirror_vma - Allocate CPU addr mirror vma
+ * @vm: Pointer to the xe_vm structure
+ * @start: Starting input address
+ * @range: Size of the input range
+ *
+ * This function splits/merges existing vma to create new vma for user provided input range
+ *
+ * Return: 0 if success
+ */
+int xe_vm_alloc_cpu_addr_mirror_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
+{
+ struct drm_gpuvm_map_req map_req = {
+ .op_map.va.addr = start,
+ .op_map.va.range = range,
+ };
+
+ lockdep_assert_held_write(&vm->lock);
+
+ vm_dbg(&vm->xe->drm, "CPU_ADDR_MIRROR_VMA_OPS_CREATE: addr=0x%016llx, size=0x%016llx",
+ start, range);
+
+ return xe_vm_alloc_vma(vm, &map_req);
+}
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index f735d994806d..6538cddf158b 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -177,6 +177,8 @@ int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool i
int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t addr, uint64_t size);
+int xe_vm_alloc_cpu_addr_mirror_vma(struct xe_vm *vm, uint64_t addr, uint64_t size);
+
/**
* to_userptr_vma() - Return a pointer to an embedding userptr vma
* @vma: Pointer to the embedded struct xe_vma
--
2.34.1
^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH v6 25/26] drm/xe: Enable madvise ioctl for xe
2025-08-07 16:43 [PATCH v6 00/26] MADVISE FOR XE Himal Prasad Ghimiray
` (23 preceding siblings ...)
2025-08-07 16:43 ` [PATCH v6 24/26] drm/xe: Reset VMA attributes to default in SVM garbage collector Himal Prasad Ghimiray
@ 2025-08-07 16:43 ` Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 26/26] drm/xe/uapi: Add UAPI for querying VMA count and memory attributes Himal Prasad Ghimiray
` (5 subsequent siblings)
30 siblings, 0 replies; 51+ messages in thread
From: Himal Prasad Ghimiray @ 2025-08-07 16:43 UTC (permalink / raw)
To: intel-xe; +Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray
The ioctl enables setting memory attributes on a user-provided range.
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_device.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 57edbc63da6f..09b28fb9eb41 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -63,6 +63,7 @@
#include "xe_ttm_stolen_mgr.h"
#include "xe_ttm_sys_mgr.h"
#include "xe_vm.h"
+#include "xe_vm_madvise.h"
#include "xe_vram.h"
#include "xe_vram_types.h"
#include "xe_vsec.h"
@@ -201,6 +202,7 @@ static const struct drm_ioctl_desc xe_ioctls[] = {
DRM_IOCTL_DEF_DRV(XE_WAIT_USER_FENCE, xe_wait_user_fence_ioctl,
DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(XE_OBSERVATION, xe_observation_ioctl, DRM_RENDER_ALLOW),
+ DRM_IOCTL_DEF_DRV(XE_MADVISE, xe_vm_madvise_ioctl, DRM_RENDER_ALLOW),
};
static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
--
2.34.1
^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH v6 26/26] drm/xe/uapi: Add UAPI for querying VMA count and memory attributes
2025-08-07 16:43 [PATCH v6 00/26] MADVISE FOR XE Himal Prasad Ghimiray
` (24 preceding siblings ...)
2025-08-07 16:43 ` [PATCH v6 25/26] drm/xe: Enable madvise ioctl for xe Himal Prasad Ghimiray
@ 2025-08-07 16:43 ` Himal Prasad Ghimiray
2025-08-07 18:02 ` ✗ CI.checkpatch: warning for MADVISE FOR XE (rev6) Patchwork
` (4 subsequent siblings)
30 siblings, 0 replies; 51+ messages in thread
From: Himal Prasad Ghimiray @ 2025-08-07 16:43 UTC (permalink / raw)
To: intel-xe
Cc: Matthew Brost, Thomas Hellström, Himal Prasad Ghimiray,
Shuicheng Lin
Introduce the DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS ioctl to allow
userspace to query memory attributes of VMAs within a user-specified
virtual address range.
Userspace first calls the ioctl with num_mem_ranges = 0,
sizeof_mem_range_attr = 0 and vector_of_mem_attr = NULL to retrieve
the number of memory ranges (VMAs) and the size of each memory range
attribute struct. Then, it allocates a buffer of that size and calls
the ioctl again to fill the buffer with memory range attributes.
This two-step interface allows userspace to first query the required
buffer size, then retrieve detailed attributes efficiently.
v2 (Matthew Brost)
- Use same ioctl to overload functionality
v3
- Add kernel-doc
v4
- Make uapi future proof by passing struct size (Matthew Brost)
- make lock interruptible (Matthew Brost)
- set reserved bits to zero (Matthew Brost)
- s/__copy_to_user/copy_to_user (Matthew Brost)
- Avoid using VMA term in uapi (Thomas)
- xe_vm_put(vm) is missing (Shuicheng)
v5
- Nits
- Fix kernel-doc
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Shuicheng Lin <shuicheng.lin@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_device.c | 2 +
drivers/gpu/drm/xe/xe_vm.c | 102 ++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_vm.h | 2 +-
include/uapi/drm/xe_drm.h | 139 +++++++++++++++++++++++++++++++++
4 files changed, 244 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 09b28fb9eb41..e3bb03dec180 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -203,6 +203,8 @@ static const struct drm_ioctl_desc xe_ioctls[] = {
DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(XE_OBSERVATION, xe_observation_ioctl, DRM_RENDER_ALLOW),
DRM_IOCTL_DEF_DRV(XE_MADVISE, xe_vm_madvise_ioctl, DRM_RENDER_ALLOW),
+ DRM_IOCTL_DEF_DRV(XE_VM_QUERY_MEM_RANGE_ATTRS, xe_vm_query_vmas_attrs_ioctl,
+ DRM_RENDER_ALLOW),
};
static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 6d1ae88313e9..364470212c21 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2171,6 +2171,108 @@ int xe_vm_destroy_ioctl(struct drm_device *dev, void *data,
return err;
}
+static int xe_vm_query_vmas(struct xe_vm *vm, u64 start, u64 end)
+{
+ struct drm_gpuva *gpuva;
+ u32 num_vmas = 0;
+
+ lockdep_assert_held(&vm->lock);
+ drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, start, end)
+ num_vmas++;
+
+ return num_vmas;
+}
+
+static int get_mem_attrs(struct xe_vm *vm, u32 *num_vmas, u64 start,
+ u64 end, struct drm_xe_mem_range_attr *attrs)
+{
+ struct drm_gpuva *gpuva;
+ int i = 0;
+
+ lockdep_assert_held(&vm->lock);
+
+ drm_gpuvm_for_each_va_range(gpuva, &vm->gpuvm, start, end) {
+ struct xe_vma *vma = gpuva_to_vma(gpuva);
+
+ if (i == *num_vmas)
+ return -ENOSPC;
+
+ attrs[i].start = xe_vma_start(vma);
+ attrs[i].end = xe_vma_end(vma);
+ attrs[i].atomic.val = vma->attr.atomic_access;
+ attrs[i].pat_index.val = vma->attr.pat_index;
+ attrs[i].preferred_mem_loc.devmem_fd = vma->attr.preferred_loc.devmem_fd;
+ attrs[i].preferred_mem_loc.migration_policy =
+ vma->attr.preferred_loc.migration_policy;
+
+ i++;
+ }
+
+ *num_vmas = i;
+ return 0;
+}
+
+int xe_vm_query_vmas_attrs_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+{
+ struct xe_device *xe = to_xe_device(dev);
+ struct xe_file *xef = to_xe_file(file);
+ struct drm_xe_mem_range_attr *mem_attrs;
+ struct drm_xe_vm_query_mem_range_attr *args = data;
+ u64 __user *attrs_user = u64_to_user_ptr(args->vector_of_mem_attr);
+ struct xe_vm *vm;
+ int err = 0;
+
+ if (XE_IOCTL_DBG(xe,
+ ((args->num_mem_ranges == 0 &&
+ (attrs_user || args->sizeof_mem_range_attr != 0)) ||
+ (args->num_mem_ranges > 0 &&
+ (!attrs_user ||
+ args->sizeof_mem_range_attr !=
+ sizeof(struct drm_xe_mem_range_attr))))))
+ return -EINVAL;
+
+ vm = xe_vm_lookup(xef, args->vm_id);
+ if (XE_IOCTL_DBG(xe, !vm))
+ return -EINVAL;
+
+ err = down_read_interruptible(&vm->lock);
+ if (err)
+ goto put_vm;
+
+ attrs_user = u64_to_user_ptr(args->vector_of_mem_attr);
+
+ if (args->num_mem_ranges == 0 && !attrs_user) {
+ args->num_mem_ranges = xe_vm_query_vmas(vm, args->start, args->start + args->range);
+ args->sizeof_mem_range_attr = sizeof(struct drm_xe_mem_range_attr);
+ goto unlock_vm;
+ }
+
+ mem_attrs = kvmalloc_array(args->num_mem_ranges, args->sizeof_mem_range_attr,
+ GFP_KERNEL | __GFP_ACCOUNT |
+ __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
+ if (!mem_attrs) {
+ err = args->num_mem_ranges > 1 ? -ENOBUFS : -ENOMEM;
+ goto unlock_vm;
+ }
+
+ memset(mem_attrs, 0, args->num_mem_ranges * args->sizeof_mem_range_attr);
+ err = get_mem_attrs(vm, &args->num_mem_ranges, args->start,
+ args->start + args->range, mem_attrs);
+ if (err)
+ goto free_mem_attrs;
+
+ if (copy_to_user(attrs_user, mem_attrs,
+ args->sizeof_mem_range_attr * args->num_mem_ranges))
+ err = -EFAULT;
+
+free_mem_attrs:
+ kvfree(mem_attrs);
+unlock_vm:
+ up_read(&vm->lock);
+put_vm:
+ xe_vm_put(vm);
+ return err;
+}
+
static bool vma_matches(struct xe_vma *vma, u64 page_addr)
{
if (page_addr > xe_vma_end(vma) - 1 ||
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 6538cddf158b..3953b3ee2955 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -199,7 +199,7 @@ int xe_vm_destroy_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
int xe_vm_bind_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
-
+int xe_vm_query_vmas_attrs_ioctl(struct drm_device *dev, void *data, struct drm_file *file);
void xe_vm_close_and_put(struct xe_vm *vm);
static inline bool xe_vm_in_fault_mode(struct xe_vm *vm)
diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
index 115b9bca2a25..eaf713706387 100644
--- a/include/uapi/drm/xe_drm.h
+++ b/include/uapi/drm/xe_drm.h
@@ -82,6 +82,7 @@ extern "C" {
* - &DRM_IOCTL_XE_WAIT_USER_FENCE
* - &DRM_IOCTL_XE_OBSERVATION
* - &DRM_IOCTL_XE_MADVISE
+ * - &DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS
*/
/*
@@ -104,6 +105,7 @@ extern "C" {
#define DRM_XE_WAIT_USER_FENCE 0x0a
#define DRM_XE_OBSERVATION 0x0b
#define DRM_XE_MADVISE 0x0c
+#define DRM_XE_VM_QUERY_MEM_RANGE_ATTRS 0x0d
/* Must be kept compact -- no holes */
@@ -120,6 +122,7 @@ extern "C" {
#define DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
#define DRM_IOCTL_XE_OBSERVATION DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
#define DRM_IOCTL_XE_MADVISE DRM_IOW(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
+#define DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_MEM_RANGE_ATTRS, struct drm_xe_vm_query_mem_range_attr)
/**
* DOC: Xe IOCTL Extensions
@@ -2113,6 +2116,142 @@ struct drm_xe_madvise {
__u64 reserved[2];
};
+/**
+ * struct drm_xe_mem_range_attr - Output of &DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS
+ *
+ * This structure is provided by userspace and filled by KMD in response to the
+ * DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS ioctl. It describes the memory
+ * attributes of memory ranges within a user-specified address range in a VM.
+ *
+ * The structure includes information such as atomic access policy,
+ * page attribute table (PAT) index, and preferred memory location.
+ * Userspace allocates an array of these structures and passes a pointer to the
+ * ioctl to retrieve attributes for each memory range.
+ *
+ * @extensions: Pointer to the first extension struct, if any
+ * @start: Start address of the memory range
+ * @end: End address of the memory range
+ *
+ */
+struct drm_xe_mem_range_attr {
+ /** @extensions: Pointer to the first extension struct, if any */
+ __u64 extensions;
+
+ /** @start: start of the memory range */
+ __u64 start;
+
+ /** @end: end of the memory range */
+ __u64 end;
+
+ /** @preferred_mem_loc: preferred memory location */
+ struct {
+ /** @preferred_mem_loc.devmem_fd: fd for preferred loc */
+ __u32 devmem_fd;
+
+ /** @preferred_mem_loc.migration_policy: Page migration policy */
+ __u32 migration_policy;
+ } preferred_mem_loc;
+
+ /** @atomic: Atomic access policy */
+ struct {
+ /** @atomic.val: atomic attribute */
+ __u32 val;
+
+ /** @atomic.reserved: Reserved */
+ __u32 reserved;
+ } atomic;
+
+ /** @pat_index: Page attribute table index */
+ struct {
+ /** @pat_index.val: PAT index */
+ __u32 val;
+
+ /** @pat_index.reserved: Reserved */
+ __u32 reserved;
+ } pat_index;
+
+ /** @reserved: Reserved */
+ __u64 reserved[2];
+};
+
+/**
+ * struct drm_xe_vm_query_mem_range_attr - Input of &DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS
+ *
+ * This structure is used to query memory attributes of memory regions
+ * within a user-specified address range in a VM. It provides detailed
+ * information about each memory range, including atomic access policy,
+ * page attribute table (PAT) index, and preferred memory location.
+ *
+ * Userspace first calls the ioctl with @num_mem_ranges = 0,
+ * @sizeof_mem_range_attr = 0 and @vector_of_mem_attr = NULL to retrieve
+ * the number of memory ranges and the size of each memory range attribute.
+ * Then, it allocates a buffer of that size and calls the ioctl again to fill
+ * the buffer with memory range attributes.
+ *
+ * If the second call fails with -ENOSPC, the memory ranges changed between
+ * the two calls; retry with @num_mem_ranges = 0, @sizeof_mem_range_attr = 0
+ * and @vector_of_mem_attr = NULL to re-query the sizes, then repeat the
+ * second call.
+ *
+ * Example:
+ *
+ * .. code-block:: C
+ *
+ * struct drm_xe_vm_query_mem_range_attr query = {
+ * .vm_id = vm_id,
+ * .start = 0x100000,
+ * .range = 0x2000,
+ * };
+ *
+ * // First ioctl call to get num of mem regions and sizeof each attribute
+ * ioctl(fd, DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS, &query);
+ *
+ * // Allocate buffer for the memory region attributes
+ * void *ptr = malloc(query.num_mem_ranges * query.sizeof_mem_range_attr);
+ *
+ * query.vector_of_mem_attr = (uintptr_t)ptr;
+ *
+ * // Second ioctl call to actually fill the memory attributes
+ * ioctl(fd, DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS, &query);
+ *
+ * // Iterate over the returned memory range attributes using a separate
+ * // cursor, so the base pointer stays valid for free() and the pointer
+ * // arithmetic is well-defined
+ * char *cur = ptr;
+ * for (unsigned int i = 0; i < query.num_mem_ranges; ++i) {
+ *     struct drm_xe_mem_range_attr *attr = (struct drm_xe_mem_range_attr *)cur;
+ *
+ *     // Do something with attr
+ *
+ *     // Move cursor by one entry
+ *     cur += query.sizeof_mem_range_attr;
+ * }
+ *
+ * free(ptr);
+ */
+struct drm_xe_vm_query_mem_range_attr {
+ /** @extensions: Pointer to the first extension struct, if any */
+ __u64 extensions;
+
+ /** @vm_id: vm_id of the virtual range */
+ __u32 vm_id;
+
+ /** @num_mem_ranges: number of mem_ranges in range */
+ __u32 num_mem_ranges;
+
+ /** @start: start of the virtual address range */
+ __u64 start;
+
+ /** @range: size of the virtual address range */
+ __u64 range;
+
+ /** @sizeof_mem_range_attr: size of struct drm_xe_mem_range_attr */
+ __u64 sizeof_mem_range_attr;
+
+ /** @vector_of_mem_attr: userptr to array of struct drm_xe_mem_range_attr */
+ __u64 vector_of_mem_attr;
+
+ /** @reserved: Reserved */
+ __u64 reserved[2];
+
+};
+
#if defined(__cplusplus)
}
#endif
--
2.34.1
^ permalink raw reply related [flat|nested] 51+ messages in thread
* Re: [PATCH v6 24/26] drm/xe: Reset VMA attributes to default in SVM garbage collector
2025-08-07 16:43 ` [PATCH v6 24/26] drm/xe: Reset VMA attributes to default in SVM garbage collector Himal Prasad Ghimiray
@ 2025-08-07 17:13 ` Matthew Brost
0 siblings, 0 replies; 51+ messages in thread
From: Matthew Brost @ 2025-08-07 17:13 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, Thomas Hellström
On Thu, Aug 07, 2025 at 10:13:36PM +0530, Himal Prasad Ghimiray wrote:
> Restore default memory attributes for VMAs during garbage collection
> if they were modified by madvise. Reuse the existing VMA if it fully
> overlaps the range; otherwise, allocate a new mirror VMA.
>
> v2 (Matthew Brost)
> - Add helper for vma split
> - Add retry to get updated vma
>
> v3
> - Rebase on gpuvm layer
>
> Suggested-by: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_svm.c | 82 +++++++++++++++++--
> drivers/gpu/drm/xe/xe_vm.c | 155 ++++++++++++++++++++++++++----------
> drivers/gpu/drm/xe/xe_vm.h | 2 +
> 3 files changed, 190 insertions(+), 49 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index 19585a3d9f69..5163634d003e 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -253,10 +253,56 @@ static int __xe_svm_garbage_collector(struct xe_vm *vm,
> return 0;
> }
>
> +static int xe_svm_range_set_default_attr(struct xe_vm *vm, u64 range_start, u64 range_end)
> +{
> + struct xe_vma *vma;
> + struct xe_vma_mem_attr default_attr = {
> + .preferred_loc = {
> + .devmem_fd = DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE,
> + .migration_policy = DRM_XE_MIGRATE_ALL_PAGES,
> + },
> + .atomic_access = DRM_XE_ATOMIC_UNDEFINED,
> + };
> + int err = 0;
> +
> + vma = xe_vm_find_vma_by_addr(vm, range_start);
> + if (!vma)
> + return -EINVAL;
> +
> + if (xe_vma_has_default_mem_attrs(vma))
> + return 0;
> +
> + vm_dbg(&vm->xe->drm, "Existing VMA start=0x%016llx, vma_end=0x%016llx",
> + xe_vma_start(vma), xe_vma_end(vma));
> +
> + if (xe_vma_start(vma) == range_start && xe_vma_end(vma) == range_end) {
> + default_attr.pat_index = vma->attr.default_pat_index;
> + default_attr.default_pat_index = vma->attr.default_pat_index;
> + vma->attr = default_attr;
> + } else {
> + vm_dbg(&vm->xe->drm, "Split VMA start=0x%016llx, vma_end=0x%016llx",
> + range_start, range_end);
> + err = xe_vm_alloc_cpu_addr_mirror_vma(vm, range_start, range_end - range_start);
> + if (err) {
> + drm_warn(&vm->xe->drm, "VMA SPLIT failed: %pe\n", ERR_PTR(err));
> + xe_vm_kill(vm, true);
> + return err;
> + }
> + }
> +
> + /*
> + * On call from xe_svm_handle_pagefault original VMA might be changed
> + * signal this to lookup for VMA again.
> + */
> + return -EAGAIN;
> +}
> +
> static int xe_svm_garbage_collector(struct xe_vm *vm)
> {
> struct xe_svm_range *range;
> - int err;
> + u64 range_start;
> + u64 range_end;
> + int err, ret = 0;
>
> lockdep_assert_held_write(&vm->lock);
>
> @@ -271,6 +317,9 @@ static int xe_svm_garbage_collector(struct xe_vm *vm)
> if (!range)
> break;
>
> + range_start = xe_svm_range_start(range);
> + range_end = xe_svm_range_end(range);
> +
> list_del(&range->garbage_collector_link);
> spin_unlock(&vm->svm.garbage_collector.lock);
>
> @@ -283,11 +332,19 @@ static int xe_svm_garbage_collector(struct xe_vm *vm)
> return err;
> }
>
> + err = xe_svm_range_set_default_attr(vm, range_start, range_end);
> + if (err) {
> + if (err == -EAGAIN)
> + ret = -EAGAIN;
> + else
> + return err;
> + }
> +
> spin_lock(&vm->svm.garbage_collector.lock);
> }
> spin_unlock(&vm->svm.garbage_collector.lock);
>
> - return 0;
> + return ret;
> }
>
> static void xe_svm_garbage_collector_work_func(struct work_struct *w)
> @@ -927,13 +984,28 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
> struct xe_gt *gt, u64 fault_addr,
> bool atomic)
> {
> - int need_vram;
> -
> + bool is_retry = false;
> + int need_vram, ret;
> +retry:
> need_vram = xe_vma_need_vram_for_atomic(vm->xe, vma, atomic);
> if (need_vram < 0)
> return need_vram;
>
> - return __xe_svm_handle_pagefault(vm, vma, gt, fault_addr, need_vram ? true : false);
> + ret = __xe_svm_handle_pagefault(vm, vma, gt, fault_addr,
> + need_vram ? true : false);
> + if (ret == -EAGAIN && !is_retry) {
I think you can get -EAGAIN multiple times and this is perfectly valid.
Unmaps are fully async to this - the main part of
__xe_svm_handle_pagefault is essentially just a big retry loop, and this
is a further extension of that.
With this nit fixed:
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> + /*
> + * Retry once on -EAGAIN to re-lookup the VMA, as the original VMA
> + * may have been split by xe_svm_range_set_default_attr.
> + */
> + vma = xe_vm_find_vma_by_addr(vm, fault_addr);
> + if (!vma)
> + return -EINVAL;
> +
> + is_retry = true;
> + goto retry;
> + }
> + return ret;
> }
>
> /**
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index c4d1c3e1c974..6d1ae88313e9 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -4263,36 +4263,24 @@ int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool i
> }
> }
>
> -/**
> - * xe_vm_alloc_madvise_vma - Allocate VMA's with madvise ops
> - * @vm: Pointer to the xe_vm structure
> - * @start: Starting input address
> - * @range: Size of the input range
> - *
> - * This function splits existing vma to create new vma for user provided input range
> - *
> - * Return: 0 if success
> - */
> -int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
> +static int xe_vm_alloc_vma(struct xe_vm *vm, struct drm_gpuvm_map_req *map_req)
> {
> - struct drm_gpuvm_map_req map_req = {
> - .op_map.va.addr = start,
> - .op_map.va.range = range,
> - .flags = DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE,
> - };
> -
> struct xe_vma_ops vops;
> struct drm_gpuva_ops *ops = NULL;
> struct drm_gpuva_op *__op;
> bool is_cpu_addr_mirror = false;
> bool remap_op = false;
> + bool is_madvise = (map_req->flags & DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE);
> struct xe_vma_mem_attr tmp_attr;
> + u16 default_pat;
> int err;
>
> lockdep_assert_held_write(&vm->lock);
>
> - vm_dbg(&vm->xe->drm, "MADVISE_OPS_CREATE: addr=0x%016llx, size=0x%016llx", start, range);
> - ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, &map_req);
> + vm_dbg(&vm->xe->drm, "MADVISE_OPS_CREATE: addr=0x%016llx, size=0x%016llx",
> + map_req->op_map.va.addr, map_req->op_map.va.range);
> +
> + ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, map_req);
> if (IS_ERR(ops))
> return PTR_ERR(ops);
>
> @@ -4303,33 +4291,56 @@ int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
>
> drm_gpuva_for_each_op(__op, ops) {
> struct xe_vma_op *op = gpuva_op_to_vma_op(__op);
> + struct xe_vma *vma = NULL;
>
> - if (__op->op == DRM_GPUVA_OP_REMAP) {
> - xe_assert(vm->xe, !remap_op);
> - remap_op = true;
> + if (!is_madvise) {
> + if (__op->op == DRM_GPUVA_OP_UNMAP) {
> + vma = gpuva_to_vma(op->base.unmap.va);
> + XE_WARN_ON(!xe_vma_has_default_mem_attrs(vma));
> + default_pat = vma->attr.default_pat_index;
> + }
>
> - if (xe_vma_is_cpu_addr_mirror(gpuva_to_vma(op->base.remap.unmap->va)))
> - is_cpu_addr_mirror = true;
> - else
> - is_cpu_addr_mirror = false;
> - }
> + if (__op->op == DRM_GPUVA_OP_REMAP) {
> + vma = gpuva_to_vma(op->base.remap.unmap->va);
> + default_pat = vma->attr.default_pat_index;
> + }
>
> - if (__op->op == DRM_GPUVA_OP_MAP) {
> - xe_assert(vm->xe, remap_op);
> - remap_op = false;
> + if (__op->op == DRM_GPUVA_OP_MAP) {
> + op->map.is_cpu_addr_mirror = true;
> + op->map.pat_index = default_pat;
> + }
> + } else {
> + if (__op->op == DRM_GPUVA_OP_REMAP) {
> + vma = gpuva_to_vma(op->base.remap.unmap->va);
> + xe_assert(vm->xe, !remap_op);
> + remap_op = true;
>
> - /* In case of madvise ops DRM_GPUVA_OP_MAP is always after
> - * DRM_GPUVA_OP_REMAP, so ensure we assign op->map.is_cpu_addr_mirror true
> - * if REMAP is for xe_vma_is_cpu_addr_mirror vma
> - */
> - op->map.is_cpu_addr_mirror = is_cpu_addr_mirror;
> - }
> + if (xe_vma_is_cpu_addr_mirror(vma))
> + is_cpu_addr_mirror = true;
> + else
> + is_cpu_addr_mirror = false;
> + }
>
> + if (__op->op == DRM_GPUVA_OP_MAP) {
> + xe_assert(vm->xe, remap_op);
> + remap_op = false;
> + /*
> + * In case of madvise ops DRM_GPUVA_OP_MAP is
> + * always after DRM_GPUVA_OP_REMAP, so ensure
> + * we assign op->map.is_cpu_addr_mirror true
> + * if REMAP is for xe_vma_is_cpu_addr_mirror vma
> + */
> + op->map.is_cpu_addr_mirror = is_cpu_addr_mirror;
> + }
> + }
> print_op(vm->xe, __op);
> }
>
> xe_vma_ops_init(&vops, vm, NULL, NULL, 0);
> - vops.flags |= XE_VMA_OPS_FLAG_MADVISE;
> +
> + if (is_madvise)
> + vops.flags |= XE_VMA_OPS_FLAG_MADVISE;
> +
> err = vm_bind_ioctl_ops_parse(vm, ops, &vops);
> if (err)
> goto unwind_ops;
> @@ -4341,15 +4352,20 @@ int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
> struct xe_vma *vma;
>
> if (__op->op == DRM_GPUVA_OP_UNMAP) {
> - /* There should be no unmap */
> - XE_WARN_ON("UNEXPECTED UNMAP");
> - xe_vma_destroy(gpuva_to_vma(op->base.unmap.va), NULL);
> + vma = gpuva_to_vma(op->base.unmap.va);
> + /* There should be no unmap for madvise */
> + if (is_madvise)
> + XE_WARN_ON("UNEXPECTED UNMAP");
> +
> + xe_vma_destroy(vma, NULL);
> } else if (__op->op == DRM_GPUVA_OP_REMAP) {
> vma = gpuva_to_vma(op->base.remap.unmap->va);
> - /* Store attributes for REMAP UNMAPPED VMA, so they can be assigned
> - * to newly MAP created vma.
> + /* For madvise ops, store the attributes of the VMA unmapped by
> + * REMAP so they can be assigned to the newly created MAP vma.
> */
> - tmp_attr = vma->attr;
> + if (is_madvise)
> + tmp_attr = vma->attr;
> +
> xe_vma_destroy(gpuva_to_vma(op->base.remap.unmap->va), NULL);
> } else if (__op->op == DRM_GPUVA_OP_MAP) {
> vma = op->map.vma;
> @@ -4357,7 +4373,8 @@ int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
> * Therefore tmp_attr will always have sane values, making it safe to
> * copy them to the new vma.
> */
> - vma->attr = tmp_attr;
> + if (is_madvise)
> + vma->attr = tmp_attr;
> }
> }
>
> @@ -4371,3 +4388,53 @@ int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
> drm_gpuva_ops_free(&vm->gpuvm, ops);
> return err;
> }
> +
> +/**
> + * xe_vm_alloc_madvise_vma - Allocate VMAs with madvise ops
> + * @vm: Pointer to the xe_vm structure
> + * @start: Starting input address
> + * @range: Size of the input range
> + *
> + * This function splits existing VMAs to create new VMAs for the
> + * user-provided input range.
> + *
> + * Return: 0 on success, negative error code on failure
> + */
> +int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
> +{
> + struct drm_gpuvm_map_req map_req = {
> + .op_map.va.addr = start,
> + .op_map.va.range = range,
> + .flags = DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE,
> + };
> +
> + lockdep_assert_held_write(&vm->lock);
> +
> + vm_dbg(&vm->xe->drm, "MADVISE_OPS_CREATE: addr=0x%016llx, size=0x%016llx", start, range);
> +
> + return xe_vm_alloc_vma(vm, &map_req);
> +}
> +
> +/**
> + * xe_vm_alloc_cpu_addr_mirror_vma - Allocate a CPU address mirror vma
> + * @vm: Pointer to the xe_vm structure
> + * @start: Starting input address
> + * @range: Size of the input range
> + *
> + * This function splits/merges existing VMAs to create new VMAs for the
> + * user-provided input range.
> + *
> + * Return: 0 on success, negative error code on failure
> + */
> +int xe_vm_alloc_cpu_addr_mirror_vma(struct xe_vm *vm, uint64_t start, uint64_t range)
> +{
> + struct drm_gpuvm_map_req map_req = {
> + .op_map.va.addr = start,
> + .op_map.va.range = range,
> + };
> +
> + lockdep_assert_held_write(&vm->lock);
> +
> + vm_dbg(&vm->xe->drm, "CPU_ADDR_MIRROR_VMA_OPS_CREATE: addr=0x%016llx, size=0x%016llx",
> + start, range);
> +
> + return xe_vm_alloc_vma(vm, &map_req);
> +}
> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
> index f735d994806d..6538cddf158b 100644
> --- a/drivers/gpu/drm/xe/xe_vm.h
> +++ b/drivers/gpu/drm/xe/xe_vm.h
> @@ -177,6 +177,8 @@ int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool i
>
> int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t addr, uint64_t size);
>
> +int xe_vm_alloc_cpu_addr_mirror_vma(struct xe_vm *vm, uint64_t addr, uint64_t size);
> +
> /**
> * to_userptr_vma() - Return a pointer to an embedding userptr vma
> * @vma: Pointer to the embedded struct xe_vma
> --
> 2.34.1
>
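A minimal caller sketch for the two new helpers, assuming vm->lock is the VM's rw_semaphore (both helpers assert it held in write mode via lockdep_assert_held_write()) and with a hypothetical function name:

/* Hypothetical caller, names are illustrative only */
static int xe_madvise_split_example(struct xe_vm *vm, uint64_t start,
				    uint64_t range)
{
	int err;

	down_write(&vm->lock);
	err = xe_vm_alloc_madvise_vma(vm, start, range);
	up_write(&vm->lock);

	return err;
}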
^ permalink raw reply [flat|nested] 51+ messages in thread
* ✗ CI.checkpatch: warning for MADVISE FOR XE (rev6)
2025-08-07 16:43 [PATCH v6 00/26] MADVISE FOR XE Himal Prasad Ghimiray
` (25 preceding siblings ...)
2025-08-07 16:43 ` [PATCH v6 26/26] drm/xe/uapi: Add UAPI for querying VMA count and memory attributes Himal Prasad Ghimiray
@ 2025-08-07 18:02 ` Patchwork
2025-08-07 18:03 ` ✓ CI.KUnit: success " Patchwork
` (3 subsequent siblings)
30 siblings, 0 replies; 51+ messages in thread
From: Patchwork @ 2025-08-07 18:02 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe
== Series Details ==
Series: MADVISE FOR XE (rev6)
URL : https://patchwork.freedesktop.org/series/149550/
State : warning
== Summary ==
+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
c298eac5978c38dcc62a70c0d73c91765e7cc296
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit fa7a4dd3f3842dc526a0b907e528be5f91984aa1
Author: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Date: Thu Aug 7 22:13:38 2025 +0530
drm/xe/uapi: Add UAPI for querying VMA count and memory attributes
Introduce the DRM_IOCTL_XE_VM_QUERY_MEMORY_RANGE_ATTRS ioctl to allow
userspace to query memory attributes of VMAs within a user specified
virtual address range.
Userspace first calls the ioctl with num_mem_ranges = 0,
sizeof_mem_ranges_attr = 0, and vector_of_vma_mem_attr = NULL to retrieve
the number of memory ranges (VMAs) and the size of each memory range attribute.
Then, it allocates a buffer of that size and calls the ioctl again to fill
the buffer with memory range attributes.
This two-step interface allows userspace to first query the required
buffer size, then retrieve detailed attributes efficiently.
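A minimal userspace sketch of this two-step flow, assuming the ioctl and struct names as they appear in the uapi header (DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS, struct drm_xe_vm_query_mem_range_attr) and treating any field not named above as an assumption:

#include <errno.h>
#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <drm/xe_drm.h>      /* assumed uapi header location */

/* Hypothetical helper: q arrives zero-initialized apart from the
 * VM/range identification fields.
 */
static int query_mem_range_attrs(int fd,
				 struct drm_xe_vm_query_mem_range_attr *q,
				 void **buf)
{
	/* Step 1: num_mem_ranges = 0, sizeof_mem_ranges_attr = 0 and
	 * vector_of_vma_mem_attr = NULL -> kernel reports the sizes.
	 */
	if (ioctl(fd, DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS, q))
		return -errno;

	*buf = calloc(q->num_mem_ranges, q->sizeof_mem_ranges_attr);
	if (!*buf)
		return -ENOMEM;

	/* Step 2: same ioctl again, now filling the caller's buffer */
	q->vector_of_vma_mem_attr = (uintptr_t)*buf;
	if (ioctl(fd, DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS, q)) {
		free(*buf);
		return -errno;
	}

	return 0;
}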
v2 (Matthew Brost)
- Use same ioctl to overload functionality
v3
- Add kernel-doc
v4
- Make uapi future proof by passing struct size (Matthew Brost)
- make lock interruptible (Matthew Brost)
- set reserved bits to zero (Matthew Brost)
- s/__copy_to_user/copy_to_user (Matthew Brost)
- Avoid using the VMA term in uapi (Thomas)
- xe_vm_put(vm) is missing (Shuicheng)
v5
- Nits
- Fix kernel-doc
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Shuicheng Lin <shuicheng.lin@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
+ /mt/dim checkpatch 76741379fba333222be8a15bebf1e659eb84088a drm-intel
23eb1aea1421 drm/gpuvm: Pass map arguments through a struct
26e58a45c824 drm/gpuvm: Kill drm_gpuva_init()
47131ad3be2e drm/gpuvm: Support flags in drm_gpuvm_map_req
0b0b3fac1662 drm/gpuvm: Introduce DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE flag
b6838914e3a5 drm/xe/uapi: Add madvise interface
-:51: WARNING:LONG_LINE: line length of 113 exceeds 100 columns
#51: FILE: include/uapi/drm/xe_drm.h:122:
+#define DRM_IOCTL_XE_MADVISE DRM_IOW(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
total: 0 errors, 1 warnings, 0 checks, 154 lines checked
3c8ee1f51c0a drm/xe/vm: Add attributes struct as member of vma
8b3317dc8c75 drm/xe/vma: Move pat_index to vma attributes
22743f774f84 drm/xe/vma: Modify new_vma to accept struct xe_vma_mem_attr as parameter
ecd2e1e04a42 drm/gpusvm: Make drm_gpusvm_for_each_* macros public
-:225: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'range__' - possible side-effects?
#225: FILE: include/drm/drm_gpusvm.h:452:
+#define drm_gpusvm_for_each_range_safe(range__, next__, notifier__, start__, end__) \
+ for ((range__) = drm_gpusvm_range_find((notifier__), (start__), (end__)), \
+ (next__) = __drm_gpusvm_range_next(range__); \
+ (range__) && (drm_gpusvm_range_start(range__) < (end__)); \
+ (range__) = (next__), (next__) = __drm_gpusvm_range_next(range__))
-:225: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'next__' - possible side-effects?
#225: FILE: include/drm/drm_gpusvm.h:452:
+#define drm_gpusvm_for_each_range_safe(range__, next__, notifier__, start__, end__) \
+ for ((range__) = drm_gpusvm_range_find((notifier__), (start__), (end__)), \
+ (next__) = __drm_gpusvm_range_next(range__); \
+ (range__) && (drm_gpusvm_range_start(range__) < (end__)); \
+ (range__) = (next__), (next__) = __drm_gpusvm_range_next(range__))
-:225: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'end__' - possible side-effects?
#225: FILE: include/drm/drm_gpusvm.h:452:
+#define drm_gpusvm_for_each_range_safe(range__, next__, notifier__, start__, end__) \
+ for ((range__) = drm_gpusvm_range_find((notifier__), (start__), (end__)), \
+ (next__) = __drm_gpusvm_range_next(range__); \
+ (range__) && (drm_gpusvm_range_start(range__) < (end__)); \
+ (range__) = (next__), (next__) = __drm_gpusvm_range_next(range__))
-:258: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'notifier__' - possible side-effects?
#258: FILE: include/drm/drm_gpusvm.h:485:
+#define drm_gpusvm_for_each_notifier(notifier__, gpusvm__, start__, end__) \
+ for ((notifier__) = drm_gpusvm_notifier_find((gpusvm__), (start__), (end__)); \
+ (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
+ (notifier__) = __drm_gpusvm_notifier_next(notifier__))
-:258: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'end__' - possible side-effects?
#258: FILE: include/drm/drm_gpusvm.h:485:
+#define drm_gpusvm_for_each_notifier(notifier__, gpusvm__, start__, end__) \
+ for ((notifier__) = drm_gpusvm_notifier_find((gpusvm__), (start__), (end__)); \
+ (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
+ (notifier__) = __drm_gpusvm_notifier_next(notifier__))
-:274: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'notifier__' - possible side-effects?
#274: FILE: include/drm/drm_gpusvm.h:501:
+#define drm_gpusvm_for_each_notifier_safe(notifier__, next__, gpusvm__, start__, end__) \
+ for ((notifier__) = drm_gpusvm_notifier_find((gpusvm__), (start__), (end__)), \
+ (next__) = __drm_gpusvm_notifier_next(notifier__); \
+ (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
+ (notifier__) = (next__), (next__) = __drm_gpusvm_notifier_next(notifier__))
-:274: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'next__' - possible side-effects?
#274: FILE: include/drm/drm_gpusvm.h:501:
+#define drm_gpusvm_for_each_notifier_safe(notifier__, next__, gpusvm__, start__, end__) \
+ for ((notifier__) = drm_gpusvm_notifier_find((gpusvm__), (start__), (end__)), \
+ (next__) = __drm_gpusvm_notifier_next(notifier__); \
+ (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
+ (notifier__) = (next__), (next__) = __drm_gpusvm_notifier_next(notifier__))
-:274: CHECK:MACRO_ARG_REUSE: Macro argument reuse 'end__' - possible side-effects?
#274: FILE: include/drm/drm_gpusvm.h:501:
+#define drm_gpusvm_for_each_notifier_safe(notifier__, next__, gpusvm__, start__, end__) \
+ for ((notifier__) = drm_gpusvm_notifier_find((gpusvm__), (start__), (end__)), \
+ (next__) = __drm_gpusvm_notifier_next(notifier__); \
+ (notifier__) && (drm_gpusvm_notifier_start(notifier__) < (end__)); \
+ (notifier__) = (next__), (next__) = __drm_gpusvm_notifier_next(notifier__))
total: 0 errors, 0 warnings, 8 checks, 248 lines checked
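These argument-reuse checks are informational: the iterators evaluate range__, next__ and end__ more than once, so they are only safe with side-effect-free arguments. A hypothetical illustration (assuming notifier, start and end are already in scope):

struct drm_gpusvm_range *range, *next;
unsigned long count = 0;

/* Fine: plain variables can be re-evaluated without harm */
drm_gpusvm_for_each_range_safe(range, next, notifier, start, end)
	count++;

/* Broken: 'end++' would be re-evaluated on every loop test */
drm_gpusvm_for_each_range_safe(range, next, notifier, start, end++)
	count++;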
6696856c535c drm/xe/svm: Split system allocator vma incase of madvise call
5a268a8f7470 drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for madvise
1ea78420534e drm/xe/svm: Add xe_svm_ranges_zap_ptes_in_range() for PTE zapping
e22fb48651bd drm/xe: Implement madvise ioctl for xe
-:53: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#53:
new file mode 100644
total: 0 errors, 1 warnings, 0 checks, 330 lines checked
f175910a9e8f drm/xe/svm : Add svm ranges migration policy on atomic access
1892288b275e drm/xe/madvise: Update migration policy based on preferred location
5b342e844bb1 drm/xe/pat: Add helper for compression mode of pat index
1e6d3673d188 drm/xe/svm: Support DRM_XE_SVM_MEM_RANGE_ATTR_PAT memory attribute
d62924a44fc9 drm/xe/uapi: Add flag for consulting madvise hints on svm prefetch
ad08269caf59 drm/xe/svm: Consult madvise preferred location in prefetch
f52fbe553744 drm/xe/bo: Add attributes field to xe_bo
3965593c9e28 drm/xe/bo: Update atomic_access attribute on madvise
f47dd232d183 drm/xe/madvise: Skip vma invalidation if mem attr are unchanged
b3d3191df88c drm/xe/vm: Add helper to check for default VMA memory attributes
188affbc0731 drm/xe: Reset VMA attributes to default in SVM garbage collector
1b1fe7d76dea drm/xe: Enable madvise ioctl for xe
fa7a4dd3f384 drm/xe/uapi: Add UAPI for querying VMA count and memory attributes
-:210: WARNING:LONG_LINE: line length of 147 exceeds 100 columns
#210: FILE: include/uapi/drm/xe_drm.h:125:
+#define DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_MEM_RANGE_ATTRS, struct drm_xe_vm_query_mem_range_attr)
total: 0 errors, 1 warnings, 0 checks, 287 lines checked
^ permalink raw reply [flat|nested] 51+ messages in thread
* ✓ CI.KUnit: success for MADVISE FOR XE (rev6)
2025-08-07 16:43 [PATCH v6 00/26] MADVISE FOR XE Himal Prasad Ghimiray
` (26 preceding siblings ...)
2025-08-07 18:02 ` ✗ CI.checkpatch: warning for MADVISE FOR XE (rev6) Patchwork
@ 2025-08-07 18:03 ` Patchwork
2025-08-07 18:18 ` ✗ CI.checksparse: warning " Patchwork
` (2 subsequent siblings)
30 siblings, 0 replies; 51+ messages in thread
From: Patchwork @ 2025-08-07 18:03 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe
== Series Details ==
Series: MADVISE FOR XE (rev6)
URL : https://patchwork.freedesktop.org/series/149550/
State : success
== Summary ==
+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[18:02:41] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[18:02:46] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[18:03:13] Starting KUnit Kernel (1/1)...
[18:03:13] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[18:03:13] ================== guc_buf (11 subtests) ===================
[18:03:13] [PASSED] test_smallest
[18:03:13] [PASSED] test_largest
[18:03:13] [PASSED] test_granular
[18:03:13] [PASSED] test_unique
[18:03:13] [PASSED] test_overlap
[18:03:13] [PASSED] test_reusable
[18:03:13] [PASSED] test_too_big
[18:03:13] [PASSED] test_flush
[18:03:13] [PASSED] test_lookup
[18:03:13] [PASSED] test_data
[18:03:13] [PASSED] test_class
[18:03:13] ===================== [PASSED] guc_buf =====================
[18:03:13] =================== guc_dbm (7 subtests) ===================
[18:03:13] [PASSED] test_empty
[18:03:13] [PASSED] test_default
[18:03:13] ======================== test_size ========================
[18:03:13] [PASSED] 4
[18:03:13] [PASSED] 8
[18:03:13] [PASSED] 32
[18:03:13] [PASSED] 256
[18:03:13] ==================== [PASSED] test_size ====================
[18:03:13] ======================= test_reuse ========================
[18:03:13] [PASSED] 4
[18:03:13] [PASSED] 8
[18:03:13] [PASSED] 32
[18:03:13] [PASSED] 256
[18:03:13] =================== [PASSED] test_reuse ====================
[18:03:13] =================== test_range_overlap ====================
[18:03:13] [PASSED] 4
[18:03:13] [PASSED] 8
[18:03:13] [PASSED] 32
[18:03:13] [PASSED] 256
[18:03:13] =============== [PASSED] test_range_overlap ================
[18:03:13] =================== test_range_compact ====================
[18:03:13] [PASSED] 4
[18:03:13] [PASSED] 8
[18:03:13] [PASSED] 32
[18:03:13] [PASSED] 256
[18:03:13] =============== [PASSED] test_range_compact ================
[18:03:13] ==================== test_range_spare =====================
[18:03:13] [PASSED] 4
[18:03:13] [PASSED] 8
[18:03:13] [PASSED] 32
[18:03:13] [PASSED] 256
[18:03:13] ================ [PASSED] test_range_spare =================
[18:03:13] ===================== [PASSED] guc_dbm =====================
[18:03:13] =================== guc_idm (6 subtests) ===================
[18:03:13] [PASSED] bad_init
[18:03:13] [PASSED] no_init
[18:03:13] [PASSED] init_fini
[18:03:13] [PASSED] check_used
[18:03:13] [PASSED] check_quota
[18:03:13] [PASSED] check_all
[18:03:13] ===================== [PASSED] guc_idm =====================
[18:03:13] ================== no_relay (3 subtests) ===================
[18:03:13] [PASSED] xe_drops_guc2pf_if_not_ready
[18:03:13] [PASSED] xe_drops_guc2vf_if_not_ready
[18:03:13] [PASSED] xe_rejects_send_if_not_ready
[18:03:13] ==================== [PASSED] no_relay =====================
[18:03:13] ================== pf_relay (14 subtests) ==================
[18:03:13] [PASSED] pf_rejects_guc2pf_too_short
[18:03:13] [PASSED] pf_rejects_guc2pf_too_long
[18:03:13] [PASSED] pf_rejects_guc2pf_no_payload
[18:03:13] [PASSED] pf_fails_no_payload
[18:03:13] [PASSED] pf_fails_bad_origin
[18:03:13] [PASSED] pf_fails_bad_type
[18:03:13] [PASSED] pf_txn_reports_error
[18:03:13] [PASSED] pf_txn_sends_pf2guc
[18:03:13] [PASSED] pf_sends_pf2guc
[18:03:13] [SKIPPED] pf_loopback_nop
[18:03:13] [SKIPPED] pf_loopback_echo
[18:03:13] [SKIPPED] pf_loopback_fail
[18:03:13] [SKIPPED] pf_loopback_busy
[18:03:13] [SKIPPED] pf_loopback_retry
[18:03:13] ==================== [PASSED] pf_relay =====================
[18:03:13] ================== vf_relay (3 subtests) ===================
[18:03:13] [PASSED] vf_rejects_guc2vf_too_short
[18:03:13] [PASSED] vf_rejects_guc2vf_too_long
[18:03:13] [PASSED] vf_rejects_guc2vf_no_payload
[18:03:13] ==================== [PASSED] vf_relay =====================
[18:03:13] ===================== lmtt (1 subtest) =====================
[18:03:13] ======================== test_ops =========================
[18:03:13] [PASSED] 2-level
[18:03:13] [PASSED] multi-level
[18:03:13] ==================== [PASSED] test_ops =====================
[18:03:13] ====================== [PASSED] lmtt =======================
[18:03:13] ================= pf_service (11 subtests) =================
[18:03:13] [PASSED] pf_negotiate_any
[18:03:13] [PASSED] pf_negotiate_base_match
[18:03:13] [PASSED] pf_negotiate_base_newer
[18:03:13] [PASSED] pf_negotiate_base_next
[18:03:13] [SKIPPED] pf_negotiate_base_older
[18:03:13] [PASSED] pf_negotiate_base_prev
[18:03:13] [PASSED] pf_negotiate_latest_match
[18:03:13] [PASSED] pf_negotiate_latest_newer
[18:03:13] [PASSED] pf_negotiate_latest_next
[18:03:13] [SKIPPED] pf_negotiate_latest_older
[18:03:13] [SKIPPED] pf_negotiate_latest_prev
[18:03:13] =================== [PASSED] pf_service ====================
[18:03:13] =================== xe_mocs (2 subtests) ===================
[18:03:13] ================ xe_live_mocs_kernel_kunit ================
[18:03:13] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[18:03:13] ================ xe_live_mocs_reset_kunit =================
[18:03:13] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[18:03:13] ==================== [SKIPPED] xe_mocs =====================
[18:03:13] ================= xe_migrate (2 subtests) ==================
[18:03:13] ================= xe_migrate_sanity_kunit =================
[18:03:13] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[18:03:13] ================== xe_validate_ccs_kunit ==================
[18:03:13] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[18:03:13] =================== [SKIPPED] xe_migrate ===================
[18:03:13] ================== xe_dma_buf (1 subtest) ==================
[18:03:13] ==================== xe_dma_buf_kunit =====================
[18:03:13] ================ [SKIPPED] xe_dma_buf_kunit ================
[18:03:13] =================== [SKIPPED] xe_dma_buf ===================
[18:03:13] ================= xe_bo_shrink (1 subtest) =================
[18:03:13] =================== xe_bo_shrink_kunit ====================
[18:03:13] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[18:03:13] ================== [SKIPPED] xe_bo_shrink ==================
[18:03:13] ==================== xe_bo (2 subtests) ====================
[18:03:13] ================== xe_ccs_migrate_kunit ===================
[18:03:13] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[18:03:13] ==================== xe_bo_evict_kunit ====================
[18:03:13] =============== [SKIPPED] xe_bo_evict_kunit ================
[18:03:13] ===================== [SKIPPED] xe_bo ======================
[18:03:13] ==================== args (11 subtests) ====================
[18:03:13] [PASSED] count_args_test
[18:03:13] [PASSED] call_args_example
[18:03:13] [PASSED] call_args_test
[18:03:13] [PASSED] drop_first_arg_example
[18:03:13] [PASSED] drop_first_arg_test
[18:03:13] [PASSED] first_arg_example
[18:03:13] [PASSED] first_arg_test
[18:03:13] [PASSED] last_arg_example
[18:03:13] [PASSED] last_arg_test
[18:03:13] [PASSED] pick_arg_example
[18:03:13] [PASSED] sep_comma_example
[18:03:13] ====================== [PASSED] args =======================
[18:03:13] =================== xe_pci (3 subtests) ====================
[18:03:13] ==================== check_graphics_ip ====================
[18:03:13] [PASSED] 12.70 Xe_LPG
[18:03:13] [PASSED] 12.71 Xe_LPG
[18:03:13] [PASSED] 12.74 Xe_LPG+
[18:03:13] [PASSED] 20.01 Xe2_HPG
[18:03:13] [PASSED] 20.02 Xe2_HPG
[18:03:13] [PASSED] 20.04 Xe2_LPG
[18:03:13] [PASSED] 30.00 Xe3_LPG
[18:03:13] [PASSED] 30.01 Xe3_LPG
[18:03:13] [PASSED] 30.03 Xe3_LPG
[18:03:13] ================ [PASSED] check_graphics_ip ================
[18:03:13] ===================== check_media_ip ======================
[18:03:13] [PASSED] 13.00 Xe_LPM+
[18:03:13] [PASSED] 13.01 Xe2_HPM
[18:03:13] [PASSED] 20.00 Xe2_LPM
[18:03:13] [PASSED] 30.00 Xe3_LPM
[18:03:13] [PASSED] 30.02 Xe3_LPM
[18:03:13] ================= [PASSED] check_media_ip ==================
[18:03:13] ================= check_platform_gt_count =================
[18:03:13] [PASSED] 0x9A60 (TIGERLAKE)
[18:03:13] [PASSED] 0x9A68 (TIGERLAKE)
[18:03:13] [PASSED] 0x9A70 (TIGERLAKE)
[18:03:13] [PASSED] 0x9A40 (TIGERLAKE)
[18:03:13] [PASSED] 0x9A49 (TIGERLAKE)
[18:03:13] [PASSED] 0x9A59 (TIGERLAKE)
[18:03:13] [PASSED] 0x9A78 (TIGERLAKE)
[18:03:13] [PASSED] 0x9AC0 (TIGERLAKE)
[18:03:13] [PASSED] 0x9AC9 (TIGERLAKE)
[18:03:13] [PASSED] 0x9AD9 (TIGERLAKE)
[18:03:13] [PASSED] 0x9AF8 (TIGERLAKE)
[18:03:13] [PASSED] 0x4C80 (ROCKETLAKE)
[18:03:13] [PASSED] 0x4C8A (ROCKETLAKE)
[18:03:13] [PASSED] 0x4C8B (ROCKETLAKE)
[18:03:13] [PASSED] 0x4C8C (ROCKETLAKE)
[18:03:13] [PASSED] 0x4C90 (ROCKETLAKE)
[18:03:13] [PASSED] 0x4C9A (ROCKETLAKE)
[18:03:13] [PASSED] 0x4680 (ALDERLAKE_S)
[18:03:13] [PASSED] 0x4682 (ALDERLAKE_S)
[18:03:13] [PASSED] 0x4688 (ALDERLAKE_S)
[18:03:13] [PASSED] 0x468A (ALDERLAKE_S)
[18:03:13] [PASSED] 0x468B (ALDERLAKE_S)
[18:03:13] [PASSED] 0x4690 (ALDERLAKE_S)
[18:03:13] [PASSED] 0x4692 (ALDERLAKE_S)
[18:03:13] [PASSED] 0x4693 (ALDERLAKE_S)
[18:03:13] [PASSED] 0x46A0 (ALDERLAKE_P)
[18:03:13] [PASSED] 0x46A1 (ALDERLAKE_P)
[18:03:13] [PASSED] 0x46A2 (ALDERLAKE_P)
[18:03:13] [PASSED] 0x46A3 (ALDERLAKE_P)
[18:03:13] [PASSED] 0x46A6 (ALDERLAKE_P)
[18:03:13] [PASSED] 0x46A8 (ALDERLAKE_P)
[18:03:13] [PASSED] 0x46AA (ALDERLAKE_P)
[18:03:13] [PASSED] 0x462A (ALDERLAKE_P)
[18:03:13] [PASSED] 0x4626 (ALDERLAKE_P)
[18:03:13] [PASSED] 0x4628 (ALDERLAKE_P)
[18:03:13] [PASSED] 0x46B0 (ALDERLAKE_P)
[18:03:13] [PASSED] 0x46B1 (ALDERLAKE_P)
[18:03:13] [PASSED] 0x46B2 (ALDERLAKE_P)
[18:03:13] [PASSED] 0x46B3 (ALDERLAKE_P)
[18:03:13] [PASSED] 0x46C0 (ALDERLAKE_P)
[18:03:13] [PASSED] 0x46C1 (ALDERLAKE_P)
[18:03:13] [PASSED] 0x46C2 (ALDERLAKE_P)
[18:03:13] [PASSED] 0x46C3 (ALDERLAKE_P)
[18:03:13] [PASSED] 0x46D0 (ALDERLAKE_N)
[18:03:13] [PASSED] 0x46D1 (ALDERLAKE_N)
[18:03:13] [PASSED] 0x46D2 (ALDERLAKE_N)
[18:03:13] [PASSED] 0x46D3 (ALDERLAKE_N)
[18:03:13] [PASSED] 0x46D4 (ALDERLAKE_N)
[18:03:13] [PASSED] 0xA721 (ALDERLAKE_P)
[18:03:13] [PASSED] 0xA7A1 (ALDERLAKE_P)
[18:03:13] [PASSED] 0xA7A9 (ALDERLAKE_P)
[18:03:13] [PASSED] 0xA7AC (ALDERLAKE_P)
[18:03:13] [PASSED] 0xA7AD (ALDERLAKE_P)
[18:03:13] [PASSED] 0xA720 (ALDERLAKE_P)
[18:03:13] [PASSED] 0xA7A0 (ALDERLAKE_P)
[18:03:13] [PASSED] 0xA7A8 (ALDERLAKE_P)
[18:03:13] [PASSED] 0xA7AA (ALDERLAKE_P)
[18:03:13] [PASSED] 0xA7AB (ALDERLAKE_P)
[18:03:13] [PASSED] 0xA780 (ALDERLAKE_S)
[18:03:13] [PASSED] 0xA781 (ALDERLAKE_S)
[18:03:13] [PASSED] 0xA782 (ALDERLAKE_S)
[18:03:13] [PASSED] 0xA783 (ALDERLAKE_S)
[18:03:13] [PASSED] 0xA788 (ALDERLAKE_S)
[18:03:13] [PASSED] 0xA789 (ALDERLAKE_S)
[18:03:13] [PASSED] 0xA78A (ALDERLAKE_S)
[18:03:13] [PASSED] 0xA78B (ALDERLAKE_S)
[18:03:13] [PASSED] 0x4905 (DG1)
[18:03:13] [PASSED] 0x4906 (DG1)
[18:03:13] [PASSED] 0x4907 (DG1)
[18:03:13] [PASSED] 0x4908 (DG1)
[18:03:13] [PASSED] 0x4909 (DG1)
[18:03:13] [PASSED] 0x56C0 (DG2)
[18:03:13] [PASSED] 0x56C2 (DG2)
[18:03:13] [PASSED] 0x56C1 (DG2)
[18:03:13] [PASSED] 0x7D51 (METEORLAKE)
[18:03:13] [PASSED] 0x7DD1 (METEORLAKE)
[18:03:13] [PASSED] 0x7D41 (METEORLAKE)
[18:03:13] [PASSED] 0x7D67 (METEORLAKE)
[18:03:13] [PASSED] 0xB640 (METEORLAKE)
[18:03:13] [PASSED] 0x56A0 (DG2)
[18:03:13] [PASSED] 0x56A1 (DG2)
[18:03:13] [PASSED] 0x56A2 (DG2)
[18:03:13] [PASSED] 0x56BE (DG2)
[18:03:13] [PASSED] 0x56BF (DG2)
[18:03:13] [PASSED] 0x5690 (DG2)
[18:03:13] [PASSED] 0x5691 (DG2)
[18:03:13] [PASSED] 0x5692 (DG2)
[18:03:13] [PASSED] 0x56A5 (DG2)
[18:03:13] [PASSED] 0x56A6 (DG2)
[18:03:13] [PASSED] 0x56B0 (DG2)
[18:03:13] [PASSED] 0x56B1 (DG2)
[18:03:13] [PASSED] 0x56BA (DG2)
[18:03:13] [PASSED] 0x56BB (DG2)
[18:03:13] [PASSED] 0x56BC (DG2)
[18:03:13] [PASSED] 0x56BD (DG2)
[18:03:13] [PASSED] 0x5693 (DG2)
[18:03:13] [PASSED] 0x5694 (DG2)
[18:03:13] [PASSED] 0x5695 (DG2)
[18:03:13] [PASSED] 0x56A3 (DG2)
[18:03:13] [PASSED] 0x56A4 (DG2)
[18:03:13] [PASSED] 0x56B2 (DG2)
[18:03:13] [PASSED] 0x56B3 (DG2)
[18:03:13] [PASSED] 0x5696 (DG2)
[18:03:13] [PASSED] 0x5697 (DG2)
[18:03:13] [PASSED] 0xB69 (PVC)
[18:03:13] [PASSED] 0xB6E (PVC)
[18:03:13] [PASSED] 0xBD4 (PVC)
[18:03:13] [PASSED] 0xBD5 (PVC)
[18:03:13] [PASSED] 0xBD6 (PVC)
[18:03:13] [PASSED] 0xBD7 (PVC)
[18:03:13] [PASSED] 0xBD8 (PVC)
[18:03:13] [PASSED] 0xBD9 (PVC)
[18:03:13] [PASSED] 0xBDA (PVC)
[18:03:13] [PASSED] 0xBDB (PVC)
[18:03:13] [PASSED] 0xBE0 (PVC)
[18:03:13] [PASSED] 0xBE1 (PVC)
[18:03:13] [PASSED] 0xBE5 (PVC)
[18:03:13] [PASSED] 0x7D40 (METEORLAKE)
[18:03:13] [PASSED] 0x7D45 (METEORLAKE)
[18:03:13] [PASSED] 0x7D55 (METEORLAKE)
[18:03:13] [PASSED] 0x7D60 (METEORLAKE)
[18:03:13] [PASSED] 0x7DD5 (METEORLAKE)
[18:03:13] [PASSED] 0x6420 (LUNARLAKE)
[18:03:13] [PASSED] 0x64A0 (LUNARLAKE)
[18:03:13] [PASSED] 0x64B0 (LUNARLAKE)
[18:03:13] [PASSED] 0xE202 (BATTLEMAGE)
[18:03:13] [PASSED] 0xE209 (BATTLEMAGE)
[18:03:13] [PASSED] 0xE20B (BATTLEMAGE)
[18:03:13] [PASSED] 0xE20C (BATTLEMAGE)
[18:03:13] [PASSED] 0xE20D (BATTLEMAGE)
[18:03:13] [PASSED] 0xE210 (BATTLEMAGE)
[18:03:13] [PASSED] 0xE211 (BATTLEMAGE)
[18:03:13] [PASSED] 0xE212 (BATTLEMAGE)
[18:03:13] [PASSED] 0xE216 (BATTLEMAGE)
[18:03:13] [PASSED] 0xE220 (BATTLEMAGE)
[18:03:13] [PASSED] 0xE221 (BATTLEMAGE)
[18:03:13] [PASSED] 0xE222 (BATTLEMAGE)
[18:03:13] [PASSED] 0xE223 (BATTLEMAGE)
[18:03:13] [PASSED] 0xB080 (PANTHERLAKE)
[18:03:13] [PASSED] 0xB081 (PANTHERLAKE)
[18:03:13] [PASSED] 0xB082 (PANTHERLAKE)
[18:03:13] [PASSED] 0xB083 (PANTHERLAKE)
[18:03:13] [PASSED] 0xB084 (PANTHERLAKE)
[18:03:13] [PASSED] 0xB085 (PANTHERLAKE)
[18:03:13] [PASSED] 0xB086 (PANTHERLAKE)
[18:03:13] [PASSED] 0xB087 (PANTHERLAKE)
[18:03:13] [PASSED] 0xB08F (PANTHERLAKE)
[18:03:13] [PASSED] 0xB090 (PANTHERLAKE)
[18:03:13] [PASSED] 0xB0A0 (PANTHERLAKE)
[18:03:13] [PASSED] 0xB0B0 (PANTHERLAKE)
[18:03:13] [PASSED] 0xFD80 (PANTHERLAKE)
[18:03:13] [PASSED] 0xFD81 (PANTHERLAKE)
[18:03:13] ============= [PASSED] check_platform_gt_count =============
[18:03:13] ===================== [PASSED] xe_pci ======================
[18:03:13] =================== xe_rtp (2 subtests) ====================
[18:03:13] =============== xe_rtp_process_to_sr_tests ================
[18:03:13] [PASSED] coalesce-same-reg
[18:03:13] [PASSED] no-match-no-add
[18:03:13] [PASSED] match-or
[18:03:13] [PASSED] match-or-xfail
[18:03:13] [PASSED] no-match-no-add-multiple-rules
[18:03:13] [PASSED] two-regs-two-entries
[18:03:13] [PASSED] clr-one-set-other
[18:03:13] [PASSED] set-field
[18:03:13] [PASSED] conflict-duplicate
[18:03:13] [PASSED] conflict-not-disjoint
[18:03:13] [PASSED] conflict-reg-type
[18:03:13] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[18:03:13] ================== xe_rtp_process_tests ===================
[18:03:13] [PASSED] active1
[18:03:13] [PASSED] active2
[18:03:13] [PASSED] active-inactive
[18:03:13] [PASSED] inactive-active
[18:03:13] [PASSED] inactive-1st_or_active-inactive
[18:03:13] [PASSED] inactive-2nd_or_active-inactive
[18:03:13] [PASSED] inactive-last_or_active-inactive
[18:03:13] [PASSED] inactive-no_or_active-inactive
[18:03:13] ============== [PASSED] xe_rtp_process_tests ===============
[18:03:13] ===================== [PASSED] xe_rtp ======================
[18:03:13] ==================== xe_wa (1 subtest) =====================
[18:03:13] ======================== xe_wa_gt =========================
[18:03:13] [PASSED] TIGERLAKE (B0)
[18:03:13] [PASSED] DG1 (A0)
[18:03:13] [PASSED] DG1 (B0)
[18:03:13] [PASSED] ALDERLAKE_S (A0)
[18:03:13] [PASSED] ALDERLAKE_S (B0)
[18:03:13] [PASSED] ALDERLAKE_S (C0)
[18:03:13] [PASSED] ALDERLAKE_S (D0)
[18:03:13] [PASSED] ALDERLAKE_P (A0)
[18:03:13] [PASSED] ALDERLAKE_P (B0)
[18:03:13] [PASSED] ALDERLAKE_P (C0)
[18:03:13] [PASSED] ALDERLAKE_S_RPLS (D0)
[18:03:13] [PASSED] ALDERLAKE_P_RPLU (E0)
[18:03:13] [PASSED] DG2_G10 (C0)
[18:03:13] [PASSED] DG2_G11 (B1)
[18:03:13] [PASSED] DG2_G12 (A1)
[18:03:13] [PASSED] METEORLAKE (g:A0, m:A0)
[18:03:13] [PASSED] METEORLAKE (g:A0, m:A0)
[18:03:13] [PASSED] METEORLAKE (g:A0, m:A0)
[18:03:13] [PASSED] LUNARLAKE (g:A0, m:A0)
[18:03:13] [PASSED] LUNARLAKE (g:B0, m:A0)
stty: 'standard input': Inappropriate ioctl for device
[18:03:13] [PASSED] BATTLEMAGE (g:A0, m:A1)
[18:03:13] ==================== [PASSED] xe_wa_gt =====================
[18:03:13] ====================== [PASSED] xe_wa ======================
[18:03:13] ============================================================
[18:03:13] Testing complete. Ran 297 tests: passed: 281, skipped: 16
[18:03:13] Elapsed time: 31.982s total, 4.151s configuring, 27.464s building, 0.323s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[18:03:14] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[18:03:15] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[18:03:37] Starting KUnit Kernel (1/1)...
[18:03:37] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[18:03:37] == drm_test_atomic_get_connector_for_encoder (1 subtest) ===
[18:03:37] [PASSED] drm_test_drm_atomic_get_connector_for_encoder
[18:03:37] ==== [PASSED] drm_test_atomic_get_connector_for_encoder ====
[18:03:37] =========== drm_validate_clone_mode (2 subtests) ===========
[18:03:37] ============== drm_test_check_in_clone_mode ===============
[18:03:37] [PASSED] in_clone_mode
[18:03:37] [PASSED] not_in_clone_mode
[18:03:37] ========== [PASSED] drm_test_check_in_clone_mode ===========
[18:03:37] =============== drm_test_check_valid_clones ===============
[18:03:37] [PASSED] not_in_clone_mode
[18:03:37] [PASSED] valid_clone
[18:03:37] [PASSED] invalid_clone
[18:03:37] =========== [PASSED] drm_test_check_valid_clones ===========
[18:03:37] ============= [PASSED] drm_validate_clone_mode =============
[18:03:37] ============= drm_validate_modeset (1 subtest) =============
[18:03:37] [PASSED] drm_test_check_connector_changed_modeset
[18:03:37] ============== [PASSED] drm_validate_modeset ===============
[18:03:37] ====== drm_test_bridge_get_current_state (2 subtests) ======
[18:03:37] [PASSED] drm_test_drm_bridge_get_current_state_atomic
[18:03:37] [PASSED] drm_test_drm_bridge_get_current_state_legacy
[18:03:37] ======== [PASSED] drm_test_bridge_get_current_state ========
[18:03:37] ====== drm_test_bridge_helper_reset_crtc (3 subtests) ======
[18:03:37] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic
[18:03:37] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic_disabled
[18:03:37] [PASSED] drm_test_drm_bridge_helper_reset_crtc_legacy
[18:03:37] ======== [PASSED] drm_test_bridge_helper_reset_crtc ========
[18:03:37] ============== drm_bridge_alloc (2 subtests) ===============
[18:03:37] [PASSED] drm_test_drm_bridge_alloc_basic
[18:03:37] [PASSED] drm_test_drm_bridge_alloc_get_put
[18:03:37] ================ [PASSED] drm_bridge_alloc =================
[18:03:37] ================== drm_buddy (7 subtests) ==================
[18:03:37] [PASSED] drm_test_buddy_alloc_limit
[18:03:37] [PASSED] drm_test_buddy_alloc_optimistic
[18:03:37] [PASSED] drm_test_buddy_alloc_pessimistic
[18:03:37] [PASSED] drm_test_buddy_alloc_pathological
[18:03:37] [PASSED] drm_test_buddy_alloc_contiguous
[18:03:37] [PASSED] drm_test_buddy_alloc_clear
[18:03:37] [PASSED] drm_test_buddy_alloc_range_bias
[18:03:37] ==================== [PASSED] drm_buddy ====================
[18:03:37] ============= drm_cmdline_parser (40 subtests) =============
[18:03:37] [PASSED] drm_test_cmdline_force_d_only
[18:03:37] [PASSED] drm_test_cmdline_force_D_only_dvi
[18:03:37] [PASSED] drm_test_cmdline_force_D_only_hdmi
[18:03:37] [PASSED] drm_test_cmdline_force_D_only_not_digital
[18:03:37] [PASSED] drm_test_cmdline_force_e_only
[18:03:37] [PASSED] drm_test_cmdline_res
[18:03:37] [PASSED] drm_test_cmdline_res_vesa
[18:03:37] [PASSED] drm_test_cmdline_res_vesa_rblank
[18:03:37] [PASSED] drm_test_cmdline_res_rblank
[18:03:37] [PASSED] drm_test_cmdline_res_bpp
[18:03:37] [PASSED] drm_test_cmdline_res_refresh
[18:03:37] [PASSED] drm_test_cmdline_res_bpp_refresh
[18:03:37] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[18:03:37] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[18:03:37] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[18:03:37] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[18:03:37] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[18:03:37] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[18:03:37] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[18:03:37] [PASSED] drm_test_cmdline_res_margins_force_on
[18:03:37] [PASSED] drm_test_cmdline_res_vesa_margins
[18:03:37] [PASSED] drm_test_cmdline_name
[18:03:37] [PASSED] drm_test_cmdline_name_bpp
[18:03:37] [PASSED] drm_test_cmdline_name_option
[18:03:37] [PASSED] drm_test_cmdline_name_bpp_option
[18:03:37] [PASSED] drm_test_cmdline_rotate_0
[18:03:37] [PASSED] drm_test_cmdline_rotate_90
[18:03:37] [PASSED] drm_test_cmdline_rotate_180
[18:03:37] [PASSED] drm_test_cmdline_rotate_270
[18:03:37] [PASSED] drm_test_cmdline_hmirror
[18:03:37] [PASSED] drm_test_cmdline_vmirror
[18:03:37] [PASSED] drm_test_cmdline_margin_options
[18:03:37] [PASSED] drm_test_cmdline_multiple_options
[18:03:37] [PASSED] drm_test_cmdline_bpp_extra_and_option
[18:03:37] [PASSED] drm_test_cmdline_extra_and_option
[18:03:37] [PASSED] drm_test_cmdline_freestanding_options
[18:03:37] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[18:03:37] [PASSED] drm_test_cmdline_panel_orientation
[18:03:37] ================ drm_test_cmdline_invalid =================
[18:03:37] [PASSED] margin_only
[18:03:37] [PASSED] interlace_only
[18:03:37] [PASSED] res_missing_x
[18:03:37] [PASSED] res_missing_y
[18:03:37] [PASSED] res_bad_y
[18:03:37] [PASSED] res_missing_y_bpp
[18:03:37] [PASSED] res_bad_bpp
[18:03:37] [PASSED] res_bad_refresh
[18:03:37] [PASSED] res_bpp_refresh_force_on_off
[18:03:37] [PASSED] res_invalid_mode
[18:03:37] [PASSED] res_bpp_wrong_place_mode
[18:03:37] [PASSED] name_bpp_refresh
[18:03:37] [PASSED] name_refresh
[18:03:37] [PASSED] name_refresh_wrong_mode
[18:03:37] [PASSED] name_refresh_invalid_mode
[18:03:37] [PASSED] rotate_multiple
[18:03:37] [PASSED] rotate_invalid_val
[18:03:37] [PASSED] rotate_truncated
[18:03:37] [PASSED] invalid_option
[18:03:37] [PASSED] invalid_tv_option
[18:03:37] [PASSED] truncated_tv_option
[18:03:37] ============ [PASSED] drm_test_cmdline_invalid =============
[18:03:37] =============== drm_test_cmdline_tv_options ===============
[18:03:37] [PASSED] NTSC
[18:03:37] [PASSED] NTSC_443
[18:03:37] [PASSED] NTSC_J
[18:03:37] [PASSED] PAL
[18:03:37] [PASSED] PAL_M
[18:03:37] [PASSED] PAL_N
[18:03:37] [PASSED] SECAM
[18:03:37] [PASSED] MONO_525
[18:03:37] [PASSED] MONO_625
[18:03:37] =========== [PASSED] drm_test_cmdline_tv_options ===========
[18:03:37] =============== [PASSED] drm_cmdline_parser ================
[18:03:37] ========== drmm_connector_hdmi_init (20 subtests) ==========
[18:03:37] [PASSED] drm_test_connector_hdmi_init_valid
[18:03:37] [PASSED] drm_test_connector_hdmi_init_bpc_8
[18:03:37] [PASSED] drm_test_connector_hdmi_init_bpc_10
[18:03:37] [PASSED] drm_test_connector_hdmi_init_bpc_12
[18:03:37] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[18:03:37] [PASSED] drm_test_connector_hdmi_init_bpc_null
[18:03:37] [PASSED] drm_test_connector_hdmi_init_formats_empty
[18:03:37] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[18:03:37] === drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[18:03:37] [PASSED] supported_formats=0x9 yuv420_allowed=1
[18:03:37] [PASSED] supported_formats=0x9 yuv420_allowed=0
[18:03:37] [PASSED] supported_formats=0x3 yuv420_allowed=1
[18:03:37] [PASSED] supported_formats=0x3 yuv420_allowed=0
[18:03:37] === [PASSED] drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[18:03:37] [PASSED] drm_test_connector_hdmi_init_null_ddc
[18:03:37] [PASSED] drm_test_connector_hdmi_init_null_product
[18:03:37] [PASSED] drm_test_connector_hdmi_init_null_vendor
[18:03:37] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[18:03:37] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[18:03:37] [PASSED] drm_test_connector_hdmi_init_product_valid
[18:03:37] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[18:03:37] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[18:03:37] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[18:03:37] ========= drm_test_connector_hdmi_init_type_valid =========
[18:03:37] [PASSED] HDMI-A
[18:03:37] [PASSED] HDMI-B
[18:03:37] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[18:03:37] ======== drm_test_connector_hdmi_init_type_invalid ========
[18:03:37] [PASSED] Unknown
[18:03:37] [PASSED] VGA
[18:03:37] [PASSED] DVI-I
[18:03:37] [PASSED] DVI-D
[18:03:37] [PASSED] DVI-A
[18:03:37] [PASSED] Composite
[18:03:37] [PASSED] SVIDEO
[18:03:37] [PASSED] LVDS
[18:03:37] [PASSED] Component
[18:03:37] [PASSED] DIN
[18:03:37] [PASSED] DP
[18:03:37] [PASSED] TV
[18:03:37] [PASSED] eDP
[18:03:37] [PASSED] Virtual
[18:03:37] [PASSED] DSI
[18:03:37] [PASSED] DPI
[18:03:37] [PASSED] Writeback
[18:03:37] [PASSED] SPI
[18:03:37] [PASSED] USB
[18:03:37] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[18:03:37] ============ [PASSED] drmm_connector_hdmi_init =============
[18:03:37] ============= drmm_connector_init (3 subtests) =============
[18:03:37] [PASSED] drm_test_drmm_connector_init
[18:03:37] [PASSED] drm_test_drmm_connector_init_null_ddc
[18:03:37] ========= drm_test_drmm_connector_init_type_valid =========
[18:03:37] [PASSED] Unknown
[18:03:37] [PASSED] VGA
[18:03:37] [PASSED] DVI-I
[18:03:37] [PASSED] DVI-D
[18:03:37] [PASSED] DVI-A
[18:03:37] [PASSED] Composite
[18:03:37] [PASSED] SVIDEO
[18:03:37] [PASSED] LVDS
[18:03:37] [PASSED] Component
[18:03:37] [PASSED] DIN
[18:03:37] [PASSED] DP
[18:03:37] [PASSED] HDMI-A
[18:03:37] [PASSED] HDMI-B
[18:03:37] [PASSED] TV
[18:03:37] [PASSED] eDP
[18:03:37] [PASSED] Virtual
[18:03:37] [PASSED] DSI
[18:03:37] [PASSED] DPI
[18:03:37] [PASSED] Writeback
[18:03:37] [PASSED] SPI
[18:03:37] [PASSED] USB
[18:03:37] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[18:03:37] =============== [PASSED] drmm_connector_init ===============
[18:03:37] ========= drm_connector_dynamic_init (6 subtests) ==========
[18:03:37] [PASSED] drm_test_drm_connector_dynamic_init
[18:03:37] [PASSED] drm_test_drm_connector_dynamic_init_null_ddc
[18:03:37] [PASSED] drm_test_drm_connector_dynamic_init_not_added
[18:03:37] [PASSED] drm_test_drm_connector_dynamic_init_properties
[18:03:37] ===== drm_test_drm_connector_dynamic_init_type_valid ======
[18:03:37] [PASSED] Unknown
[18:03:37] [PASSED] VGA
[18:03:37] [PASSED] DVI-I
[18:03:37] [PASSED] DVI-D
[18:03:37] [PASSED] DVI-A
[18:03:37] [PASSED] Composite
[18:03:37] [PASSED] SVIDEO
[18:03:37] [PASSED] LVDS
[18:03:37] [PASSED] Component
[18:03:37] [PASSED] DIN
[18:03:37] [PASSED] DP
[18:03:37] [PASSED] HDMI-A
[18:03:37] [PASSED] HDMI-B
[18:03:37] [PASSED] TV
[18:03:37] [PASSED] eDP
[18:03:37] [PASSED] Virtual
[18:03:37] [PASSED] DSI
[18:03:37] [PASSED] DPI
[18:03:37] [PASSED] Writeback
[18:03:37] [PASSED] SPI
[18:03:37] [PASSED] USB
[18:03:37] = [PASSED] drm_test_drm_connector_dynamic_init_type_valid ==
[18:03:37] ======== drm_test_drm_connector_dynamic_init_name =========
[18:03:37] [PASSED] Unknown
[18:03:37] [PASSED] VGA
[18:03:37] [PASSED] DVI-I
[18:03:37] [PASSED] DVI-D
[18:03:37] [PASSED] DVI-A
[18:03:37] [PASSED] Composite
[18:03:37] [PASSED] SVIDEO
[18:03:37] [PASSED] LVDS
[18:03:37] [PASSED] Component
[18:03:37] [PASSED] DIN
[18:03:37] [PASSED] DP
[18:03:37] [PASSED] HDMI-A
[18:03:37] [PASSED] HDMI-B
[18:03:37] [PASSED] TV
[18:03:37] [PASSED] eDP
[18:03:37] [PASSED] Virtual
[18:03:37] [PASSED] DSI
[18:03:37] [PASSED] DPI
[18:03:37] [PASSED] Writeback
[18:03:37] [PASSED] SPI
[18:03:37] [PASSED] USB
[18:03:37] ==== [PASSED] drm_test_drm_connector_dynamic_init_name =====
[18:03:37] =========== [PASSED] drm_connector_dynamic_init ============
[18:03:37] ==== drm_connector_dynamic_register_early (4 subtests) =====
[18:03:37] [PASSED] drm_test_drm_connector_dynamic_register_early_on_list
[18:03:37] [PASSED] drm_test_drm_connector_dynamic_register_early_defer
[18:03:37] [PASSED] drm_test_drm_connector_dynamic_register_early_no_init
[18:03:37] [PASSED] drm_test_drm_connector_dynamic_register_early_no_mode_object
[18:03:37] ====== [PASSED] drm_connector_dynamic_register_early =======
[18:03:37] ======= drm_connector_dynamic_register (7 subtests) ========
[18:03:37] [PASSED] drm_test_drm_connector_dynamic_register_on_list
[18:03:37] [PASSED] drm_test_drm_connector_dynamic_register_no_defer
[18:03:37] [PASSED] drm_test_drm_connector_dynamic_register_no_init
[18:03:37] [PASSED] drm_test_drm_connector_dynamic_register_mode_object
[18:03:37] [PASSED] drm_test_drm_connector_dynamic_register_sysfs
[18:03:37] [PASSED] drm_test_drm_connector_dynamic_register_sysfs_name
[18:03:37] [PASSED] drm_test_drm_connector_dynamic_register_debugfs
[18:03:37] ========= [PASSED] drm_connector_dynamic_register ==========
[18:03:37] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[18:03:37] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[18:03:37] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[18:03:37] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[18:03:37] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[18:03:37] ========== drm_test_get_tv_mode_from_name_valid ===========
[18:03:37] [PASSED] NTSC
[18:03:37] [PASSED] NTSC-443
[18:03:37] [PASSED] NTSC-J
[18:03:37] [PASSED] PAL
[18:03:37] [PASSED] PAL-M
[18:03:37] [PASSED] PAL-N
[18:03:37] [PASSED] SECAM
[18:03:37] [PASSED] Mono
[18:03:37] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[18:03:37] [PASSED] drm_test_get_tv_mode_from_name_truncated
[18:03:37] ============ [PASSED] drm_get_tv_mode_from_name ============
[18:03:37] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[18:03:37] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[18:03:37] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[18:03:37] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[18:03:37] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[18:03:37] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[18:03:37] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[18:03:37] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid =
[18:03:37] [PASSED] VIC 96
[18:03:37] [PASSED] VIC 97
[18:03:37] [PASSED] VIC 101
[18:03:37] [PASSED] VIC 102
[18:03:37] [PASSED] VIC 106
[18:03:37] [PASSED] VIC 107
[18:03:37] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[18:03:37] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[18:03:37] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[18:03:37] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[18:03:37] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[18:03:37] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[18:03:37] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[18:03:37] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[18:03:37] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name ====
[18:03:37] [PASSED] Automatic
[18:03:37] [PASSED] Full
[18:03:37] [PASSED] Limited 16:235
[18:03:37] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[18:03:37] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[18:03:37] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[18:03:37] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[18:03:37] === drm_test_drm_hdmi_connector_get_output_format_name ====
[18:03:37] [PASSED] RGB
[18:03:37] [PASSED] YUV 4:2:0
[18:03:37] [PASSED] YUV 4:2:2
[18:03:37] [PASSED] YUV 4:4:4
[18:03:37] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[18:03:37] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[18:03:37] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[18:03:37] ============= drm_damage_helper (21 subtests) ==============
[18:03:37] [PASSED] drm_test_damage_iter_no_damage
[18:03:37] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[18:03:37] [PASSED] drm_test_damage_iter_no_damage_src_moved
[18:03:37] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[18:03:37] [PASSED] drm_test_damage_iter_no_damage_not_visible
[18:03:37] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[18:03:37] [PASSED] drm_test_damage_iter_no_damage_no_fb
[18:03:37] [PASSED] drm_test_damage_iter_simple_damage
[18:03:37] [PASSED] drm_test_damage_iter_single_damage
[18:03:37] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[18:03:37] [PASSED] drm_test_damage_iter_single_damage_outside_src
[18:03:37] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[18:03:37] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[18:03:37] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[18:03:37] [PASSED] drm_test_damage_iter_single_damage_src_moved
[18:03:37] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[18:03:37] [PASSED] drm_test_damage_iter_damage
[18:03:37] [PASSED] drm_test_damage_iter_damage_one_intersect
[18:03:37] [PASSED] drm_test_damage_iter_damage_one_outside
[18:03:37] [PASSED] drm_test_damage_iter_damage_src_moved
[18:03:37] [PASSED] drm_test_damage_iter_damage_not_visible
[18:03:37] ================ [PASSED] drm_damage_helper ================
[18:03:37] ============== drm_dp_mst_helper (3 subtests) ==============
[18:03:37] ============== drm_test_dp_mst_calc_pbn_mode ==============
[18:03:37] [PASSED] Clock 154000 BPP 30 DSC disabled
[18:03:37] [PASSED] Clock 234000 BPP 30 DSC disabled
[18:03:37] [PASSED] Clock 297000 BPP 24 DSC disabled
[18:03:37] [PASSED] Clock 332880 BPP 24 DSC enabled
[18:03:37] [PASSED] Clock 324540 BPP 24 DSC enabled
[18:03:37] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[18:03:37] ============== drm_test_dp_mst_calc_pbn_div ===============
[18:03:37] [PASSED] Link rate 2000000 lane count 4
[18:03:37] [PASSED] Link rate 2000000 lane count 2
[18:03:37] [PASSED] Link rate 2000000 lane count 1
[18:03:37] [PASSED] Link rate 1350000 lane count 4
[18:03:37] [PASSED] Link rate 1350000 lane count 2
[18:03:37] [PASSED] Link rate 1350000 lane count 1
[18:03:37] [PASSED] Link rate 1000000 lane count 4
[18:03:37] [PASSED] Link rate 1000000 lane count 2
[18:03:37] [PASSED] Link rate 1000000 lane count 1
[18:03:37] [PASSED] Link rate 810000 lane count 4
[18:03:37] [PASSED] Link rate 810000 lane count 2
[18:03:37] [PASSED] Link rate 810000 lane count 1
[18:03:37] [PASSED] Link rate 540000 lane count 4
[18:03:37] [PASSED] Link rate 540000 lane count 2
[18:03:37] [PASSED] Link rate 540000 lane count 1
[18:03:37] [PASSED] Link rate 270000 lane count 4
[18:03:37] [PASSED] Link rate 270000 lane count 2
[18:03:37] [PASSED] Link rate 270000 lane count 1
[18:03:37] [PASSED] Link rate 162000 lane count 4
[18:03:37] [PASSED] Link rate 162000 lane count 2
[18:03:37] [PASSED] Link rate 162000 lane count 1
[18:03:37] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[18:03:37] ========= drm_test_dp_mst_sideband_msg_req_decode =========
[18:03:37] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[18:03:37] [PASSED] DP_POWER_UP_PHY with port number
[18:03:37] [PASSED] DP_POWER_DOWN_PHY with port number
[18:03:37] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[18:03:37] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[18:03:37] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[18:03:37] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[18:03:37] [PASSED] DP_QUERY_PAYLOAD with port number
[18:03:37] [PASSED] DP_QUERY_PAYLOAD with VCPI
[18:03:37] [PASSED] DP_REMOTE_DPCD_READ with port number
[18:03:37] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[18:03:37] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[18:03:37] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[18:03:37] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[18:03:37] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[18:03:37] [PASSED] DP_REMOTE_I2C_READ with port number
[18:03:37] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[18:03:37] [PASSED] DP_REMOTE_I2C_READ with transactions array
[18:03:37] [PASSED] DP_REMOTE_I2C_WRITE with port number
[18:03:37] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[18:03:37] [PASSED] DP_REMOTE_I2C_WRITE with data array
[18:03:37] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[18:03:37] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[18:03:37] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[18:03:37] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[18:03:37] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[18:03:37] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[18:03:37] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[18:03:37] ================ [PASSED] drm_dp_mst_helper ================
[18:03:37] ================== drm_exec (7 subtests) ===================
[18:03:37] [PASSED] sanitycheck
[18:03:37] [PASSED] test_lock
[18:03:37] [PASSED] test_lock_unlock
[18:03:37] [PASSED] test_duplicates
[18:03:37] [PASSED] test_prepare
[18:03:37] [PASSED] test_prepare_array
[18:03:37] [PASSED] test_multiple_loops
[18:03:37] ==================== [PASSED] drm_exec =====================
[18:03:37] =========== drm_format_helper_test (17 subtests) ===========
[18:03:37] ============== drm_test_fb_xrgb8888_to_gray8 ==============
[18:03:37] [PASSED] single_pixel_source_buffer
[18:03:37] [PASSED] single_pixel_clip_rectangle
[18:03:37] [PASSED] well_known_colors
[18:03:37] [PASSED] destination_pitch
[18:03:37] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[18:03:37] ============= drm_test_fb_xrgb8888_to_rgb332 ==============
[18:03:37] [PASSED] single_pixel_source_buffer
[18:03:37] [PASSED] single_pixel_clip_rectangle
[18:03:37] [PASSED] well_known_colors
[18:03:37] [PASSED] destination_pitch
[18:03:37] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[18:03:37] ============= drm_test_fb_xrgb8888_to_rgb565 ==============
[18:03:37] [PASSED] single_pixel_source_buffer
[18:03:37] [PASSED] single_pixel_clip_rectangle
[18:03:37] [PASSED] well_known_colors
[18:03:37] [PASSED] destination_pitch
[18:03:37] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[18:03:37] ============ drm_test_fb_xrgb8888_to_xrgb1555 =============
[18:03:37] [PASSED] single_pixel_source_buffer
[18:03:37] [PASSED] single_pixel_clip_rectangle
[18:03:37] [PASSED] well_known_colors
[18:03:37] [PASSED] destination_pitch
[18:03:37] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[18:03:37] ============ drm_test_fb_xrgb8888_to_argb1555 =============
[18:03:37] [PASSED] single_pixel_source_buffer
[18:03:37] [PASSED] single_pixel_clip_rectangle
[18:03:37] [PASSED] well_known_colors
[18:03:37] [PASSED] destination_pitch
[18:03:37] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[18:03:37] ============ drm_test_fb_xrgb8888_to_rgba5551 =============
[18:03:37] [PASSED] single_pixel_source_buffer
[18:03:37] [PASSED] single_pixel_clip_rectangle
[18:03:37] [PASSED] well_known_colors
[18:03:37] [PASSED] destination_pitch
[18:03:37] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[18:03:37] ============= drm_test_fb_xrgb8888_to_rgb888 ==============
[18:03:37] [PASSED] single_pixel_source_buffer
[18:03:37] [PASSED] single_pixel_clip_rectangle
[18:03:37] [PASSED] well_known_colors
[18:03:37] [PASSED] destination_pitch
[18:03:37] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[18:03:37] ============= drm_test_fb_xrgb8888_to_bgr888 ==============
[18:03:37] [PASSED] single_pixel_source_buffer
[18:03:37] [PASSED] single_pixel_clip_rectangle
[18:03:37] [PASSED] well_known_colors
[18:03:37] [PASSED] destination_pitch
[18:03:37] ========= [PASSED] drm_test_fb_xrgb8888_to_bgr888 ==========
[18:03:37] ============ drm_test_fb_xrgb8888_to_argb8888 =============
[18:03:37] [PASSED] single_pixel_source_buffer
[18:03:37] [PASSED] single_pixel_clip_rectangle
[18:03:37] [PASSED] well_known_colors
[18:03:37] [PASSED] destination_pitch
[18:03:37] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[18:03:37] =========== drm_test_fb_xrgb8888_to_xrgb2101010 ===========
[18:03:37] [PASSED] single_pixel_source_buffer
[18:03:37] [PASSED] single_pixel_clip_rectangle
[18:03:37] [PASSED] well_known_colors
[18:03:37] [PASSED] destination_pitch
[18:03:37] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[18:03:37] =========== drm_test_fb_xrgb8888_to_argb2101010 ===========
[18:03:37] [PASSED] single_pixel_source_buffer
[18:03:37] [PASSED] single_pixel_clip_rectangle
[18:03:37] [PASSED] well_known_colors
[18:03:37] [PASSED] destination_pitch
[18:03:37] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[18:03:37] ============== drm_test_fb_xrgb8888_to_mono ===============
[18:03:37] [PASSED] single_pixel_source_buffer
[18:03:37] [PASSED] single_pixel_clip_rectangle
[18:03:37] [PASSED] well_known_colors
[18:03:37] [PASSED] destination_pitch
[18:03:37] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[18:03:37] ==================== drm_test_fb_swab =====================
[18:03:37] [PASSED] single_pixel_source_buffer
[18:03:37] [PASSED] single_pixel_clip_rectangle
[18:03:37] [PASSED] well_known_colors
[18:03:37] [PASSED] destination_pitch
[18:03:37] ================ [PASSED] drm_test_fb_swab =================
[18:03:37] ============ drm_test_fb_xrgb8888_to_xbgr8888 =============
[18:03:37] [PASSED] single_pixel_source_buffer
[18:03:37] [PASSED] single_pixel_clip_rectangle
[18:03:37] [PASSED] well_known_colors
[18:03:37] [PASSED] destination_pitch
[18:03:37] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[18:03:37] ============ drm_test_fb_xrgb8888_to_abgr8888 =============
[18:03:37] [PASSED] single_pixel_source_buffer
[18:03:37] [PASSED] single_pixel_clip_rectangle
[18:03:37] [PASSED] well_known_colors
[18:03:37] [PASSED] destination_pitch
[18:03:37] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[18:03:37] ================= drm_test_fb_clip_offset =================
[18:03:37] [PASSED] pass through
[18:03:37] [PASSED] horizontal offset
[18:03:37] [PASSED] vertical offset
[18:03:37] [PASSED] horizontal and vertical offset
[18:03:37] [PASSED] horizontal offset (custom pitch)
[18:03:37] [PASSED] vertical offset (custom pitch)
[18:03:37] [PASSED] horizontal and vertical offset (custom pitch)
[18:03:37] ============= [PASSED] drm_test_fb_clip_offset =============
[18:03:37] =================== drm_test_fb_memcpy ====================
[18:03:37] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[18:03:37] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[18:03:37] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[18:03:37] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[18:03:37] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[18:03:37] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[18:03:37] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[18:03:37] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[18:03:37] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[18:03:37] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[18:03:37] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[18:03:37] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[18:03:37] =============== [PASSED] drm_test_fb_memcpy ================
[18:03:37] ============= [PASSED] drm_format_helper_test ==============
[18:03:37] ================= drm_format (18 subtests) =================
[18:03:37] [PASSED] drm_test_format_block_width_invalid
[18:03:37] [PASSED] drm_test_format_block_width_one_plane
[18:03:37] [PASSED] drm_test_format_block_width_two_plane
[18:03:37] [PASSED] drm_test_format_block_width_three_plane
[18:03:37] [PASSED] drm_test_format_block_width_tiled
[18:03:37] [PASSED] drm_test_format_block_height_invalid
[18:03:37] [PASSED] drm_test_format_block_height_one_plane
[18:03:37] [PASSED] drm_test_format_block_height_two_plane
[18:03:37] [PASSED] drm_test_format_block_height_three_plane
[18:03:37] [PASSED] drm_test_format_block_height_tiled
[18:03:37] [PASSED] drm_test_format_min_pitch_invalid
[18:03:37] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[18:03:37] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[18:03:37] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[18:03:37] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[18:03:37] [PASSED] drm_test_format_min_pitch_two_plane
[18:03:37] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[18:03:37] [PASSED] drm_test_format_min_pitch_tiled
[18:03:37] =================== [PASSED] drm_format ====================
[18:03:37] ============== drm_framebuffer (10 subtests) ===============
[18:03:37] ========== drm_test_framebuffer_check_src_coords ==========
[18:03:37] [PASSED] Success: source fits into fb
[18:03:37] [PASSED] Fail: overflowing fb with x-axis coordinate
[18:03:37] [PASSED] Fail: overflowing fb with y-axis coordinate
[18:03:37] [PASSED] Fail: overflowing fb with source width
[18:03:37] [PASSED] Fail: overflowing fb with source height
[18:03:37] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[18:03:37] [PASSED] drm_test_framebuffer_cleanup
[18:03:37] =============== drm_test_framebuffer_create ===============
[18:03:37] [PASSED] ABGR8888 normal sizes
[18:03:37] [PASSED] ABGR8888 max sizes
[18:03:37] [PASSED] ABGR8888 pitch greater than min required
[18:03:37] [PASSED] ABGR8888 pitch less than min required
[18:03:37] [PASSED] ABGR8888 Invalid width
[18:03:37] [PASSED] ABGR8888 Invalid buffer handle
[18:03:37] [PASSED] No pixel format
[18:03:37] [PASSED] ABGR8888 Width 0
[18:03:37] [PASSED] ABGR8888 Height 0
[18:03:37] [PASSED] ABGR8888 Out of bound height * pitch combination
[18:03:37] [PASSED] ABGR8888 Large buffer offset
[18:03:37] [PASSED] ABGR8888 Buffer offset for inexistent plane
[18:03:37] [PASSED] ABGR8888 Invalid flag
[18:03:37] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[18:03:37] [PASSED] ABGR8888 Valid buffer modifier
[18:03:37] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[18:03:37] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[18:03:37] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[18:03:37] [PASSED] NV12 Normal sizes
[18:03:37] [PASSED] NV12 Max sizes
[18:03:37] [PASSED] NV12 Invalid pitch
[18:03:37] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[18:03:37] [PASSED] NV12 different modifier per-plane
[18:03:37] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[18:03:37] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[18:03:37] [PASSED] NV12 Modifier for inexistent plane
[18:03:37] [PASSED] NV12 Handle for inexistent plane
[18:03:37] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[18:03:37] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[18:03:37] [PASSED] YVU420 Normal sizes
[18:03:37] [PASSED] YVU420 Max sizes
[18:03:37] [PASSED] YVU420 Invalid pitch
[18:03:37] [PASSED] YVU420 Different pitches
[18:03:37] [PASSED] YVU420 Different buffer offsets/pitches
[18:03:37] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[18:03:37] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[18:03:37] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[18:03:37] [PASSED] YVU420 Valid modifier
[18:03:37] [PASSED] YVU420 Different modifiers per plane
[18:03:37] [PASSED] YVU420 Modifier for inexistent plane
[18:03:37] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[18:03:37] [PASSED] X0L2 Normal sizes
[18:03:37] [PASSED] X0L2 Max sizes
[18:03:37] [PASSED] X0L2 Invalid pitch
[18:03:37] [PASSED] X0L2 Pitch greater than minimum required
[18:03:37] [PASSED] X0L2 Handle for inexistent plane
[18:03:37] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[18:03:37] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[18:03:37] [PASSED] X0L2 Valid modifier
[18:03:37] [PASSED] X0L2 Modifier for inexistent plane
[18:03:37] =========== [PASSED] drm_test_framebuffer_create ===========
[18:03:37] [PASSED] drm_test_framebuffer_free
[18:03:37] [PASSED] drm_test_framebuffer_init
[18:03:37] [PASSED] drm_test_framebuffer_init_bad_format
[18:03:37] [PASSED] drm_test_framebuffer_init_dev_mismatch
[18:03:37] [PASSED] drm_test_framebuffer_lookup
[18:03:37] [PASSED] drm_test_framebuffer_lookup_inexistent
[18:03:37] [PASSED] drm_test_framebuffer_modifiers_not_supported
[18:03:37] ================= [PASSED] drm_framebuffer =================
[18:03:37] ================ drm_gem_shmem (8 subtests) ================
[18:03:37] [PASSED] drm_gem_shmem_test_obj_create
[18:03:37] [PASSED] drm_gem_shmem_test_obj_create_private
[18:03:37] [PASSED] drm_gem_shmem_test_pin_pages
[18:03:37] [PASSED] drm_gem_shmem_test_vmap
[18:03:37] [PASSED] drm_gem_shmem_test_get_pages_sgt
[18:03:37] [PASSED] drm_gem_shmem_test_get_sg_table
[18:03:37] [PASSED] drm_gem_shmem_test_madvise
[18:03:37] [PASSED] drm_gem_shmem_test_purge
[18:03:37] ================== [PASSED] drm_gem_shmem ==================
[18:03:37] === drm_atomic_helper_connector_hdmi_check (27 subtests) ===
[18:03:37] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[18:03:37] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[18:03:37] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[18:03:37] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[18:03:37] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[18:03:37] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[18:03:37] ====== drm_test_check_broadcast_rgb_cea_mode_yuv420 =======
[18:03:37] [PASSED] Automatic
[18:03:37] [PASSED] Full
[18:03:37] [PASSED] Limited 16:235
[18:03:37] == [PASSED] drm_test_check_broadcast_rgb_cea_mode_yuv420 ===
[18:03:37] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[18:03:37] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[18:03:37] [PASSED] drm_test_check_disable_connector
[18:03:37] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[18:03:37] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_rgb
[18:03:37] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_yuv420
[18:03:37] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv422
[18:03:37] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv420
[18:03:37] [PASSED] drm_test_check_driver_unsupported_fallback_yuv420
[18:03:37] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[18:03:37] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[18:03:37] [PASSED] drm_test_check_output_bpc_dvi
[18:03:37] [PASSED] drm_test_check_output_bpc_format_vic_1
[18:03:37] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[18:03:37] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[18:03:37] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[18:03:37] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[18:03:37] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[18:03:37] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[18:03:37] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[18:03:37] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[18:03:37] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[18:03:37] [PASSED] drm_test_check_broadcast_rgb_value
[18:03:37] [PASSED] drm_test_check_bpc_8_value
[18:03:37] [PASSED] drm_test_check_bpc_10_value
[18:03:37] [PASSED] drm_test_check_bpc_12_value
[18:03:37] [PASSED] drm_test_check_format_value
[18:03:37] [PASSED] drm_test_check_tmds_char_value
[18:03:37] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[18:03:37] = drm_atomic_helper_connector_hdmi_mode_valid (4 subtests) =
[18:03:37] [PASSED] drm_test_check_mode_valid
[18:03:37] [PASSED] drm_test_check_mode_valid_reject
[18:03:37] [PASSED] drm_test_check_mode_valid_reject_rate
[18:03:37] [PASSED] drm_test_check_mode_valid_reject_max_clock
[18:03:37] === [PASSED] drm_atomic_helper_connector_hdmi_mode_valid ===
[18:03:37] ================= drm_managed (2 subtests) =================
[18:03:37] [PASSED] drm_test_managed_release_action
[18:03:37] [PASSED] drm_test_managed_run_action
[18:03:37] =================== [PASSED] drm_managed ===================
[18:03:37] =================== drm_mm (6 subtests) ====================
[18:03:37] [PASSED] drm_test_mm_init
[18:03:37] [PASSED] drm_test_mm_debug
[18:03:37] [PASSED] drm_test_mm_align32
[18:03:37] [PASSED] drm_test_mm_align64
[18:03:37] [PASSED] drm_test_mm_lowest
[18:03:37] [PASSED] drm_test_mm_highest
[18:03:37] ===================== [PASSED] drm_mm ======================
[18:03:37] ============= drm_modes_analog_tv (5 subtests) =============
[18:03:37] [PASSED] drm_test_modes_analog_tv_mono_576i
[18:03:37] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[18:03:37] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[18:03:37] [PASSED] drm_test_modes_analog_tv_pal_576i
[18:03:37] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[18:03:37] =============== [PASSED] drm_modes_analog_tv ===============
[18:03:37] ============== drm_plane_helper (2 subtests) ===============
[18:03:37] =============== drm_test_check_plane_state ================
[18:03:37] [PASSED] clipping_simple
[18:03:37] [PASSED] clipping_rotate_reflect
[18:03:37] [PASSED] positioning_simple
[18:03:37] [PASSED] upscaling
[18:03:37] [PASSED] downscaling
[18:03:37] [PASSED] rounding1
[18:03:37] [PASSED] rounding2
[18:03:37] [PASSED] rounding3
[18:03:37] [PASSED] rounding4
[18:03:37] =========== [PASSED] drm_test_check_plane_state ============
[18:03:37] =========== drm_test_check_invalid_plane_state ============
[18:03:37] [PASSED] positioning_invalid
[18:03:37] [PASSED] upscaling_invalid
[18:03:37] [PASSED] downscaling_invalid
[18:03:37] ======= [PASSED] drm_test_check_invalid_plane_state ========
[18:03:37] ================ [PASSED] drm_plane_helper =================
[18:03:37] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[18:03:37] ====== drm_test_connector_helper_tv_get_modes_check =======
[18:03:37] [PASSED] None
[18:03:37] [PASSED] PAL
[18:03:37] [PASSED] NTSC
[18:03:37] [PASSED] Both, NTSC Default
[18:03:37] [PASSED] Both, PAL Default
[18:03:37] [PASSED] Both, NTSC Default, with PAL on command-line
[18:03:37] [PASSED] Both, PAL Default, with NTSC on command-line
[18:03:37] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[18:03:37] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[18:03:37] ================== drm_rect (9 subtests) ===================
[18:03:37] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[18:03:37] [PASSED] drm_test_rect_clip_scaled_not_clipped
[18:03:37] [PASSED] drm_test_rect_clip_scaled_clipped
[18:03:37] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[18:03:37] ================= drm_test_rect_intersect =================
[18:03:37] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[18:03:37] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[18:03:37] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[18:03:37] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[18:03:37] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[18:03:37] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[18:03:37] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[18:03:37] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[18:03:37] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[18:03:37] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[18:03:37] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[18:03:37] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[18:03:37] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[18:03:37] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[18:03:37] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[18:03:37] ============= [PASSED] drm_test_rect_intersect =============
[18:03:37] ================ drm_test_rect_calc_hscale ================
[18:03:37] [PASSED] normal use
[18:03:37] [PASSED] out of max range
[18:03:37] [PASSED] out of min range
[18:03:37] [PASSED] zero dst
[18:03:37] [PASSED] negative src
[18:03:37] [PASSED] negative dst
[18:03:37] ============ [PASSED] drm_test_rect_calc_hscale ============
[18:03:37] ================ drm_test_rect_calc_vscale ================
[18:03:37] [PASSED] normal use
[18:03:37] [PASSED] out of max range
[18:03:37] [PASSED] out of min range
[18:03:37] [PASSED] zero dst
[18:03:37] [PASSED] negative src
[18:03:37] [PASSED] negative dst
[18:03:37] ============ [PASSED] drm_test_rect_calc_vscale ============
[18:03:37] ================== drm_test_rect_rotate ===================
[18:03:37] [PASSED] reflect-x
[18:03:37] [PASSED] reflect-y
[18:03:37] [PASSED] rotate-0
[18:03:37] [PASSED] rotate-90
[18:03:37] [PASSED] rotate-180
[18:03:37] [PASSED] rotate-270
[18:03:37] ============== [PASSED] drm_test_rect_rotate ===============
[18:03:37] ================ drm_test_rect_rotate_inv =================
[18:03:37] [PASSED] reflect-x
[18:03:37] [PASSED] reflect-y
[18:03:37] [PASSED] rotate-0
[18:03:37] [PASSED] rotate-90
[18:03:37] [PASSED] rotate-180
[18:03:37] [PASSED] rotate-270
[18:03:37] ============ [PASSED] drm_test_rect_rotate_inv =============
[18:03:37] ==================== [PASSED] drm_rect =====================
[18:03:37] ============ drm_sysfb_modeset_test (1 subtest) ============
[18:03:37] ============ drm_test_sysfb_build_fourcc_list =============
[18:03:37] [PASSED] no native formats
[18:03:37] [PASSED] XRGB8888 as native format
[18:03:37] [PASSED] remove duplicates
[18:03:37] [PASSED] convert alpha formats
[18:03:37] [PASSED] random formats
[18:03:37] ======== [PASSED] drm_test_sysfb_build_fourcc_list =========
[18:03:37] ============= [PASSED] drm_sysfb_modeset_test ==============
[18:03:37] ============================================================
[18:03:37] Testing complete. Ran 616 tests: passed: 616
[18:03:37] Elapsed time: 23.905s total, 1.692s configuring, 22.046s building, 0.144s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[18:03:38] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[18:03:39] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[18:03:47] Starting KUnit Kernel (1/1)...
[18:03:47] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[18:03:47] ================= ttm_device (5 subtests) ==================
[18:03:47] [PASSED] ttm_device_init_basic
[18:03:47] [PASSED] ttm_device_init_multiple
[18:03:47] [PASSED] ttm_device_fini_basic
[18:03:47] [PASSED] ttm_device_init_no_vma_man
[18:03:47] ================== ttm_device_init_pools ==================
[18:03:47] [PASSED] No DMA allocations, no DMA32 required
[18:03:47] [PASSED] DMA allocations, DMA32 required
[18:03:47] [PASSED] No DMA allocations, DMA32 required
[18:03:47] [PASSED] DMA allocations, no DMA32 required
[18:03:47] ============== [PASSED] ttm_device_init_pools ==============
[18:03:47] =================== [PASSED] ttm_device ====================
[18:03:47] ================== ttm_pool (8 subtests) ===================
[18:03:47] ================== ttm_pool_alloc_basic ===================
[18:03:47] [PASSED] One page
[18:03:47] [PASSED] More than one page
[18:03:47] [PASSED] Above the allocation limit
[18:03:47] [PASSED] One page, with coherent DMA mappings enabled
[18:03:47] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[18:03:47] ============== [PASSED] ttm_pool_alloc_basic ===============
[18:03:47] ============== ttm_pool_alloc_basic_dma_addr ==============
[18:03:47] [PASSED] One page
[18:03:47] [PASSED] More than one page
[18:03:47] [PASSED] Above the allocation limit
[18:03:47] [PASSED] One page, with coherent DMA mappings enabled
[18:03:47] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[18:03:47] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[18:03:47] [PASSED] ttm_pool_alloc_order_caching_match
[18:03:47] [PASSED] ttm_pool_alloc_caching_mismatch
[18:03:47] [PASSED] ttm_pool_alloc_order_mismatch
[18:03:47] [PASSED] ttm_pool_free_dma_alloc
[18:03:47] [PASSED] ttm_pool_free_no_dma_alloc
[18:03:47] [PASSED] ttm_pool_fini_basic
[18:03:47] ==================== [PASSED] ttm_pool =====================
[18:03:47] ================ ttm_resource (8 subtests) =================
[18:03:47] ================= ttm_resource_init_basic =================
[18:03:47] [PASSED] Init resource in TTM_PL_SYSTEM
[18:03:47] [PASSED] Init resource in TTM_PL_VRAM
[18:03:47] [PASSED] Init resource in a private placement
[18:03:47] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[18:03:47] ============= [PASSED] ttm_resource_init_basic =============
[18:03:47] [PASSED] ttm_resource_init_pinned
[18:03:47] [PASSED] ttm_resource_fini_basic
[18:03:47] [PASSED] ttm_resource_manager_init_basic
[18:03:47] [PASSED] ttm_resource_manager_usage_basic
[18:03:47] [PASSED] ttm_resource_manager_set_used_basic
[18:03:47] [PASSED] ttm_sys_man_alloc_basic
[18:03:47] [PASSED] ttm_sys_man_free_basic
[18:03:47] ================== [PASSED] ttm_resource ===================
[18:03:47] =================== ttm_tt (15 subtests) ===================
[18:03:47] ==================== ttm_tt_init_basic ====================
[18:03:47] [PASSED] Page-aligned size
[18:03:47] [PASSED] Extra pages requested
[18:03:47] ================ [PASSED] ttm_tt_init_basic ================
[18:03:47] [PASSED] ttm_tt_init_misaligned
[18:03:47] [PASSED] ttm_tt_fini_basic
[18:03:47] [PASSED] ttm_tt_fini_sg
[18:03:47] [PASSED] ttm_tt_fini_shmem
[18:03:47] [PASSED] ttm_tt_create_basic
[18:03:47] [PASSED] ttm_tt_create_invalid_bo_type
[18:03:47] [PASSED] ttm_tt_create_ttm_exists
[18:03:47] [PASSED] ttm_tt_create_failed
[18:03:47] [PASSED] ttm_tt_destroy_basic
[18:03:47] [PASSED] ttm_tt_populate_null_ttm
[18:03:47] [PASSED] ttm_tt_populate_populated_ttm
[18:03:47] [PASSED] ttm_tt_unpopulate_basic
[18:03:47] [PASSED] ttm_tt_unpopulate_empty_ttm
[18:03:47] [PASSED] ttm_tt_swapin_basic
[18:03:47] ===================== [PASSED] ttm_tt ======================
[18:03:47] =================== ttm_bo (14 subtests) ===================
[18:03:47] =========== ttm_bo_reserve_optimistic_no_ticket ===========
[18:03:47] [PASSED] Cannot be interrupted and sleeps
[18:03:47] [PASSED] Cannot be interrupted, locks straight away
[18:03:47] [PASSED] Can be interrupted, sleeps
[18:03:47] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[18:03:47] [PASSED] ttm_bo_reserve_locked_no_sleep
[18:03:47] [PASSED] ttm_bo_reserve_no_wait_ticket
[18:03:47] [PASSED] ttm_bo_reserve_double_resv
[18:03:47] [PASSED] ttm_bo_reserve_interrupted
[18:03:47] [PASSED] ttm_bo_reserve_deadlock
[18:03:47] [PASSED] ttm_bo_unreserve_basic
[18:03:47] [PASSED] ttm_bo_unreserve_pinned
[18:03:47] [PASSED] ttm_bo_unreserve_bulk
[18:03:47] [PASSED] ttm_bo_put_basic
[18:03:47] [PASSED] ttm_bo_put_shared_resv
[18:03:47] [PASSED] ttm_bo_pin_basic
[18:03:47] [PASSED] ttm_bo_pin_unpin_resource
[18:03:47] [PASSED] ttm_bo_multiple_pin_one_unpin
[18:03:47] ===================== [PASSED] ttm_bo ======================
[18:03:47] ============== ttm_bo_validate (21 subtests) ===============
[18:03:47] ============== ttm_bo_init_reserved_sys_man ===============
[18:03:47] [PASSED] Buffer object for userspace
[18:03:47] [PASSED] Kernel buffer object
[18:03:47] [PASSED] Shared buffer object
[18:03:47] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[18:03:47] ============== ttm_bo_init_reserved_mock_man ==============
[18:03:47] [PASSED] Buffer object for userspace
[18:03:47] [PASSED] Kernel buffer object
[18:03:47] [PASSED] Shared buffer object
[18:03:47] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[18:03:47] [PASSED] ttm_bo_init_reserved_resv
[18:03:47] ================== ttm_bo_validate_basic ==================
[18:03:47] [PASSED] Buffer object for userspace
[18:03:47] [PASSED] Kernel buffer object
[18:03:47] [PASSED] Shared buffer object
[18:03:47] ============== [PASSED] ttm_bo_validate_basic ==============
[18:03:47] [PASSED] ttm_bo_validate_invalid_placement
[18:03:47] ============= ttm_bo_validate_same_placement ==============
[18:03:47] [PASSED] System manager
[18:03:47] [PASSED] VRAM manager
[18:03:47] ========= [PASSED] ttm_bo_validate_same_placement ==========
[18:03:47] [PASSED] ttm_bo_validate_failed_alloc
[18:03:47] [PASSED] ttm_bo_validate_pinned
[18:03:47] [PASSED] ttm_bo_validate_busy_placement
[18:03:47] ================ ttm_bo_validate_multihop =================
[18:03:47] [PASSED] Buffer object for userspace
[18:03:47] [PASSED] Kernel buffer object
[18:03:47] [PASSED] Shared buffer object
[18:03:47] ============ [PASSED] ttm_bo_validate_multihop =============
[18:03:47] ========== ttm_bo_validate_no_placement_signaled ==========
[18:03:47] [PASSED] Buffer object in system domain, no page vector
[18:03:47] [PASSED] Buffer object in system domain with an existing page vector
[18:03:47] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[18:03:47] ======== ttm_bo_validate_no_placement_not_signaled ========
[18:03:47] [PASSED] Buffer object for userspace
[18:03:47] [PASSED] Kernel buffer object
[18:03:47] [PASSED] Shared buffer object
[18:03:47] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[18:03:47] [PASSED] ttm_bo_validate_move_fence_signaled
[18:03:47] ========= ttm_bo_validate_move_fence_not_signaled =========
[18:03:47] [PASSED] Waits for GPU
[18:03:47] [PASSED] Tries to lock straight away
[18:03:47] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[18:03:47] [PASSED] ttm_bo_validate_happy_evict
[18:03:47] [PASSED] ttm_bo_validate_all_pinned_evict
[18:03:47] [PASSED] ttm_bo_validate_allowed_only_evict
[18:03:47] [PASSED] ttm_bo_validate_deleted_evict
[18:03:47] [PASSED] ttm_bo_validate_busy_domain_evict
[18:03:47] [PASSED] ttm_bo_validate_evict_gutting
[18:03:47] [PASSED] ttm_bo_validate_recrusive_evict
[18:03:47] ================= [PASSED] ttm_bo_validate =================
[18:03:47] ============================================================
[18:03:47] Testing complete. Ran 101 tests: passed: 101
[18:03:47] Elapsed time: 9.776s total, 1.667s configuring, 7.893s building, 0.180s running
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
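To reproduce the TTM KUnit run above, a minimal local sketch that mirrors the logged commands; it assumes a kernel checkout at /kernel as in the CI script, so adjust paths for your own tree:
$ cd /kernel
$ ./tools/testing/kunit/kunit.py run --kunitconfig drivers/gpu/drm/ttm/tests/.kunitconfig
kunit.py then regenerates .kunit/.config with 'make ARCH=um O=.kunit olddefconfig', builds a UML kernel, boots it with kunit.enable=1, and parses the console output into the suite/subtest summary seen above. The preceding DRM-core run is invoked the same way, pointing --kunitconfig at that suite's .kunitconfig.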
* ✗ CI.checksparse: warning for MADVISE FOR XE (rev6)
2025-08-07 16:43 [PATCH v6 00/26] MADVISE FOR XE Himal Prasad Ghimiray
` (27 preceding siblings ...)
2025-08-07 18:03 ` ✓ CI.KUnit: success " Patchwork
@ 2025-08-07 18:18 ` Patchwork
2025-08-07 19:11 ` ✓ Xe.CI.BAT: success " Patchwork
2025-08-07 21:16 ` ✓ Xe.CI.Full: " Patchwork
30 siblings, 0 replies; 51+ messages in thread
From: Patchwork @ 2025-08-07 18:18 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe
== Series Details ==
Series: MADVISE FOR XE (rev6)
URL : https://patchwork.freedesktop.org/series/149550/
State : warning
== Summary ==
+ trap cleanup EXIT
+ KERNEL=/kernel
+ MT=/root/linux/maintainer-tools
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools /root/linux/maintainer-tools
Cloning into '/root/linux/maintainer-tools'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ make -C /root/linux/maintainer-tools
make: Entering directory '/root/linux/maintainer-tools'
cc -O2 -g -Wextra -o remap-log remap-log.c
make: Leaving directory '/root/linux/maintainer-tools'
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ /root/linux/maintainer-tools/dim sparse --fast 76741379fba333222be8a15bebf1e659eb84088a
Sparse version: 0.6.4 (Ubuntu: 0.6.4-4ubuntu3)
Fast mode used, each commit won't be checked separately.
-
+drivers/gpu/drm/i915/display/intel_display_types.h:2018:24: warning: unreplaced symbol '<noident>'
+drivers/gpu/drm/i915/display/intel_display_types.h:2018:24: warning: unreplaced symbol '<noident>'
+drivers/gpu/drm/i915/display/intel_psr.c: note: in included file:
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
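The checksparse step can be reproduced with the same maintainer-tools flow the log shows; a minimal sketch, assuming sparse is installed and the series is already applied in /kernel:
$ git clone https://gitlab.freedesktop.org/drm/maintainer-tools /root/linux/maintainer-tools
$ make -C /root/linux/maintainer-tools
$ cd /kernel
$ /root/linux/maintainer-tools/dim sparse --fast 76741379fba333222be8a15bebf1e659eb84088a
In fast mode dim checks the applied range as a whole rather than per commit, and the '+'-prefixed lines above are sparse warnings present with the series applied, here pointing at i915 display headers rather than the xe code the series touches.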
* ✓ Xe.CI.BAT: success for MADVISE FOR XE (rev6)
2025-08-07 16:43 [PATCH v6 00/26] MADVISE FOR XE Himal Prasad Ghimiray
` (28 preceding siblings ...)
2025-08-07 18:18 ` ✗ CI.checksparse: warning " Patchwork
@ 2025-08-07 19:11 ` Patchwork
2025-08-07 21:16 ` ✓ Xe.CI.Full: " Patchwork
30 siblings, 0 replies; 51+ messages in thread
From: Patchwork @ 2025-08-07 19:11 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe
== Series Details ==
Series: MADVISE FOR XE (rev6)
URL : https://patchwork.freedesktop.org/series/149550/
State : success
== Summary ==
CI Bug Log - changes from xe-3515-76741379fba333222be8a15bebf1e659eb84088a_BAT -> xe-pw-149550v6_BAT
====================================================
Summary
-------
**SUCCESS**
No regressions found.
Participating hosts (11 -> 9)
------------------------------
Missing (2): bat-adlp-vm bat-ptl-vm
Known issues
------------
Here are the changes found in xe-pw-149550v6_BAT that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@kms_flip@basic-plain-flip@d-edp1:
- bat-adlp-7: [PASS][1] -> [DMESG-WARN][2] ([Intel XE#4543]) +1 other test dmesg-warn
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/bat-adlp-7/igt@kms_flip@basic-plain-flip@d-edp1.html
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/bat-adlp-7/igt@kms_flip@basic-plain-flip@d-edp1.html
{name}: This element is suppressed. This means it is ignored when computing
the status of the difference (SUCCESS, WARNING, or FAILURE).
[Intel XE#4543]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4543
[Intel XE#5783]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5783
Build changes
-------------
* Linux: xe-3515-76741379fba333222be8a15bebf1e659eb84088a -> xe-pw-149550v6
IGT_8488: c4a9bee161f4bb74cbbf81c73b24c416ecf93976 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
xe-3515-76741379fba333222be8a15bebf1e659eb84088a: 76741379fba333222be8a15bebf1e659eb84088a
xe-pw-149550v6: 149550v6
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/index.html
* ✓ Xe.CI.Full: success for MADVISE FOR XE (rev6)
2025-08-07 16:43 [PATCH v6 00/26] MADVISE FOR XE Himal Prasad Ghimiray
` (29 preceding siblings ...)
2025-08-07 19:11 ` ✓ Xe.CI.BAT: success " Patchwork
@ 2025-08-07 21:16 ` Patchwork
30 siblings, 0 replies; 51+ messages in thread
From: Patchwork @ 2025-08-07 21:16 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe
== Series Details ==
Series: MADVISE FOR XE (rev6)
URL : https://patchwork.freedesktop.org/series/149550/
State : success
== Summary ==
CI Bug Log - changes from xe-3515-76741379fba333222be8a15bebf1e659eb84088a_FULL -> xe-pw-149550v6_FULL
====================================================
Summary
-------
**SUCCESS**
No regressions found.
Participating hosts (4 -> 4)
------------------------------
No changes in participating hosts
Known issues
------------
Here are the changes found in xe-pw-149550v6_FULL that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@core_getversion@all-cards:
- shard-dg2-set2: [PASS][1] -> [FAIL][2] ([Intel XE#4208])
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@core_getversion@all-cards.html
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@core_getversion@all-cards.html
* igt@fbdev@info:
- shard-dg2-set2: NOTRUN -> [SKIP][3] ([Intel XE#2134])
[3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@fbdev@info.html
* igt@fbdev@unaligned-read:
- shard-dg2-set2: [PASS][4] -> [SKIP][5] ([Intel XE#2134])
[4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@fbdev@unaligned-read.html
[5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@fbdev@unaligned-read.html
* igt@kms_addfb_basic@invalid-get-prop:
- shard-adlp: [PASS][6] -> [DMESG-WARN][7] ([Intel XE#2953] / [Intel XE#4173])
[6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-adlp-9/igt@kms_addfb_basic@invalid-get-prop.html
[7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-adlp-2/igt@kms_addfb_basic@invalid-get-prop.html
* igt@kms_async_flips@async-flip-suspend-resume@pipe-c-hdmi-a-1:
- shard-adlp: [PASS][8] -> [DMESG-WARN][9] ([Intel XE#4543]) +8 other tests dmesg-warn
[8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-adlp-4/igt@kms_async_flips@async-flip-suspend-resume@pipe-c-hdmi-a-1.html
[9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-adlp-6/igt@kms_async_flips@async-flip-suspend-resume@pipe-c-hdmi-a-1.html
* igt@kms_big_fb@4-tiled-32bpp-rotate-90:
- shard-dg2-set2: NOTRUN -> [SKIP][10] ([Intel XE#316]) +1 other test skip
[10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-434/igt@kms_big_fb@4-tiled-32bpp-rotate-90.html
* igt@kms_big_fb@4-tiled-64bpp-rotate-270:
- shard-bmg: NOTRUN -> [SKIP][11] ([Intel XE#2327])
[11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-8/igt@kms_big_fb@4-tiled-64bpp-rotate-270.html
- shard-adlp: NOTRUN -> [SKIP][12] ([Intel XE#1124])
[12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-adlp-8/igt@kms_big_fb@4-tiled-64bpp-rotate-270.html
- shard-lnl: NOTRUN -> [SKIP][13] ([Intel XE#1407])
[13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-lnl-2/igt@kms_big_fb@4-tiled-64bpp-rotate-270.html
* igt@kms_big_fb@y-tiled-64bpp-rotate-180:
- shard-dg2-set2: NOTRUN -> [SKIP][14] ([Intel XE#1124]) +3 other tests skip
[14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-466/igt@kms_big_fb@y-tiled-64bpp-rotate-180.html
* igt@kms_big_fb@y-tiled-64bpp-rotate-90:
- shard-bmg: NOTRUN -> [SKIP][15] ([Intel XE#1124]) +2 other tests skip
[15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-8/igt@kms_big_fb@y-tiled-64bpp-rotate-90.html
- shard-adlp: NOTRUN -> [SKIP][16] ([Intel XE#316])
[16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-adlp-8/igt@kms_big_fb@y-tiled-64bpp-rotate-90.html
* igt@kms_big_fb@y-tiled-addfb-size-overflow:
- shard-dg2-set2: NOTRUN -> [SKIP][17] ([Intel XE#610])
[17]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-434/igt@kms_big_fb@y-tiled-addfb-size-overflow.html
* igt@kms_bw@linear-tiling-3-displays-1920x1080p:
- shard-dg2-set2: NOTRUN -> [SKIP][18] ([Intel XE#367])
[18]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-434/igt@kms_bw@linear-tiling-3-displays-1920x1080p.html
* igt@kms_bw@linear-tiling-4-displays-2160x1440p:
- shard-bmg: NOTRUN -> [SKIP][19] ([Intel XE#367])
[19]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-2/igt@kms_bw@linear-tiling-4-displays-2160x1440p.html
* igt@kms_ccs@bad-pixel-format-4-tiled-mtl-rc-ccs-cc@pipe-a-hdmi-a-6:
- shard-dg2-set2: NOTRUN -> [SKIP][20] ([Intel XE#787]) +167 other tests skip
[20]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-433/igt@kms_ccs@bad-pixel-format-4-tiled-mtl-rc-ccs-cc@pipe-a-hdmi-a-6.html
* igt@kms_ccs@bad-pixel-format-yf-tiled-ccs:
- shard-dg2-set2: NOTRUN -> [SKIP][21] ([Intel XE#455] / [Intel XE#787]) +28 other tests skip
[21]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-434/igt@kms_ccs@bad-pixel-format-yf-tiled-ccs.html
* igt@kms_ccs@crc-primary-rotation-180-4-tiled-lnl-ccs:
- shard-adlp: NOTRUN -> [SKIP][22] ([Intel XE#2907])
[22]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-adlp-8/igt@kms_ccs@crc-primary-rotation-180-4-tiled-lnl-ccs.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs@pipe-d-dp-4:
- shard-dg2-set2: NOTRUN -> [INCOMPLETE][23] ([Intel XE#3862]) +1 other test incomplete
[23]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-434/igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-mc-ccs@pipe-d-dp-4.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs:
- shard-bmg: NOTRUN -> [SKIP][24] ([Intel XE#3432])
[24]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-2/igt@kms_ccs@crc-primary-suspend-4-tiled-mtl-rc-ccs.html
* igt@kms_ccs@crc-sprite-planes-basic-4-tiled-lnl-ccs@pipe-d-hdmi-a-3:
- shard-bmg: NOTRUN -> [SKIP][25] ([Intel XE#2652] / [Intel XE#787]) +17 other tests skip
[25]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-2/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-lnl-ccs@pipe-d-hdmi-a-3.html
* igt@kms_ccs@crc-sprite-planes-basic-y-tiled-gen12-rc-ccs:
- shard-bmg: NOTRUN -> [SKIP][26] ([Intel XE#2887]) +2 other tests skip
[26]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-8/igt@kms_ccs@crc-sprite-planes-basic-y-tiled-gen12-rc-ccs.html
* igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs-cc@pipe-d-dp-4:
- shard-dg2-set2: [PASS][27] -> [INCOMPLETE][28] ([Intel XE#1727] / [Intel XE#2705] / [Intel XE#3113] / [Intel XE#4212] / [Intel XE#4522]) +1 other test incomplete
[27]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-434/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs-cc@pipe-d-dp-4.html
[28]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-466/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs-cc@pipe-d-dp-4.html
* igt@kms_cdclk@mode-transition@pipe-d-dp-4:
- shard-dg2-set2: NOTRUN -> [SKIP][29] ([Intel XE#4417]) +3 other tests skip
[29]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-463/igt@kms_cdclk@mode-transition@pipe-d-dp-4.html
* igt@kms_chamelium_edid@hdmi-edid-stress-resolution-non-4k:
- shard-dg2-set2: NOTRUN -> [SKIP][30] ([Intel XE#373]) +1 other test skip
[30]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-434/igt@kms_chamelium_edid@hdmi-edid-stress-resolution-non-4k.html
* igt@kms_chamelium_frames@dp-crc-single:
- shard-adlp: NOTRUN -> [SKIP][31] ([Intel XE#373])
[31]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-adlp-8/igt@kms_chamelium_frames@dp-crc-single.html
- shard-bmg: NOTRUN -> [SKIP][32] ([Intel XE#2252])
[32]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-8/igt@kms_chamelium_frames@dp-crc-single.html
- shard-lnl: NOTRUN -> [SKIP][33] ([Intel XE#373])
[33]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-lnl-2/igt@kms_chamelium_frames@dp-crc-single.html
* igt@kms_content_protection@atomic@pipe-a-dp-4:
- shard-dg2-set2: NOTRUN -> [FAIL][34] ([Intel XE#1178])
[34]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-435/igt@kms_content_protection@atomic@pipe-a-dp-4.html
* igt@kms_content_protection@lic-type-0@pipe-a-dp-4:
- shard-dg2-set2: NOTRUN -> [FAIL][35] ([Intel XE#3304])
[35]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-435/igt@kms_content_protection@lic-type-0@pipe-a-dp-4.html
* igt@kms_content_protection@srm@pipe-a-dp-2:
- shard-bmg: NOTRUN -> [FAIL][36] ([Intel XE#1178])
[36]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-3/igt@kms_content_protection@srm@pipe-a-dp-2.html
* igt@kms_content_protection@uevent@pipe-a-dp-4:
- shard-dg2-set2: NOTRUN -> [FAIL][37] ([Intel XE#1188])
[37]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-463/igt@kms_content_protection@uevent@pipe-a-dp-4.html
* igt@kms_cursor_crc@cursor-onscreen-128x42:
- shard-bmg: NOTRUN -> [SKIP][38] ([Intel XE#2320])
[38]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-2/igt@kms_cursor_crc@cursor-onscreen-128x42.html
* igt@kms_cursor_crc@cursor-rapid-movement-512x512:
- shard-bmg: NOTRUN -> [SKIP][39] ([Intel XE#2321])
[39]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-2/igt@kms_cursor_crc@cursor-rapid-movement-512x512.html
* igt@kms_cursor_legacy@cursor-vs-flip-atomic-transitions:
- shard-dg2-set2: [PASS][40] -> [SKIP][41] ([Intel XE#4208] / [i915#2575]) +57 other tests skip
[40]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_cursor_legacy@cursor-vs-flip-atomic-transitions.html
[41]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_cursor_legacy@cursor-vs-flip-atomic-transitions.html
* igt@kms_cursor_legacy@cursora-vs-flipb-atomic-transitions-varying-size:
- shard-bmg: [PASS][42] -> [SKIP][43] ([Intel XE#2291]) +2 other tests skip
[42]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-bmg-7/igt@kms_cursor_legacy@cursora-vs-flipb-atomic-transitions-varying-size.html
[43]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-6/igt@kms_cursor_legacy@cursora-vs-flipb-atomic-transitions-varying-size.html
* igt@kms_cursor_legacy@cursora-vs-flipb-varying-size:
- shard-adlp: NOTRUN -> [SKIP][44] ([Intel XE#309])
[44]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-adlp-8/igt@kms_cursor_legacy@cursora-vs-flipb-varying-size.html
- shard-lnl: NOTRUN -> [SKIP][45] ([Intel XE#309])
[45]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-lnl-2/igt@kms_cursor_legacy@cursora-vs-flipb-varying-size.html
* igt@kms_cursor_legacy@flip-vs-cursor-legacy:
- shard-bmg: [PASS][46] -> [FAIL][47] ([Intel XE#5299])
[46]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-bmg-1/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html
[47]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-8/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html
* igt@kms_dither@fb-8bpc-vs-panel-6bpc@pipe-a-hdmi-a-6:
- shard-dg2-set2: NOTRUN -> [SKIP][48] ([Intel XE#4494] / [i915#3804])
[48]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-464/igt@kms_dither@fb-8bpc-vs-panel-6bpc@pipe-a-hdmi-a-6.html
* igt@kms_dp_aux_dev:
- shard-bmg: [PASS][49] -> [SKIP][50] ([Intel XE#3009])
[49]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-bmg-7/igt@kms_dp_aux_dev.html
[50]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-6/igt@kms_dp_aux_dev.html
* igt@kms_dp_link_training@uhbr-mst:
- shard-dg2-set2: NOTRUN -> [SKIP][51] ([Intel XE#4356])
[51]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-434/igt@kms_dp_link_training@uhbr-mst.html
* igt@kms_fbc_dirty_rect@fbc-dirty-rectangle-out-visible-area:
- shard-bmg: NOTRUN -> [SKIP][52] ([Intel XE#4422])
[52]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-2/igt@kms_fbc_dirty_rect@fbc-dirty-rectangle-out-visible-area.html
* igt@kms_fbcon_fbt@psr:
- shard-bmg: NOTRUN -> [SKIP][53] ([Intel XE#776])
[53]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-2/igt@kms_fbcon_fbt@psr.html
* igt@kms_feature_discovery@chamelium:
- shard-dg2-set2: NOTRUN -> [SKIP][54] ([Intel XE#701])
[54]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-434/igt@kms_feature_discovery@chamelium.html
* igt@kms_flip@2x-modeset-vs-vblank-race-interruptible:
- shard-bmg: [PASS][55] -> [SKIP][56] ([Intel XE#2316]) +1 other test skip
[55]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-bmg-2/igt@kms_flip@2x-modeset-vs-vblank-race-interruptible.html
[56]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-4/igt@kms_flip@2x-modeset-vs-vblank-race-interruptible.html
* igt@kms_flip@2x-single-buffer-flip-vs-dpms-off-vs-modeset:
- shard-adlp: NOTRUN -> [SKIP][57] ([Intel XE#310])
[57]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-adlp-8/igt@kms_flip@2x-single-buffer-flip-vs-dpms-off-vs-modeset.html
- shard-lnl: NOTRUN -> [SKIP][58] ([Intel XE#1421])
[58]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-lnl-2/igt@kms_flip@2x-single-buffer-flip-vs-dpms-off-vs-modeset.html
* igt@kms_flip@flip-vs-rmfb:
- shard-adlp: [PASS][59] -> [DMESG-WARN][60] ([Intel XE#4543] / [Intel XE#5208])
[59]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-adlp-4/igt@kms_flip@flip-vs-rmfb.html
[60]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-adlp-6/igt@kms_flip@flip-vs-rmfb.html
* igt@kms_flip_scaled_crc@flip-32bpp-linear-to-64bpp-linear-upscaling:
- shard-dg2-set2: NOTRUN -> [SKIP][61] ([Intel XE#2351] / [Intel XE#4208]) +7 other tests skip
[61]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_flip_scaled_crc@flip-32bpp-linear-to-64bpp-linear-upscaling.html
* igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-64bpp-yftile-downscaling:
- shard-dg2-set2: NOTRUN -> [SKIP][62] ([Intel XE#455]) +10 other tests skip
[62]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-466/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-64bpp-yftile-downscaling.html
* igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs-downscaling@pipe-a-valid-mode:
- shard-bmg: NOTRUN -> [SKIP][63] ([Intel XE#2293]) +1 other test skip
[63]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-1/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs-downscaling@pipe-a-valid-mode.html
* igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling:
- shard-bmg: NOTRUN -> [SKIP][64] ([Intel XE#2293] / [Intel XE#2380]) +1 other test skip
[64]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-2/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling.html
* igt@kms_force_connector_basic@force-edid:
- shard-adlp: [PASS][65] -> [ABORT][66] ([Intel XE#2953])
[65]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-adlp-9/igt@kms_force_connector_basic@force-edid.html
[66]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-adlp-2/igt@kms_force_connector_basic@force-edid.html
* igt@kms_frontbuffer_tracking@drrs-1p-pri-indfb-multidraw:
- shard-lnl: NOTRUN -> [SKIP][67] ([Intel XE#651]) +3 other tests skip
[67]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-lnl-2/igt@kms_frontbuffer_tracking@drrs-1p-pri-indfb-multidraw.html
* igt@kms_frontbuffer_tracking@drrs-1p-primscrn-cur-indfb-onoff:
- shard-dg2-set2: NOTRUN -> [SKIP][68] ([Intel XE#651]) +8 other tests skip
[68]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-466/igt@kms_frontbuffer_tracking@drrs-1p-primscrn-cur-indfb-onoff.html
* igt@kms_frontbuffer_tracking@fbc-1p-primscrn-pri-indfb-draw-mmap-wc:
- shard-dg2-set2: [PASS][69] -> [SKIP][70] ([Intel XE#2351] / [Intel XE#4208]) +4 other tests skip
[69]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-pri-indfb-draw-mmap-wc.html
[70]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-pri-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbc-2p-pri-indfb-multidraw:
- shard-adlp: NOTRUN -> [SKIP][71] ([Intel XE#656])
[71]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-adlp-8/igt@kms_frontbuffer_tracking@fbc-2p-pri-indfb-multidraw.html
- shard-lnl: NOTRUN -> [SKIP][72] ([Intel XE#656])
[72]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-lnl-2/igt@kms_frontbuffer_tracking@fbc-2p-pri-indfb-multidraw.html
* igt@kms_frontbuffer_tracking@fbc-tiling-linear:
- shard-bmg: NOTRUN -> [SKIP][73] ([Intel XE#5390]) +5 other tests skip
[73]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-2/igt@kms_frontbuffer_tracking@fbc-tiling-linear.html
* igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-spr-indfb-fullscreen:
- shard-bmg: NOTRUN -> [SKIP][74] ([Intel XE#2311]) +12 other tests skip
[74]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-8/igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-spr-indfb-fullscreen.html
- shard-adlp: NOTRUN -> [SKIP][75] ([Intel XE#651]) +4 other tests skip
[75]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-adlp-8/igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-spr-indfb-fullscreen.html
* igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-cur-indfb-draw-blt:
- shard-dg2-set2: NOTRUN -> [SKIP][76] ([Intel XE#653]) +8 other tests skip
[76]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-466/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-cur-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-cur-indfb-onoff:
- shard-adlp: NOTRUN -> [SKIP][77] ([Intel XE#653]) +1 other test skip
[77]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-adlp-8/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-cur-indfb-onoff.html
- shard-bmg: NOTRUN -> [SKIP][78] ([Intel XE#2313]) +5 other tests skip
[78]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-8/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-cur-indfb-onoff.html
* igt@kms_hdr@invalid-hdr:
- shard-bmg: NOTRUN -> [SKIP][79] ([Intel XE#1503])
[79]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-1/igt@kms_hdr@invalid-hdr.html
* igt@kms_hdr@static-swap:
- shard-bmg: [PASS][80] -> [SKIP][81] ([Intel XE#1503]) +1 other test skip
[80]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-bmg-2/igt@kms_hdr@static-swap.html
[81]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-4/igt@kms_hdr@static-swap.html
* igt@kms_invalid_mode@overflow-vrefresh:
- shard-dg2-set2: NOTRUN -> [SKIP][82] ([Intel XE#4208] / [i915#2575]) +11 other tests skip
[82]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_invalid_mode@overflow-vrefresh.html
* igt@kms_joiner@invalid-modeset-big-joiner:
- shard-dg2-set2: NOTRUN -> [SKIP][83] ([Intel XE#346])
[83]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-434/igt@kms_joiner@invalid-modeset-big-joiner.html
* igt@kms_joiner@switch-modeset-ultra-joiner-big-joiner:
- shard-dg2-set2: NOTRUN -> [SKIP][84] ([Intel XE#2925])
[84]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-466/igt@kms_joiner@switch-modeset-ultra-joiner-big-joiner.html
* igt@kms_multipipe_modeset@basic-max-pipe-crc-check:
- shard-dg2-set2: NOTRUN -> [SKIP][85] ([Intel XE#356])
[85]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-434/igt@kms_multipipe_modeset@basic-max-pipe-crc-check.html
* igt@kms_plane_scaling@2x-scaler-multi-pipe:
- shard-bmg: [PASS][86] -> [SKIP][87] ([Intel XE#2571])
[86]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-bmg-7/igt@kms_plane_scaling@2x-scaler-multi-pipe.html
[87]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-6/igt@kms_plane_scaling@2x-scaler-multi-pipe.html
* igt@kms_plane_scaling@planes-downscale-factor-0-5-upscale-20x20@pipe-b:
- shard-lnl: NOTRUN -> [SKIP][88] ([Intel XE#2763]) +3 other tests skip
[88]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-lnl-2/igt@kms_plane_scaling@planes-downscale-factor-0-5-upscale-20x20@pipe-b.html
* igt@kms_pm_dc@dc6-dpms:
- shard-dg2-set2: NOTRUN -> [SKIP][89] ([Intel XE#908])
[89]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-434/igt@kms_pm_dc@dc6-dpms.html
* igt@kms_psr2_sf@fbc-pr-plane-move-sf-dmg-area:
- shard-dg2-set2: NOTRUN -> [SKIP][90] ([Intel XE#1489]) +1 other test skip
[90]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-466/igt@kms_psr2_sf@fbc-pr-plane-move-sf-dmg-area.html
* igt@kms_psr2_sf@fbc-psr2-cursor-plane-update-sf:
- shard-bmg: NOTRUN -> [SKIP][91] ([Intel XE#1489])
[91]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-8/igt@kms_psr2_sf@fbc-psr2-cursor-plane-update-sf.html
- shard-adlp: NOTRUN -> [SKIP][92] ([Intel XE#1489])
[92]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-adlp-8/igt@kms_psr2_sf@fbc-psr2-cursor-plane-update-sf.html
- shard-lnl: NOTRUN -> [SKIP][93] ([Intel XE#2893] / [Intel XE#4608])
[93]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-lnl-2/igt@kms_psr2_sf@fbc-psr2-cursor-plane-update-sf.html
* igt@kms_psr2_sf@fbc-psr2-cursor-plane-update-sf@pipe-b-edp-1:
- shard-lnl: NOTRUN -> [SKIP][94] ([Intel XE#4608]) +1 other test skip
[94]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-lnl-2/igt@kms_psr2_sf@fbc-psr2-cursor-plane-update-sf@pipe-b-edp-1.html
* igt@kms_psr2_su@page_flip-xrgb8888:
- shard-bmg: NOTRUN -> [SKIP][95] ([Intel XE#2387])
[95]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-8/igt@kms_psr2_su@page_flip-xrgb8888.html
- shard-adlp: NOTRUN -> [SKIP][96] ([Intel XE#1122] / [Intel XE#5580])
[96]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-adlp-8/igt@kms_psr2_su@page_flip-xrgb8888.html
- shard-lnl: NOTRUN -> [SKIP][97] ([Intel XE#1128])
[97]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-lnl-2/igt@kms_psr2_su@page_flip-xrgb8888.html
* igt@kms_psr@fbc-pr-dpms:
- shard-dg2-set2: NOTRUN -> [SKIP][98] ([Intel XE#2850] / [Intel XE#929]) +6 other tests skip
[98]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-466/igt@kms_psr@fbc-pr-dpms.html
* igt@kms_psr@fbc-psr2-cursor-blt:
- shard-adlp: NOTRUN -> [SKIP][99] ([Intel XE#2850] / [Intel XE#929]) +1 other test skip
[99]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-adlp-8/igt@kms_psr@fbc-psr2-cursor-blt.html
- shard-lnl: NOTRUN -> [SKIP][100] ([Intel XE#5784])
[100]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-lnl-2/igt@kms_psr@fbc-psr2-cursor-blt.html
* igt@kms_psr@psr-primary-page-flip:
- shard-bmg: NOTRUN -> [SKIP][101] ([Intel XE#2234] / [Intel XE#2850]) +2 other tests skip
[101]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-8/igt@kms_psr@psr-primary-page-flip.html
* igt@kms_psr_stress_test@invalidate-primary-flip-overlay:
- shard-dg2-set2: NOTRUN -> [SKIP][102] ([Intel XE#2939])
[102]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-466/igt@kms_psr_stress_test@invalidate-primary-flip-overlay.html
* igt@kms_rotation_crc@multiplane-rotation-cropping-bottom:
- shard-adlp: NOTRUN -> [FAIL][103] ([Intel XE#1874])
[103]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-adlp-8/igt@kms_rotation_crc@multiplane-rotation-cropping-bottom.html
* igt@kms_setmode@invalid-clone-single-crtc-stealing:
- shard-bmg: [PASS][104] -> [SKIP][105] ([Intel XE#1435])
[104]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-bmg-7/igt@kms_setmode@invalid-clone-single-crtc-stealing.html
[105]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-6/igt@kms_setmode@invalid-clone-single-crtc-stealing.html
* igt@xe_copy_basic@mem-copy-linear-0xfd:
- shard-dg2-set2: NOTRUN -> [SKIP][106] ([Intel XE#1123])
[106]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-466/igt@xe_copy_basic@mem-copy-linear-0xfd.html
* igt@xe_eu_stall@blocking-re-enable:
- shard-dg2-set2: NOTRUN -> [SKIP][107] ([Intel XE#5626])
[107]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-434/igt@xe_eu_stall@blocking-re-enable.html
* igt@xe_eudebug@basic-vm-bind-extended-discovery:
- shard-dg2-set2: NOTRUN -> [SKIP][108] ([Intel XE#4837]) +3 other tests skip
[108]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-434/igt@xe_eudebug@basic-vm-bind-extended-discovery.html
* igt@xe_eudebug@basic-vm-bind-metadata-discovery:
- shard-bmg: NOTRUN -> [SKIP][109] ([Intel XE#4837]) +4 other tests skip
[109]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-1/igt@xe_eudebug@basic-vm-bind-metadata-discovery.html
* igt@xe_eudebug@connect-user:
- shard-adlp: NOTRUN -> [SKIP][110] ([Intel XE#4837] / [Intel XE#5565])
[110]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-adlp-8/igt@xe_eudebug@connect-user.html
- shard-lnl: NOTRUN -> [SKIP][111] ([Intel XE#4837])
[111]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-lnl-2/igt@xe_eudebug@connect-user.html
* igt@xe_exec_basic@multigpu-many-execqueues-many-vm-userptr-invalidate-race:
- shard-adlp: NOTRUN -> [SKIP][112] ([Intel XE#1392] / [Intel XE#5575])
[112]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-adlp-8/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-userptr-invalidate-race.html
- shard-bmg: NOTRUN -> [SKIP][113] ([Intel XE#2322]) +1 other test skip
[113]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-8/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-userptr-invalidate-race.html
- shard-lnl: NOTRUN -> [SKIP][114] ([Intel XE#1392])
[114]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-lnl-2/igt@xe_exec_basic@multigpu-many-execqueues-many-vm-userptr-invalidate-race.html
* igt@xe_exec_basic@multigpu-no-exec-userptr-invalidate-race:
- shard-dg2-set2: [PASS][115] -> [SKIP][116] ([Intel XE#1392]) +4 other tests skip
[115]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-466/igt@xe_exec_basic@multigpu-no-exec-userptr-invalidate-race.html
[116]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-432/igt@xe_exec_basic@multigpu-no-exec-userptr-invalidate-race.html
* igt@xe_exec_fault_mode@many-basic-prefetch:
- shard-dg2-set2: NOTRUN -> [SKIP][117] ([Intel XE#288]) +9 other tests skip
[117]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-466/igt@xe_exec_fault_mode@many-basic-prefetch.html
* igt@xe_exec_fault_mode@once-bindexecqueue-userptr-invalidate-imm:
- shard-adlp: NOTRUN -> [SKIP][118] ([Intel XE#288] / [Intel XE#5561]) +2 other tests skip
[118]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-adlp-8/igt@xe_exec_fault_mode@once-bindexecqueue-userptr-invalidate-imm.html
* igt@xe_exec_reset@gt-reset-stress:
- shard-adlp: [PASS][119] -> [ABORT][120] ([Intel XE#5729])
[119]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-adlp-1/igt@xe_exec_reset@gt-reset-stress.html
[120]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-adlp-6/igt@xe_exec_reset@gt-reset-stress.html
* igt@xe_exec_system_allocator@once-mmap-free-huge:
- shard-bmg: NOTRUN -> [SKIP][121] ([Intel XE#4943]) +7 other tests skip
[121]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-8/igt@xe_exec_system_allocator@once-mmap-free-huge.html
- shard-lnl: NOTRUN -> [SKIP][122] ([Intel XE#4943])
[122]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-lnl-2/igt@xe_exec_system_allocator@once-mmap-free-huge.html
* igt@xe_exec_system_allocator@once-mmap-free-huge-nomemset:
- shard-bmg: NOTRUN -> [INCOMPLETE][123] ([Intel XE#2594])
[123]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-1/igt@xe_exec_system_allocator@once-mmap-free-huge-nomemset.html
* igt@xe_exec_system_allocator@process-many-large-mmap-new-race:
- shard-adlp: NOTRUN -> [SKIP][124] ([Intel XE#4915]) +18 other tests skip
[124]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-adlp-8/igt@xe_exec_system_allocator@process-many-large-mmap-new-race.html
* igt@xe_exec_system_allocator@threads-many-large-mmap-shared-remap-dontunmap-eocheck:
- shard-dg2-set2: NOTRUN -> [SKIP][125] ([Intel XE#4915]) +94 other tests skip
[125]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-466/igt@xe_exec_system_allocator@threads-many-large-mmap-shared-remap-dontunmap-eocheck.html
* igt@xe_exec_threads@threads-hang-fd-rebind:
- shard-dg2-set2: [PASS][126] -> [DMESG-WARN][127] ([Intel XE#3876])
[126]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-434/igt@xe_exec_threads@threads-hang-fd-rebind.html
[127]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-433/igt@xe_exec_threads@threads-hang-fd-rebind.html
* igt@xe_live_ktest@xe_bo@xe_ccs_migrate_kunit:
- shard-bmg: NOTRUN -> [SKIP][128] ([Intel XE#2229])
[128]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-1/igt@xe_live_ktest@xe_bo@xe_ccs_migrate_kunit.html
* igt@xe_oa@buffer-fill:
- shard-dg2-set2: NOTRUN -> [SKIP][129] ([Intel XE#3573]) +3 other tests skip
[129]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-434/igt@xe_oa@buffer-fill.html
* igt@xe_oa@oa-exponents:
- shard-adlp: NOTRUN -> [SKIP][130] ([Intel XE#3573])
[130]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-adlp-8/igt@xe_oa@oa-exponents.html
* igt@xe_pat@display-vs-wb-transient:
- shard-dg2-set2: NOTRUN -> [SKIP][131] ([Intel XE#1337])
[131]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-434/igt@xe_pat@display-vs-wb-transient.html
* igt@xe_peer2peer@write@write-gpua-vram01-gpub-system-p2p:
- shard-dg2-set2: NOTRUN -> [FAIL][132] ([Intel XE#1173])
[132]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-466/igt@xe_peer2peer@write@write-gpua-vram01-gpub-system-p2p.html
* igt@xe_pm@d3cold-basic:
- shard-dg2-set2: NOTRUN -> [SKIP][133] ([Intel XE#2284] / [Intel XE#366])
[133]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-434/igt@xe_pm@d3cold-basic.html
* igt@xe_pm@s3-basic-exec:
- shard-dg2-set2: NOTRUN -> [SKIP][134] ([Intel XE#4208]) +63 other tests skip
[134]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@xe_pm@s3-basic-exec.html
* igt@xe_pm@vram-d3cold-threshold:
- shard-dg2-set2: NOTRUN -> [SKIP][135] ([Intel XE#579])
[135]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-466/igt@xe_pm@vram-d3cold-threshold.html
* igt@xe_pxp@pxp-termination-key-update-post-suspend:
- shard-dg2-set2: NOTRUN -> [SKIP][136] ([Intel XE#4733]) +1 other test skip
[136]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-466/igt@xe_pxp@pxp-termination-key-update-post-suspend.html
* igt@xe_query@multigpu-query-topology:
- shard-dg2-set2: NOTRUN -> [SKIP][137] ([Intel XE#944]) +1 other test skip
[137]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-434/igt@xe_query@multigpu-query-topology.html
* igt@xe_render_copy@render-stress-2-copies:
- shard-dg2-set2: NOTRUN -> [SKIP][138] ([Intel XE#4814])
[138]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-466/igt@xe_render_copy@render-stress-2-copies.html
* igt@xe_sriov_flr@flr-vf1-clear:
- shard-dg2-set2: NOTRUN -> [SKIP][139] ([Intel XE#3342]) +1 other test skip
[139]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-466/igt@xe_sriov_flr@flr-vf1-clear.html
* igt@xe_vm@mmap-style-bind-userptr-one-partial:
- shard-dg2-set2: [PASS][140] -> [SKIP][141] ([Intel XE#4208]) +113 other tests skip
[140]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@xe_vm@mmap-style-bind-userptr-one-partial.html
[141]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@xe_vm@mmap-style-bind-userptr-one-partial.html
#### Possible fixes ####
* igt@fbdev@pan:
- shard-dg2-set2: [SKIP][142] ([Intel XE#2134]) -> [PASS][143]
[142]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@fbdev@pan.html
[143]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-464/igt@fbdev@pan.html
* igt@kms_addfb_basic@bad-pitch-128:
- shard-adlp: [DMESG-WARN][144] ([Intel XE#2953] / [Intel XE#4173]) -> [PASS][145] +3 other tests pass
[144]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-adlp-6/igt@kms_addfb_basic@bad-pitch-128.html
[145]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-adlp-1/igt@kms_addfb_basic@bad-pitch-128.html
* igt@kms_atomic@plane-invalid-params-fence:
- shard-dg2-set2: [SKIP][146] ([Intel XE#4208] / [i915#2575]) -> [PASS][147] +66 other tests pass
[146]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_atomic@plane-invalid-params-fence.html
[147]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-433/igt@kms_atomic@plane-invalid-params-fence.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs:
- shard-bmg: [INCOMPLETE][148] ([Intel XE#3862]) -> [PASS][149] +1 other test pass
[148]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-bmg-5/igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs.html
[149]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-1/igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs.html
* igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs:
- shard-dg2-set2: [INCOMPLETE][150] ([Intel XE#1727] / [Intel XE#3113] / [Intel XE#3124] / [Intel XE#4345]) -> [PASS][151]
[150]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-463/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs.html
[151]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-434/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs.html
* igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs@pipe-c-hdmi-a-6:
- shard-dg2-set2: [INCOMPLETE][152] ([Intel XE#1727] / [Intel XE#3113] / [Intel XE#3124]) -> [PASS][153]
[152]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-463/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs@pipe-c-hdmi-a-6.html
[153]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-434/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs@pipe-c-hdmi-a-6.html
* igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size:
- shard-bmg: [SKIP][154] ([Intel XE#2291]) -> [PASS][155] +5 other tests pass
[154]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-bmg-6/igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size.html
[155]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-3/igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size.html
* igt@kms_display_modes@extended-mode-basic:
- shard-bmg: [SKIP][156] ([Intel XE#4302]) -> [PASS][157]
[156]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-bmg-6/igt@kms_display_modes@extended-mode-basic.html
[157]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-3/igt@kms_display_modes@extended-mode-basic.html
* igt@kms_dp_linktrain_fallback@dp-fallback:
- shard-bmg: [SKIP][158] ([Intel XE#4294]) -> [PASS][159]
[158]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-bmg-6/igt@kms_dp_linktrain_fallback@dp-fallback.html
[159]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-7/igt@kms_dp_linktrain_fallback@dp-fallback.html
* igt@kms_draw_crc@fill-fb:
- shard-dg2-set2: [SKIP][160] ([Intel XE#2351] / [Intel XE#4208]) -> [PASS][161] +9 other tests pass
[160]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_draw_crc@fill-fb.html
[161]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-433/igt@kms_draw_crc@fill-fb.html
* igt@kms_flip@2x-flip-vs-blocking-wf-vblank:
- shard-bmg: [SKIP][162] ([Intel XE#2316]) -> [PASS][163] +6 other tests pass
[162]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-bmg-6/igt@kms_flip@2x-flip-vs-blocking-wf-vblank.html
[163]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-1/igt@kms_flip@2x-flip-vs-blocking-wf-vblank.html
* igt@kms_flip@2x-flip-vs-suspend-interruptible@ad-hdmi-a6-dp4:
- shard-dg2-set2: [INCOMPLETE][164] ([Intel XE#2049] / [Intel XE#2597]) -> [PASS][165] +1 other test pass
[164]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_flip@2x-flip-vs-suspend-interruptible@ad-hdmi-a6-dp4.html
[165]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-466/igt@kms_flip@2x-flip-vs-suspend-interruptible@ad-hdmi-a6-dp4.html
* igt@kms_flip@dpms-off-confusion@c-hdmi-a1:
- shard-adlp: [DMESG-WARN][166] ([Intel XE#4543]) -> [PASS][167] +7 other tests pass
[166]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-adlp-2/igt@kms_flip@dpms-off-confusion@c-hdmi-a1.html
[167]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-adlp-3/igt@kms_flip@dpms-off-confusion@c-hdmi-a1.html
* igt@kms_flip@flip-vs-expired-vblank@c-edp1:
- shard-lnl: [FAIL][168] ([Intel XE#301] / [Intel XE#3149]) -> [PASS][169] +1 other test pass
[168]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-lnl-2/igt@kms_flip@flip-vs-expired-vblank@c-edp1.html
[169]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-lnl-1/igt@kms_flip@flip-vs-expired-vblank@c-edp1.html
* igt@kms_flip@flip-vs-suspend-interruptible:
- shard-bmg: [INCOMPLETE][170] ([Intel XE#2049] / [Intel XE#2597]) -> [PASS][171] +1 other test pass
[170]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-bmg-1/igt@kms_flip@flip-vs-suspend-interruptible.html
[171]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-8/igt@kms_flip@flip-vs-suspend-interruptible.html
* igt@kms_flip_scaled_crc@flip-32bpp-4tile-to-64bpp-4tile-downscaling:
- shard-dg2-set2: [SKIP][172] ([Intel XE#4208]) -> [PASS][173] +136 other tests pass
[172]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_flip_scaled_crc@flip-32bpp-4tile-to-64bpp-4tile-downscaling.html
[173]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-464/igt@kms_flip_scaled_crc@flip-32bpp-4tile-to-64bpp-4tile-downscaling.html
* igt@kms_hdr@static-toggle-suspend:
- shard-bmg: [SKIP][174] ([Intel XE#1503]) -> [PASS][175]
[174]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-bmg-6/igt@kms_hdr@static-toggle-suspend.html
[175]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-3/igt@kms_hdr@static-toggle-suspend.html
* igt@kms_joiner@basic-force-big-joiner:
- shard-bmg: [SKIP][176] ([Intel XE#3012]) -> [PASS][177]
[176]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-bmg-6/igt@kms_joiner@basic-force-big-joiner.html
[177]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-3/igt@kms_joiner@basic-force-big-joiner.html
* igt@kms_plane_multiple@2x-tiling-x:
- shard-bmg: [SKIP][178] ([Intel XE#4596]) -> [PASS][179]
[178]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-bmg-6/igt@kms_plane_multiple@2x-tiling-x.html
[179]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-3/igt@kms_plane_multiple@2x-tiling-x.html
* igt@kms_vrr@flipline:
- shard-lnl: [FAIL][180] ([Intel XE#4227]) -> [PASS][181] +1 other test pass
[180]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-lnl-8/igt@kms_vrr@flipline.html
[181]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-lnl-7/igt@kms_vrr@flipline.html
* {igt@xe_compute_preempt@compute-preempt-many-vram-evict@engine-drm_xe_engine_class_compute}:
- shard-bmg: [ABORT][182] ([Intel XE#3970]) -> [PASS][183] +1 other test pass
[182]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-bmg-8/igt@xe_compute_preempt@compute-preempt-many-vram-evict@engine-drm_xe_engine_class_compute.html
[183]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-2/igt@xe_compute_preempt@compute-preempt-many-vram-evict@engine-drm_xe_engine_class_compute.html
* igt@xe_exec_basic@multigpu-once-basic-defer-bind:
- shard-dg2-set2: [SKIP][184] ([Intel XE#1392]) -> [PASS][185] +1 other test pass
[184]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-432/igt@xe_exec_basic@multigpu-once-basic-defer-bind.html
[185]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-463/igt@xe_exec_basic@multigpu-once-basic-defer-bind.html
* igt@xe_live_ktest@xe_bo:
- shard-dg2-set2: [SKIP][186] ([Intel XE#2229] / [Intel XE#455]) -> [PASS][187] +1 other test pass
[186]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@xe_live_ktest@xe_bo.html
[187]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-464/igt@xe_live_ktest@xe_bo.html
* igt@xe_live_ktest@xe_bo@xe_ccs_migrate_kunit:
- shard-dg2-set2: [SKIP][188] ([Intel XE#2229]) -> [PASS][189]
[188]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@xe_live_ktest@xe_bo@xe_ccs_migrate_kunit.html
[189]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-464/igt@xe_live_ktest@xe_bo@xe_ccs_migrate_kunit.html
* {igt@xe_pmu@engine-activity-render-node-idle@engine-drm_xe_engine_class_video_decode0}:
- shard-adlp: [FAIL][190] -> [PASS][191]
[190]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-adlp-3/igt@xe_pmu@engine-activity-render-node-idle@engine-drm_xe_engine_class_video_decode0.html
[191]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-adlp-4/igt@xe_pmu@engine-activity-render-node-idle@engine-drm_xe_engine_class_video_decode0.html
#### Warnings ####
* igt@kms_async_flips@async-flip-suspend-resume:
- shard-adlp: [DMESG-WARN][192] ([Intel XE#2953] / [Intel XE#4173] / [Intel XE#4543]) -> [DMESG-WARN][193] ([Intel XE#4543]) +1 other test dmesg-warn
[192]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-adlp-4/igt@kms_async_flips@async-flip-suspend-resume.html
[193]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-adlp-6/igt@kms_async_flips@async-flip-suspend-resume.html
* igt@kms_big_fb@linear-32bpp-rotate-90:
- shard-dg2-set2: [SKIP][194] ([Intel XE#316]) -> [SKIP][195] ([Intel XE#4208])
[194]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_big_fb@linear-32bpp-rotate-90.html
[195]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_big_fb@linear-32bpp-rotate-90.html
* igt@kms_big_fb@linear-64bpp-rotate-270:
- shard-dg2-set2: [SKIP][196] ([Intel XE#316]) -> [SKIP][197] ([Intel XE#2351] / [Intel XE#4208])
[196]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-435/igt@kms_big_fb@linear-64bpp-rotate-270.html
[197]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_big_fb@linear-64bpp-rotate-270.html
* igt@kms_big_fb@x-tiled-64bpp-rotate-90:
- shard-dg2-set2: [SKIP][198] ([Intel XE#4208]) -> [SKIP][199] ([Intel XE#316]) +1 other test skip
[198]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_big_fb@x-tiled-64bpp-rotate-90.html
[199]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-464/igt@kms_big_fb@x-tiled-64bpp-rotate-90.html
* igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-hflip:
- shard-dg2-set2: [SKIP][200] ([Intel XE#1124]) -> [SKIP][201] ([Intel XE#2351] / [Intel XE#4208])
[200]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-hflip.html
[201]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-hflip.html
* igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0:
- shard-dg2-set2: [SKIP][202] ([Intel XE#1124]) -> [SKIP][203] ([Intel XE#4208]) +4 other tests skip
[202]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0.html
[203]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-0.html
* igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-0-hflip-async-flip:
- shard-dg2-set2: [SKIP][204] ([Intel XE#4208]) -> [SKIP][205] ([Intel XE#1124]) +5 other tests skip
[204]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-0-hflip-async-flip.html
[205]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-464/igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-0-hflip-async-flip.html
* igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip:
- shard-dg2-set2: [SKIP][206] ([Intel XE#2351] / [Intel XE#4208]) -> [SKIP][207] ([Intel XE#1124]) +2 other tests skip
[206]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip.html
[207]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-433/igt@kms_big_fb@yf-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip.html
* igt@kms_bw@connected-linear-tiling-3-displays-2160x1440p:
- shard-dg2-set2: [SKIP][208] ([Intel XE#2191]) -> [SKIP][209] ([Intel XE#4208] / [i915#2575])
[208]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_bw@connected-linear-tiling-3-displays-2160x1440p.html
[209]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_bw@connected-linear-tiling-3-displays-2160x1440p.html
* igt@kms_bw@connected-linear-tiling-4-displays-3840x2160p:
- shard-dg2-set2: [SKIP][210] ([Intel XE#4208] / [i915#2575]) -> [SKIP][211] ([Intel XE#2191]) +1 other test skip
[210]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_bw@connected-linear-tiling-4-displays-3840x2160p.html
[211]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-464/igt@kms_bw@connected-linear-tiling-4-displays-3840x2160p.html
* igt@kms_bw@linear-tiling-3-displays-3840x2160p:
- shard-dg2-set2: [SKIP][212] ([Intel XE#4208] / [i915#2575]) -> [SKIP][213] ([Intel XE#367]) +1 other test skip
[212]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_bw@linear-tiling-3-displays-3840x2160p.html
[213]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-433/igt@kms_bw@linear-tiling-3-displays-3840x2160p.html
* igt@kms_bw@linear-tiling-4-displays-2560x1440p:
- shard-dg2-set2: [SKIP][214] ([Intel XE#367]) -> [SKIP][215] ([Intel XE#4208] / [i915#2575]) +2 other tests skip
[214]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_bw@linear-tiling-4-displays-2560x1440p.html
[215]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_bw@linear-tiling-4-displays-2560x1440p.html
* igt@kms_ccs@bad-aux-stride-y-tiled-gen12-rc-ccs:
- shard-dg2-set2: [SKIP][216] ([Intel XE#455] / [Intel XE#787]) -> [SKIP][217] ([Intel XE#4208]) +5 other tests skip
[216]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_ccs@bad-aux-stride-y-tiled-gen12-rc-ccs.html
[217]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_ccs@bad-aux-stride-y-tiled-gen12-rc-ccs.html
* igt@kms_ccs@bad-pixel-format-4-tiled-mtl-rc-ccs-cc:
- shard-dg2-set2: [SKIP][218] ([Intel XE#2351] / [Intel XE#4208]) -> [SKIP][219] ([Intel XE#455] / [Intel XE#787]) +1 other test skip
[218]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_ccs@bad-pixel-format-4-tiled-mtl-rc-ccs-cc.html
[219]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-433/igt@kms_ccs@bad-pixel-format-4-tiled-mtl-rc-ccs-cc.html
* igt@kms_ccs@bad-rotation-90-4-tiled-bmg-ccs:
- shard-dg2-set2: [SKIP][220] ([Intel XE#2907]) -> [SKIP][221] ([Intel XE#4208]) +1 other test skip
[220]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-435/igt@kms_ccs@bad-rotation-90-4-tiled-bmg-ccs.html
[221]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_ccs@bad-rotation-90-4-tiled-bmg-ccs.html
* igt@kms_ccs@crc-primary-basic-4-tiled-mtl-mc-ccs:
- shard-dg2-set2: [SKIP][222] ([Intel XE#4208]) -> [SKIP][223] ([Intel XE#455] / [Intel XE#787]) +5 other tests skip
[222]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_ccs@crc-primary-basic-4-tiled-mtl-mc-ccs.html
[223]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-464/igt@kms_ccs@crc-primary-basic-4-tiled-mtl-mc-ccs.html
* igt@kms_ccs@crc-primary-rotation-180-4-tiled-bmg-ccs:
- shard-dg2-set2: [SKIP][224] ([Intel XE#4208]) -> [SKIP][225] ([Intel XE#2907])
[224]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_ccs@crc-primary-rotation-180-4-tiled-bmg-ccs.html
[225]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-435/igt@kms_ccs@crc-primary-rotation-180-4-tiled-bmg-ccs.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs:
- shard-dg2-set2: [SKIP][226] ([Intel XE#4208]) -> [SKIP][227] ([Intel XE#3442]) +1 other test skip
[226]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs.html
[227]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-464/igt@kms_ccs@crc-primary-suspend-4-tiled-bmg-ccs.html
* igt@kms_ccs@missing-ccs-buffer-y-tiled-ccs:
- shard-dg2-set2: [SKIP][228] ([Intel XE#455] / [Intel XE#787]) -> [SKIP][229] ([Intel XE#2351] / [Intel XE#4208]) +1 other test skip
[228]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_ccs@missing-ccs-buffer-y-tiled-ccs.html
[229]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_ccs@missing-ccs-buffer-y-tiled-ccs.html
* igt@kms_chamelium_audio@dp-audio:
- shard-dg2-set2: [SKIP][230] ([Intel XE#373]) -> [SKIP][231] ([Intel XE#4208] / [i915#2575]) +6 other tests skip
[230]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_chamelium_audio@dp-audio.html
[231]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_chamelium_audio@dp-audio.html
* igt@kms_chamelium_color@ctm-0-50:
- shard-dg2-set2: [SKIP][232] ([Intel XE#4208] / [i915#2575]) -> [SKIP][233] ([Intel XE#306]) +2 other tests skip
[232]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_chamelium_color@ctm-0-50.html
[233]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-433/igt@kms_chamelium_color@ctm-0-50.html
* igt@kms_chamelium_color@gamma:
- shard-dg2-set2: [SKIP][234] ([Intel XE#306]) -> [SKIP][235] ([Intel XE#4208] / [i915#2575]) +1 other test skip
[234]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-435/igt@kms_chamelium_color@gamma.html
[235]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_chamelium_color@gamma.html
* igt@kms_chamelium_hpd@hdmi-hpd-with-enabled-mode:
- shard-dg2-set2: [SKIP][236] ([Intel XE#4208] / [i915#2575]) -> [SKIP][237] ([Intel XE#373]) +6 other tests skip
[236]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_chamelium_hpd@hdmi-hpd-with-enabled-mode.html
[237]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-433/igt@kms_chamelium_hpd@hdmi-hpd-with-enabled-mode.html
* igt@kms_content_protection@dp-mst-lic-type-1:
- shard-dg2-set2: [SKIP][238] ([Intel XE#4208] / [i915#2575]) -> [SKIP][239] ([Intel XE#307])
[238]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_content_protection@dp-mst-lic-type-1.html
[239]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-464/igt@kms_content_protection@dp-mst-lic-type-1.html
* igt@kms_content_protection@legacy:
- shard-dg2-set2: [FAIL][240] ([Intel XE#1178]) -> [SKIP][241] ([Intel XE#4208] / [i915#2575])
[240]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-435/igt@kms_content_protection@legacy.html
[241]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_content_protection@legacy.html
* igt@kms_content_protection@lic-type-0:
- shard-dg2-set2: [SKIP][242] ([Intel XE#4208] / [i915#2575]) -> [FAIL][243] ([Intel XE#1178]) +1 other test fail
[242]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_content_protection@lic-type-0.html
[243]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-435/igt@kms_content_protection@lic-type-0.html
* igt@kms_content_protection@srm:
- shard-bmg: [SKIP][244] ([Intel XE#2341]) -> [FAIL][245] ([Intel XE#1178])
[244]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-bmg-6/igt@kms_content_protection@srm.html
[245]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-3/igt@kms_content_protection@srm.html
* igt@kms_cursor_crc@cursor-sliding-512x512:
- shard-dg2-set2: [SKIP][246] ([Intel XE#308]) -> [SKIP][247] ([Intel XE#4208] / [i915#2575])
[246]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_cursor_crc@cursor-sliding-512x512.html
[247]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_cursor_crc@cursor-sliding-512x512.html
* igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle:
- shard-dg2-set2: [SKIP][248] ([Intel XE#4208] / [i915#2575]) -> [SKIP][249] ([Intel XE#323]) +1 other test skip
[248]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle.html
[249]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-464/igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle.html
* igt@kms_dp_link_training@non-uhbr-mst:
- shard-dg2-set2: [SKIP][250] ([Intel XE#4208]) -> [SKIP][251] ([Intel XE#4354])
[250]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_dp_link_training@non-uhbr-mst.html
[251]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-435/igt@kms_dp_link_training@non-uhbr-mst.html
* igt@kms_dsc@dsc-with-formats:
- shard-dg2-set2: [SKIP][252] ([Intel XE#2351] / [Intel XE#4208]) -> [SKIP][253] ([Intel XE#455])
[252]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_dsc@dsc-with-formats.html
[253]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-433/igt@kms_dsc@dsc-with-formats.html
* igt@kms_fbc_dirty_rect@fbc-dirty-rectangle-different-formats:
- shard-dg2-set2: [SKIP][254] ([Intel XE#4422]) -> [SKIP][255] ([Intel XE#4208])
[254]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_fbc_dirty_rect@fbc-dirty-rectangle-different-formats.html
[255]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_fbc_dirty_rect@fbc-dirty-rectangle-different-formats.html
* igt@kms_fbcon_fbt@psr-suspend:
- shard-dg2-set2: [SKIP][256] ([Intel XE#776]) -> [SKIP][257] ([Intel XE#4208])
[256]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_fbcon_fbt@psr-suspend.html
[257]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_fbcon_fbt@psr-suspend.html
* igt@kms_feature_discovery@dp-mst:
- shard-dg2-set2: [SKIP][258] ([Intel XE#1137]) -> [SKIP][259] ([Intel XE#4208] / [i915#2575])
[258]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_feature_discovery@dp-mst.html
[259]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_feature_discovery@dp-mst.html
* igt@kms_feature_discovery@psr2:
- shard-dg2-set2: [SKIP][260] ([Intel XE#1135]) -> [SKIP][261] ([Intel XE#4208] / [i915#2575])
[260]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_feature_discovery@psr2.html
[261]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_feature_discovery@psr2.html
* igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-upscaling:
- shard-dg2-set2: [SKIP][262] ([Intel XE#4208]) -> [SKIP][263] ([Intel XE#455]) +2 other tests skip
[262]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-upscaling.html
[263]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-435/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-upscaling.html
* igt@kms_flip_scaled_crc@flip-64bpp-linear-to-16bpp-linear-downscaling:
- shard-lnl: [SKIP][264] ([Intel XE#1397] / [Intel XE#1745]) -> [ABORT][265] ([Intel XE#4760])
[264]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-lnl-5/igt@kms_flip_scaled_crc@flip-64bpp-linear-to-16bpp-linear-downscaling.html
[265]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-lnl-3/igt@kms_flip_scaled_crc@flip-64bpp-linear-to-16bpp-linear-downscaling.html
* igt@kms_flip_scaled_crc@flip-64bpp-linear-to-16bpp-linear-downscaling@pipe-a-default-mode:
- shard-lnl: [SKIP][266] ([Intel XE#1397]) -> [ABORT][267] ([Intel XE#4760])
[266]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-lnl-5/igt@kms_flip_scaled_crc@flip-64bpp-linear-to-16bpp-linear-downscaling@pipe-a-default-mode.html
[267]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-lnl-3/igt@kms_flip_scaled_crc@flip-64bpp-linear-to-16bpp-linear-downscaling@pipe-a-default-mode.html
* igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling:
- shard-dg2-set2: [SKIP][268] ([Intel XE#455]) -> [SKIP][269] ([Intel XE#4208]) +2 other tests skip
[268]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling.html
[269]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling.html
* igt@kms_frontbuffer_tracking@drrs-2p-primscrn-cur-indfb-onoff:
- shard-dg2-set2: [SKIP][270] ([Intel XE#4208]) -> [SKIP][271] ([Intel XE#651]) +12 other tests skip
[270]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-cur-indfb-onoff.html
[271]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-464/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-cur-indfb-onoff.html
* igt@kms_frontbuffer_tracking@drrs-2p-primscrn-pri-indfb-draw-mmap-wc:
- shard-bmg: [SKIP][272] ([Intel XE#2311]) -> [SKIP][273] ([Intel XE#2312]) +11 other tests skip
[272]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-bmg-3/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-pri-indfb-draw-mmap-wc.html
[273]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-6/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-pri-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-pri-indfb-draw-mmap-wc:
- shard-bmg: [SKIP][274] ([Intel XE#2312]) -> [SKIP][275] ([Intel XE#2311]) +15 other tests skip
[274]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-bmg-6/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-pri-indfb-draw-mmap-wc.html
[275]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-7/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-pri-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@drrs-suspend:
- shard-dg2-set2: [SKIP][276] ([Intel XE#2351] / [Intel XE#4208]) -> [SKIP][277] ([Intel XE#651]) +9 other tests skip
[276]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_frontbuffer_tracking@drrs-suspend.html
[277]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-435/igt@kms_frontbuffer_tracking@drrs-suspend.html
* igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-render:
- shard-bmg: [SKIP][278] ([Intel XE#2312]) -> [SKIP][279] ([Intel XE#5390]) +9 other tests skip
[278]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-bmg-6/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-render.html
[279]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-3/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-render.html
* igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-shrfb-pgflip-blt:
- shard-bmg: [SKIP][280] ([Intel XE#5390]) -> [SKIP][281] ([Intel XE#2312]) +7 other tests skip
[280]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-bmg-7/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-shrfb-pgflip-blt.html
[281]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-6/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-shrfb-pgflip-blt.html
* igt@kms_frontbuffer_tracking@fbc-tiling-y:
- shard-dg2-set2: [SKIP][282] ([Intel XE#2351] / [Intel XE#4208]) -> [SKIP][283] ([Intel XE#658]) +1 other test skip
[282]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_frontbuffer_tracking@fbc-tiling-y.html
[283]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-433/igt@kms_frontbuffer_tracking@fbc-tiling-y.html
* igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-pri-shrfb-draw-blt:
- shard-dg2-set2: [SKIP][284] ([Intel XE#651]) -> [SKIP][285] ([Intel XE#2351] / [Intel XE#4208]) +3 other tests skip
[284]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-pri-shrfb-draw-blt.html
[285]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-pri-shrfb-draw-blt.html
* igt@kms_frontbuffer_tracking@fbcdrrs-indfb-scaledprimary:
- shard-dg2-set2: [SKIP][286] ([Intel XE#651]) -> [SKIP][287] ([Intel XE#4208]) +13 other tests skip
[286]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_frontbuffer_tracking@fbcdrrs-indfb-scaledprimary.html
[287]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_frontbuffer_tracking@fbcdrrs-indfb-scaledprimary.html
* igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-draw-blt:
- shard-dg2-set2: [SKIP][288] ([Intel XE#653]) -> [SKIP][289] ([Intel XE#4208]) +14 other tests skip
[288]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-draw-blt.html
[289]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-pri-shrfb-draw-render:
- shard-dg2-set2: [SKIP][290] ([Intel XE#653]) -> [SKIP][291] ([Intel XE#2351] / [Intel XE#4208]) +3 other tests skip
[290]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-pri-shrfb-draw-render.html
[291]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-pri-shrfb-draw-render.html
* igt@kms_frontbuffer_tracking@fbcpsr-slowdraw:
- shard-dg2-set2: [SKIP][292] ([Intel XE#2351] / [Intel XE#4208]) -> [SKIP][293] ([Intel XE#653]) +8 other tests skip
[292]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_frontbuffer_tracking@fbcpsr-slowdraw.html
[293]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-433/igt@kms_frontbuffer_tracking@fbcpsr-slowdraw.html
* igt@kms_frontbuffer_tracking@psr-2p-primscrn-cur-indfb-onoff:
- shard-bmg: [SKIP][294] ([Intel XE#2313]) -> [SKIP][295] ([Intel XE#2312]) +11 other tests skip
[294]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-bmg-7/igt@kms_frontbuffer_tracking@psr-2p-primscrn-cur-indfb-onoff.html
[295]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-6/igt@kms_frontbuffer_tracking@psr-2p-primscrn-cur-indfb-onoff.html
* igt@kms_frontbuffer_tracking@psr-2p-primscrn-pri-indfb-draw-blt:
- shard-dg2-set2: [SKIP][296] ([Intel XE#4208]) -> [SKIP][297] ([Intel XE#653]) +12 other tests skip
[296]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_frontbuffer_tracking@psr-2p-primscrn-pri-indfb-draw-blt.html
[297]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-435/igt@kms_frontbuffer_tracking@psr-2p-primscrn-pri-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-indfb-draw-mmap-wc:
- shard-bmg: [SKIP][298] ([Intel XE#2312]) -> [SKIP][299] ([Intel XE#2313]) +16 other tests skip
[298]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-bmg-6/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-indfb-draw-mmap-wc.html
[299]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-7/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-pri-indfb-draw-mmap-wc.html
* igt@kms_hdr@brightness-with-hdr:
- shard-bmg: [SKIP][300] ([Intel XE#3374] / [Intel XE#3544]) -> [SKIP][301] ([Intel XE#3544])
[300]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-bmg-6/igt@kms_hdr@brightness-with-hdr.html
[301]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-1/igt@kms_hdr@brightness-with-hdr.html
* igt@kms_hdr@invalid-hdr:
- shard-dg2-set2: [SKIP][302] ([Intel XE#4208] / [i915#2575]) -> [SKIP][303] ([Intel XE#455]) +4 other tests skip
[302]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_hdr@invalid-hdr.html
[303]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-464/igt@kms_hdr@invalid-hdr.html
* igt@kms_joiner@invalid-modeset-ultra-joiner:
- shard-dg2-set2: [SKIP][304] ([Intel XE#2927]) -> [SKIP][305] ([Intel XE#4208])
[304]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_joiner@invalid-modeset-ultra-joiner.html
[305]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_joiner@invalid-modeset-ultra-joiner.html
* igt@kms_plane_multiple@2x-tiling-yf:
- shard-bmg: [SKIP][306] ([Intel XE#5021]) -> [SKIP][307] ([Intel XE#4596])
[306]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-bmg-2/igt@kms_plane_multiple@2x-tiling-yf.html
[307]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-4/igt@kms_plane_multiple@2x-tiling-yf.html
- shard-dg2-set2: [SKIP][308] ([Intel XE#5021]) -> [SKIP][309] ([Intel XE#4208] / [i915#2575])
[308]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_plane_multiple@2x-tiling-yf.html
[309]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_plane_multiple@2x-tiling-yf.html
* igt@kms_pm_backlight@basic-brightness:
- shard-dg2-set2: [SKIP][310] ([Intel XE#870]) -> [SKIP][311] ([Intel XE#4208])
[310]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_pm_backlight@basic-brightness.html
[311]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_pm_backlight@basic-brightness.html
* igt@kms_psr2_sf@pr-cursor-plane-move-continuous-sf:
- shard-dg2-set2: [SKIP][312] ([Intel XE#4208]) -> [SKIP][313] ([Intel XE#1489]) +5 other tests skip
[312]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_psr2_sf@pr-cursor-plane-move-continuous-sf.html
[313]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-435/igt@kms_psr2_sf@pr-cursor-plane-move-continuous-sf.html
* igt@kms_psr2_sf@pr-cursor-plane-update-sf:
- shard-dg2-set2: [SKIP][314] ([Intel XE#1489]) -> [SKIP][315] ([Intel XE#4208]) +3 other tests skip
[314]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_psr2_sf@pr-cursor-plane-update-sf.html
[315]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_psr2_sf@pr-cursor-plane-update-sf.html
* igt@kms_psr@fbc-psr2-primary-render:
- shard-dg2-set2: [SKIP][316] ([Intel XE#2850] / [Intel XE#929]) -> [SKIP][317] ([Intel XE#4208]) +5 other tests skip
[316]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_psr@fbc-psr2-primary-render.html
[317]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_psr@fbc-psr2-primary-render.html
* igt@kms_psr@psr-dpms:
- shard-dg2-set2: [SKIP][318] ([Intel XE#2351] / [Intel XE#4208]) -> [SKIP][319] ([Intel XE#2850] / [Intel XE#929]) +1 other test skip
[318]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_psr@psr-dpms.html
[319]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-464/igt@kms_psr@psr-dpms.html
* igt@kms_psr@psr-sprite-blt:
- shard-dg2-set2: [SKIP][320] ([Intel XE#2850] / [Intel XE#929]) -> [SKIP][321] ([Intel XE#2351] / [Intel XE#4208]) +2 other tests skip
[320]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_psr@psr-sprite-blt.html
[321]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_psr@psr-sprite-blt.html
* igt@kms_psr@psr2-basic:
- shard-dg2-set2: [SKIP][322] ([Intel XE#4208]) -> [SKIP][323] ([Intel XE#2850] / [Intel XE#929]) +8 other tests skip
[322]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_psr@psr2-basic.html
[323]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-433/igt@kms_psr@psr2-basic.html
* igt@kms_rotation_crc@primary-y-tiled-reflect-x-0:
- shard-dg2-set2: [SKIP][324] ([Intel XE#1127]) -> [SKIP][325] ([Intel XE#4208] / [i915#2575])
[324]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-435/igt@kms_rotation_crc@primary-y-tiled-reflect-x-0.html
[325]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_rotation_crc@primary-y-tiled-reflect-x-0.html
* igt@kms_rotation_crc@primary-y-tiled-reflect-x-270:
- shard-dg2-set2: [SKIP][326] ([Intel XE#4208] / [i915#2575]) -> [SKIP][327] ([Intel XE#3414]) +1 other test skip
[326]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@kms_rotation_crc@primary-y-tiled-reflect-x-270.html
[327]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-435/igt@kms_rotation_crc@primary-y-tiled-reflect-x-270.html
* igt@kms_tiled_display@basic-test-pattern:
- shard-dg2-set2: [FAIL][328] ([Intel XE#1729]) -> [SKIP][329] ([Intel XE#4208] / [i915#2575])
[328]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_tiled_display@basic-test-pattern.html
[329]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_tiled_display@basic-test-pattern.html
* igt@kms_vrr@flip-dpms:
- shard-dg2-set2: [SKIP][330] ([Intel XE#455]) -> [SKIP][331] ([Intel XE#4208] / [i915#2575]) +3 other tests skip
[330]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@kms_vrr@flip-dpms.html
[331]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@kms_vrr@flip-dpms.html
* igt@sriov_basic@enable-vfs-autoprobe-off:
- shard-dg2-set2: [SKIP][332] ([Intel XE#4208] / [i915#2575]) -> [SKIP][333] ([Intel XE#1091] / [Intel XE#2849])
[332]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@sriov_basic@enable-vfs-autoprobe-off.html
[333]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-464/igt@sriov_basic@enable-vfs-autoprobe-off.html
* igt@xe_copy_basic@mem-copy-linear-0x369:
- shard-dg2-set2: [SKIP][334] ([Intel XE#4208]) -> [SKIP][335] ([Intel XE#1123])
[334]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@xe_copy_basic@mem-copy-linear-0x369.html
[335]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-464/igt@xe_copy_basic@mem-copy-linear-0x369.html
* igt@xe_eu_stall@blocking-read:
- shard-dg2-set2: [SKIP][336] ([Intel XE#4208]) -> [SKIP][337] ([Intel XE#5626]) +1 other test skip
[336]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@xe_eu_stall@blocking-read.html
[337]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-433/igt@xe_eu_stall@blocking-read.html
* igt@xe_eu_stall@invalid-gt-id:
- shard-dg2-set2: [SKIP][338] ([Intel XE#5626]) -> [SKIP][339] ([Intel XE#4208]) +1 other test skip
[338]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-435/igt@xe_eu_stall@invalid-gt-id.html
[339]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@xe_eu_stall@invalid-gt-id.html
* igt@xe_eudebug@vm-bind-clear-faultable:
- shard-dg2-set2: [SKIP][340] ([Intel XE#4208]) -> [SKIP][341] ([Intel XE#4837]) +9 other tests skip
[340]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@xe_eudebug@vm-bind-clear-faultable.html
[341]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-464/igt@xe_eudebug@vm-bind-clear-faultable.html
* igt@xe_eudebug_online@debugger-reopen:
- shard-dg2-set2: [SKIP][342] ([Intel XE#4837]) -> [SKIP][343] ([Intel XE#4208]) +7 other tests skip
[342]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@xe_eudebug_online@debugger-reopen.html
[343]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@xe_eudebug_online@debugger-reopen.html
* igt@xe_exec_fault_mode@once-rebind-imm:
- shard-dg2-set2: [SKIP][344] ([Intel XE#4208]) -> [SKIP][345] ([Intel XE#288]) +16 other tests skip
[344]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@xe_exec_fault_mode@once-rebind-imm.html
[345]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-464/igt@xe_exec_fault_mode@once-rebind-imm.html
* igt@xe_exec_fault_mode@twice-userptr-rebind-imm:
- shard-dg2-set2: [SKIP][346] ([Intel XE#288]) -> [SKIP][347] ([Intel XE#4208]) +14 other tests skip
[346]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-435/igt@xe_exec_fault_mode@twice-userptr-rebind-imm.html
[347]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@xe_exec_fault_mode@twice-userptr-rebind-imm.html
* igt@xe_exec_mix_modes@exec-spinner-interrupted-dma-fence:
- shard-dg2-set2: [SKIP][348] ([Intel XE#4208]) -> [SKIP][349] ([Intel XE#2360]) +1 other test skip
[348]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@xe_exec_mix_modes@exec-spinner-interrupted-dma-fence.html
[349]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-433/igt@xe_exec_mix_modes@exec-spinner-interrupted-dma-fence.html
* igt@xe_exec_system_allocator@threads-shared-vm-many-execqueues-mmap-shared-nomemset:
- shard-dg2-set2: [SKIP][350] ([Intel XE#4915]) -> [SKIP][351] ([Intel XE#4208]) +157 other tests skip
[350]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@xe_exec_system_allocator@threads-shared-vm-many-execqueues-mmap-shared-nomemset.html
[351]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@xe_exec_system_allocator@threads-shared-vm-many-execqueues-mmap-shared-nomemset.html
* igt@xe_exec_system_allocator@threads-shared-vm-many-stride-mmap-remap-eocheck:
- shard-dg2-set2: [SKIP][352] ([Intel XE#4208]) -> [SKIP][353] ([Intel XE#4915]) +181 other tests skip
[352]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@xe_exec_system_allocator@threads-shared-vm-many-stride-mmap-remap-eocheck.html
[353]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-464/igt@xe_exec_system_allocator@threads-shared-vm-many-stride-mmap-remap-eocheck.html
* igt@xe_media_fill@media-fill:
- shard-dg2-set2: [SKIP][354] ([Intel XE#560]) -> [SKIP][355] ([Intel XE#4208])
[354]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@xe_media_fill@media-fill.html
[355]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@xe_media_fill@media-fill.html
* igt@xe_mmap@small-bar:
- shard-dg2-set2: [SKIP][356] ([Intel XE#4208]) -> [SKIP][357] ([Intel XE#512])
[356]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@xe_mmap@small-bar.html
[357]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-464/igt@xe_mmap@small-bar.html
* igt@xe_oa@oa-unit-exclusive-stream-sample-oa:
- shard-dg2-set2: [SKIP][358] ([Intel XE#3573]) -> [SKIP][359] ([Intel XE#4208]) +4 other tests skip
[358]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@xe_oa@oa-unit-exclusive-stream-sample-oa.html
[359]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@xe_oa@oa-unit-exclusive-stream-sample-oa.html
* igt@xe_oa@whitelisted-registers-userspace-config:
- shard-dg2-set2: [SKIP][360] ([Intel XE#4208]) -> [SKIP][361] ([Intel XE#3573]) +5 other tests skip
[360]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@xe_oa@whitelisted-registers-userspace-config.html
[361]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-464/igt@xe_oa@whitelisted-registers-userspace-config.html
* igt@xe_peer2peer@write:
- shard-dg2-set2: [SKIP][362] ([Intel XE#1061]) -> [FAIL][363] ([Intel XE#1173])
[362]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-432/igt@xe_peer2peer@write.html
[363]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-466/igt@xe_peer2peer@write.html
* igt@xe_pm@d3cold-multiple-execs:
- shard-dg2-set2: [SKIP][364] ([Intel XE#4208]) -> [SKIP][365] ([Intel XE#2284] / [Intel XE#366])
[364]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@xe_pm@d3cold-multiple-execs.html
[365]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-433/igt@xe_pm@d3cold-multiple-execs.html
* igt@xe_pmu@all-fn-engine-activity-load:
- shard-dg2-set2: [SKIP][366] ([Intel XE#4650]) -> [SKIP][367] ([Intel XE#4208])
[366]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@xe_pmu@all-fn-engine-activity-load.html
[367]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@xe_pmu@all-fn-engine-activity-load.html
* igt@xe_pmu@fn-engine-activity-sched-if-idle:
- shard-bmg: [ABORT][368] ([Intel XE#3970]) -> [DMESG-WARN][369] ([Intel XE#3876])
[368]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-bmg-8/igt@xe_pmu@fn-engine-activity-sched-if-idle.html
[369]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-bmg-1/igt@xe_pmu@fn-engine-activity-sched-if-idle.html
* igt@xe_pxp@display-pxp-fb:
- shard-dg2-set2: [SKIP][370] ([Intel XE#4208]) -> [SKIP][371] ([Intel XE#4733]) +1 other test skip
[370]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@xe_pxp@display-pxp-fb.html
[371]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-433/igt@xe_pxp@display-pxp-fb.html
* igt@xe_pxp@pxp-src-to-pxp-dest-rendercopy:
- shard-dg2-set2: [SKIP][372] ([Intel XE#4733]) -> [SKIP][373] ([Intel XE#4208]) +1 other test skip
[372]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@xe_pxp@pxp-src-to-pxp-dest-rendercopy.html
[373]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@xe_pxp@pxp-src-to-pxp-dest-rendercopy.html
* igt@xe_query@multigpu-query-gt-list:
- shard-dg2-set2: [SKIP][374] ([Intel XE#944]) -> [SKIP][375] ([Intel XE#4208]) +1 other test skip
[374]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-435/igt@xe_query@multigpu-query-gt-list.html
[375]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@xe_query@multigpu-query-gt-list.html
* igt@xe_query@multigpu-query-invalid-query:
- shard-dg2-set2: [SKIP][376] ([Intel XE#4208]) -> [SKIP][377] ([Intel XE#944])
[376]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@xe_query@multigpu-query-invalid-query.html
[377]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-433/igt@xe_query@multigpu-query-invalid-query.html
* igt@xe_spin_batch@spin-mem-copy:
- shard-dg2-set2: [SKIP][378] ([Intel XE#4208]) -> [SKIP][379] ([Intel XE#4821])
[378]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@xe_spin_batch@spin-mem-copy.html
[379]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-464/igt@xe_spin_batch@spin-mem-copy.html
* igt@xe_sriov_auto_provisioning@fair-allocation:
- shard-dg2-set2: [SKIP][380] ([Intel XE#4130]) -> [SKIP][381] ([Intel XE#4208])
[380]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-464/igt@xe_sriov_auto_provisioning@fair-allocation.html
[381]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-436/igt@xe_sriov_auto_provisioning@fair-allocation.html
* igt@xe_sriov_flr@flr-vfs-parallel:
- shard-dg2-set2: [SKIP][382] ([Intel XE#4208]) -> [SKIP][383] ([Intel XE#4273])
[382]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-3515-76741379fba333222be8a15bebf1e659eb84088a/shard-dg2-436/igt@xe_sriov_flr@flr-vfs-parallel.html
[383]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/shard-dg2-433/igt@xe_sriov_flr@flr-vfs-parallel.html
{name}: This element is suppressed. This means it is ignored when computing
the status of the difference (SUCCESS, WARNING, or FAILURE).
[Intel XE#1061]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1061
[Intel XE#1091]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1091
[Intel XE#1122]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1122
[Intel XE#1123]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1123
[Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124
[Intel XE#1127]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1127
[Intel XE#1128]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1128
[Intel XE#1135]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1135
[Intel XE#1137]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1137
[Intel XE#1173]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1173
[Intel XE#1178]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1178
[Intel XE#1188]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1188
[Intel XE#1337]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1337
[Intel XE#1392]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1392
[Intel XE#1397]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1397
[Intel XE#1407]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1407
[Intel XE#1421]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1421
[Intel XE#1435]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1435
[Intel XE#1489]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1489
[Intel XE#1503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1503
[Intel XE#1727]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1727
[Intel XE#1729]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1729
[Intel XE#1745]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1745
[Intel XE#1874]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1874
[Intel XE#2049]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2049
[Intel XE#2134]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2134
[Intel XE#2191]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2191
[Intel XE#2229]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2229
[Intel XE#2234]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2234
[Intel XE#2252]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2252
[Intel XE#2284]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2284
[Intel XE#2291]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2291
[Intel XE#2293]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2293
[Intel XE#2311]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2311
[Intel XE#2312]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2312
[Intel XE#2313]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2313
[Intel XE#2316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2316
[Intel XE#2320]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2320
[Intel XE#2321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2321
[Intel XE#2322]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2322
[Intel XE#2327]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2327
[Intel XE#2341]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2341
[Intel XE#2351]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2351
[Intel XE#2360]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2360
[Intel XE#2380]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2380
[Intel XE#2387]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2387
[Intel XE#2571]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2571
[Intel XE#2594]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2594
[Intel XE#2597]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2597
[Intel XE#2652]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2652
[Intel XE#2705]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2705
[Intel XE#2763]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2763
[Intel XE#2849]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2849
[Intel XE#2850]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2850
[Intel XE#288]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/288
[Intel XE#2887]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2887
[Intel XE#2893]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2893
[Intel XE#2907]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2907
[Intel XE#2925]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2925
[Intel XE#2927]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2927
[Intel XE#2939]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2939
[Intel XE#2953]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2953
[Intel XE#3009]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3009
[Intel XE#301]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/301
[Intel XE#3012]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3012
[Intel XE#306]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/306
[Intel XE#307]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/307
[Intel XE#308]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/308
[Intel XE#309]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/309
[Intel XE#310]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/310
[Intel XE#3113]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3113
[Intel XE#3124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3124
[Intel XE#3149]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3149
[Intel XE#316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/316
[Intel XE#323]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/323
[Intel XE#3304]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3304
[Intel XE#3342]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3342
[Intel XE#3374]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3374
[Intel XE#3414]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3414
[Intel XE#3432]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3432
[Intel XE#3442]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3442
[Intel XE#346]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/346
[Intel XE#3544]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3544
[Intel XE#356]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/356
[Intel XE#3573]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3573
[Intel XE#366]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/366
[Intel XE#367]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/367
[Intel XE#373]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/373
[Intel XE#3862]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3862
[Intel XE#3876]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3876
[Intel XE#3970]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3970
[Intel XE#4130]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4130
[Intel XE#4173]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4173
[Intel XE#4208]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4208
[Intel XE#4212]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4212
[Intel XE#4227]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4227
[Intel XE#4273]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4273
[Intel XE#4294]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4294
[Intel XE#4302]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4302
[Intel XE#4345]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4345
[Intel XE#4354]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4354
[Intel XE#4356]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4356
[Intel XE#4417]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4417
[Intel XE#4422]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4422
[Intel XE#4494]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4494
[Intel XE#4522]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4522
[Intel XE#4543]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4543
[Intel XE#455]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/455
[Intel XE#4596]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4596
[Intel XE#4608]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4608
[Intel XE#4650]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4650
[Intel XE#4733]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4733
[Intel XE#4760]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4760
[Intel XE#4814]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4814
[Intel XE#4821]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4821
[Intel XE#4837]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4837
[Intel XE#4915]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4915
[Intel XE#4943]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4943
[Intel XE#5021]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5021
[Intel XE#512]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/512
[Intel XE#5208]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5208
[Intel XE#5299]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5299
[Intel XE#5300]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5300
[Intel XE#5390]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5390
[Intel XE#5503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5503
[Intel XE#5561]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5561
[Intel XE#5565]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5565
[Intel XE#5575]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5575
[Intel XE#5580]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5580
[Intel XE#560]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/560
[Intel XE#5626]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5626
[Intel XE#5729]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5729
[Intel XE#5784]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5784
[Intel XE#579]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/579
[Intel XE#610]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/610
[Intel XE#651]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/651
[Intel XE#653]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/653
[Intel XE#656]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/656
[Intel XE#658]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/658
[Intel XE#701]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/701
[Intel XE#776]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/776
[Intel XE#787]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/787
[Intel XE#870]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/870
[Intel XE#908]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/908
[Intel XE#929]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/929
[Intel XE#944]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/944
[i915#2575]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2575
[i915#3804]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3804
Build changes
-------------
* Linux: xe-3515-76741379fba333222be8a15bebf1e659eb84088a -> xe-pw-149550v6
IGT_8488: c4a9bee161f4bb74cbbf81c73b24c416ecf93976 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
xe-3515-76741379fba333222be8a15bebf1e659eb84088a: 76741379fba333222be8a15bebf1e659eb84088a
xe-pw-149550v6: 149550v6
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-149550v6/index.html
[-- Attachment #2: Type: text/html, Size: 125871 bytes --]
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v6 19/26] drm/xe/svm: Consult madvise preferred location in prefetch
2025-08-07 16:43 ` [PATCH v6 19/26] drm/xe/svm: Consult madvise preferred location in prefetch Himal Prasad Ghimiray
@ 2025-08-08 0:30 ` Matthew Brost
0 siblings, 0 replies; 51+ messages in thread
From: Matthew Brost @ 2025-08-08 0:30 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, Thomas Hellström
On Thu, Aug 07, 2025 at 10:13:31PM +0530, Himal Prasad Ghimiray wrote:
> When the prefetch region is DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC, prefetch SVM
> ranges to the preferred location provided by madvise.
>
> v2 (Matthew Brost)
> - Fix region, devmem_fd usages
> - consulting madvise is applicable for other VMAs too.
>
> v3
> - Fix atomic handling
>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/xe/xe_vm.c | 30 ++++++++++++++++++++++--------
> 1 file changed, 22 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index aa8d4c4fe0f0..ae966755255d 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -38,6 +38,7 @@
> #include "xe_res_cursor.h"
> #include "xe_svm.h"
> #include "xe_sync.h"
> +#include "xe_tile.h"
> #include "xe_trace_bo.h"
> #include "xe_wa.h"
> #include "xe_hmm.h"
> @@ -2913,15 +2914,28 @@ static int prefetch_ranges(struct xe_vm *vm, struct xe_vma_op *op)
> int err = 0;
>
> struct xe_svm_range *svm_range;
> + struct drm_pagemap *dpagemap;
> struct drm_gpusvm_ctx ctx = {};
> - struct xe_tile *tile;
> + struct xe_tile *tile = NULL;
> unsigned long i;
> u32 region;
>
> if (!xe_vma_is_cpu_addr_mirror(vma))
> return 0;
>
> - region = op->prefetch_range.region;
> + if (op->prefetch_range.region == DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC) {
> + dpagemap = xe_vma_resolve_pagemap(vma, xe_device_get_root_tile(vm->xe));
> + /*
> + * TODO: Once multigpu support is enabled will need
> + * something to dereference tile from dpagemap.
> + */
> + if (dpagemap)
> + tile = xe_device_get_root_tile(vm->xe);
> + } else {
> + region = op->prefetch_range.region;
> + if (region)
> + tile = &vm->xe->tiles[region_to_mem_type[region] - XE_PL_VRAM0];
> + }
You need similar logic in vm_bind_ioctl_ops_create ahead of
xe_svm_range_validate.
Maybe pull this out into a helper to do this calculation there and
stick the result in the op which could be used here.
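Roughly like this, maybe (an untested sketch; the helper name is invented
here, the body just lifts the logic quoted above):

static struct xe_tile *
xe_prefetch_resolve_tile(struct xe_vm *vm, struct xe_vma *vma, u32 region)
{
	if (region == DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC) {
		struct drm_pagemap *dpagemap =
			xe_vma_resolve_pagemap(vma, xe_device_get_root_tile(vm->xe));

		/* TODO: derive the tile from dpagemap once multi-GPU lands */
		return dpagemap ? xe_device_get_root_tile(vm->xe) : NULL;
	}

	return region ? &vm->xe->tiles[region_to_mem_type[region] - XE_PL_VRAM0] : NULL;
}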
Matt
>
> ctx.read_only = xe_vma_read_only(vma);
> ctx.devmem_possible = devmem_possible;
> @@ -2929,11 +2943,10 @@ static int prefetch_ranges(struct xe_vm *vm, struct xe_vma_op *op)
>
> /* TODO: Threading the migration */
> xa_for_each(&op->prefetch_range.range, i, svm_range) {
> - if (!region)
> + if (!tile)
> xe_svm_range_migrate_to_smem(vm, svm_range);
>
> - if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, region)) {
> - tile = &vm->xe->tiles[region_to_mem_type[region] - XE_PL_VRAM0];
> + if (xe_svm_range_needs_migrate_to_vram(svm_range, vma, !!tile)) {
> err = xe_svm_alloc_vram(tile, svm_range, &ctx);
> if (err) {
> drm_dbg(&vm->xe->drm, "VRAM allocation failed, retry from userspace, asid=%u, gpusvm=%p, errno=%pe\n",
> @@ -3001,7 +3014,8 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
> else
> region = op->prefetch.region;
>
> - xe_assert(vm->xe, region <= ARRAY_SIZE(region_to_mem_type));
> + xe_assert(vm->xe, region == DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC ||
> + region <= ARRAY_SIZE(region_to_mem_type));
>
> err = vma_lock_and_validate(exec,
> gpuva_to_vma(op->base.prefetch.va),
> @@ -3419,8 +3433,8 @@ static int vm_bind_ioctl_check_args(struct xe_device *xe, struct xe_vm *vm,
> op == DRM_XE_VM_BIND_OP_PREFETCH) ||
> XE_IOCTL_DBG(xe, prefetch_region &&
> op != DRM_XE_VM_BIND_OP_PREFETCH) ||
> - XE_IOCTL_DBG(xe, !(BIT(prefetch_region) &
> - xe->info.mem_region_mask)) ||
> + XE_IOCTL_DBG(xe, (prefetch_region != DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC &&
> + !(BIT(prefetch_region) & xe->info.mem_region_mask))) ||
> XE_IOCTL_DBG(xe, obj &&
> op == DRM_XE_VM_BIND_OP_UNMAP)) {
> err = -EINVAL;
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v6 01/26] drm/gpuvm: Pass map arguments through a struct
2025-08-07 16:43 ` [PATCH v6 01/26] drm/gpuvm: Pass map arguments through a struct Himal Prasad Ghimiray
@ 2025-08-08 5:03 ` Matthew Brost
2025-08-09 12:43 ` Danilo Krummrich
2025-08-12 16:51 ` Danilo Krummrich
2 siblings, 0 replies; 51+ messages in thread
From: Matthew Brost @ 2025-08-08 5:03 UTC (permalink / raw)
To: Himal Prasad Ghimiray
Cc: intel-xe, Thomas Hellström, Boris Brezillon,
Danilo Krummrich, Brendan King, Boris Brezillon, Caterina Shablia,
Rob Clark, dri-devel
On Thu, Aug 07, 2025 at 10:13:13PM +0530, Himal Prasad Ghimiray wrote:
> From: Boris Brezillon <boris.brezillon@collabora.com>
>
> We are about to pass more arguments to drm_gpuvm_sm_map[_ops_create](),
> so, before we do that, let's pass arguments through a struct instead
> of changing each call site every time a new optional argument is added.
>
> v5
> - Use drm_gpuva_op_map—same as drm_gpuvm_map_req
> - Rebase changes for drm_gpuvm_sm_map_exec_lock()
> - Fix kernel-docs
>
> v6
> - Use drm_gpuvm_map_req (Danilo/Matt)
>
> Cc: Danilo Krummrich <dakr@kernel.org>
> Cc: Brendan King <Brendan.King@imgtec.com>
> Cc: Boris Brezillon <bbrezillon@kernel.org>
> Cc: Caterina Shablia <caterina.shablia@collabora.com>
> Cc: Rob Clark <robin.clark@oss.qualcomm.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> Cc: <dri-devel@lists.freedesktop.org>
> Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
> Signed-off-by: Caterina Shablia <caterina.shablia@collabora.com>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/drm_gpuvm.c | 105 ++++++++++---------------
> drivers/gpu/drm/imagination/pvr_vm.c | 15 ++--
> drivers/gpu/drm/msm/msm_gem_vma.c | 25 ++++--
> drivers/gpu/drm/nouveau/nouveau_uvmm.c | 11 ++-
> drivers/gpu/drm/panthor/panthor_mmu.c | 13 ++-
> drivers/gpu/drm/xe/xe_vm.c | 13 ++-
> include/drm/drm_gpuvm.h | 20 +++--
> 7 files changed, 114 insertions(+), 88 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
> index bbc7fecb6f4a..b3a01c40001b 100644
> --- a/drivers/gpu/drm/drm_gpuvm.c
> +++ b/drivers/gpu/drm/drm_gpuvm.c
> @@ -486,13 +486,18 @@
> * u64 addr, u64 range,
> * struct drm_gem_object *obj, u64 offset)
> * {
> + * struct drm_gpuvm_map_req map_req = {
> + * .op_map.va.addr = addr,
> + * .op_map.va.range = range,
> + * .op_map.gem.obj = obj,
> + * .op_map.gem.offset = offset,
> + * };
> * struct drm_gpuva_ops *ops;
> * struct drm_gpuva_op *op
> * struct drm_gpuvm_bo *vm_bo;
> *
> * driver_lock_va_space();
> - * ops = drm_gpuvm_sm_map_ops_create(gpuvm, addr, range,
> - * obj, offset);
> + * ops = drm_gpuvm_sm_map_ops_create(gpuvm, &map_req);
> * if (IS_ERR(ops))
> * return PTR_ERR(ops);
> *
> @@ -2054,16 +2059,15 @@ EXPORT_SYMBOL_GPL(drm_gpuva_unmap);
>
> static int
> op_map_cb(const struct drm_gpuvm_ops *fn, void *priv,
> - u64 addr, u64 range,
> - struct drm_gem_object *obj, u64 offset)
> + const struct drm_gpuvm_map_req *req)
> {
> struct drm_gpuva_op op = {};
>
> op.op = DRM_GPUVA_OP_MAP;
> - op.map.va.addr = addr;
> - op.map.va.range = range;
> - op.map.gem.obj = obj;
> - op.map.gem.offset = offset;
> + op.map.va.addr = req->op_map.va.addr;
> + op.map.va.range = req->op_map.va.range;
> + op.map.gem.obj = req->op_map.gem.obj;
> + op.map.gem.offset = req->op_map.gem.offset;
>
> return fn->sm_step_map(&op, priv);
> }
> @@ -2102,17 +2106,16 @@ op_unmap_cb(const struct drm_gpuvm_ops *fn, void *priv,
> static int
> __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> const struct drm_gpuvm_ops *ops, void *priv,
> - u64 req_addr, u64 req_range,
> - struct drm_gem_object *req_obj, u64 req_offset)
> + const struct drm_gpuvm_map_req *req)
> {
> struct drm_gpuva *va, *next;
> - u64 req_end = req_addr + req_range;
> + u64 req_end = req->op_map.va.addr + req->op_map.va.range;
> int ret;
>
> - if (unlikely(!drm_gpuvm_range_valid(gpuvm, req_addr, req_range)))
> + if (unlikely(!drm_gpuvm_range_valid(gpuvm, req->op_map.va.addr, req->op_map.va.range)))
> return -EINVAL;
>
> - drm_gpuvm_for_each_va_range_safe(va, next, gpuvm, req_addr, req_end) {
> + drm_gpuvm_for_each_va_range_safe(va, next, gpuvm, req->op_map.va.addr, req_end) {
> struct drm_gem_object *obj = va->gem.obj;
> u64 offset = va->gem.offset;
> u64 addr = va->va.addr;
> @@ -2120,9 +2123,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> u64 end = addr + range;
> bool merge = !!va->gem.obj;
>
> - if (addr == req_addr) {
> - merge &= obj == req_obj &&
> - offset == req_offset;
> + if (addr == req->op_map.va.addr) {
> + merge &= obj == req->op_map.gem.obj &&
> + offset == req->op_map.gem.offset;
>
> if (end == req_end) {
> ret = op_unmap_cb(ops, priv, va, merge);
> @@ -2141,9 +2144,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> if (end > req_end) {
> struct drm_gpuva_op_map n = {
> .va.addr = req_end,
> - .va.range = range - req_range,
> + .va.range = range - req->op_map.va.range,
> .gem.obj = obj,
> - .gem.offset = offset + req_range,
> + .gem.offset = offset + req->op_map.va.range,
> };
> struct drm_gpuva_op_unmap u = {
> .va = va,
> @@ -2155,8 +2158,8 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> return ret;
> break;
> }
> - } else if (addr < req_addr) {
> - u64 ls_range = req_addr - addr;
> + } else if (addr < req->op_map.va.addr) {
> + u64 ls_range = req->op_map.va.addr - addr;
> struct drm_gpuva_op_map p = {
> .va.addr = addr,
> .va.range = ls_range,
> @@ -2165,8 +2168,8 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> };
> struct drm_gpuva_op_unmap u = { .va = va };
>
> - merge &= obj == req_obj &&
> - offset + ls_range == req_offset;
> + merge &= obj == req->op_map.gem.obj &&
> + offset + ls_range == req->op_map.gem.offset;
> u.keep = merge;
>
> if (end == req_end) {
> @@ -2189,7 +2192,7 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> .va.range = end - req_end,
> .gem.obj = obj,
> .gem.offset = offset + ls_range +
> - req_range,
> + req->op_map.va.range,
> };
>
> ret = op_remap_cb(ops, priv, &p, &n, &u);
> @@ -2197,10 +2200,10 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> return ret;
> break;
> }
> - } else if (addr > req_addr) {
> - merge &= obj == req_obj &&
> - offset == req_offset +
> - (addr - req_addr);
> + } else if (addr > req->op_map.va.addr) {
> + merge &= obj == req->op_map.gem.obj &&
> + offset == req->op_map.gem.offset +
> + (addr - req->op_map.va.addr);
>
> if (end == req_end) {
> ret = op_unmap_cb(ops, priv, va, merge);
> @@ -2236,9 +2239,7 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> }
> }
>
> - return op_map_cb(ops, priv,
> - req_addr, req_range,
> - req_obj, req_offset);
> + return op_map_cb(ops, priv, req);
> }
>
> static int
> @@ -2303,10 +2304,7 @@ __drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
> * drm_gpuvm_sm_map() - calls the &drm_gpuva_op split/merge steps
> * @gpuvm: the &drm_gpuvm representing the GPU VA space
> * @priv: pointer to a driver private data structure
> - * @req_addr: the start address of the new mapping
> - * @req_range: the range of the new mapping
> - * @req_obj: the &drm_gem_object to map
> - * @req_offset: the offset within the &drm_gem_object
> + * @req: ptr to struct drm_gpuvm_map_req
> *
> * This function iterates the given range of the GPU VA space. It utilizes the
> * &drm_gpuvm_ops to call back into the driver providing the split and merge
> @@ -2333,8 +2331,7 @@ __drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm,
> */
> int
> drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
> - u64 req_addr, u64 req_range,
> - struct drm_gem_object *req_obj, u64 req_offset)
> + const struct drm_gpuvm_map_req *req)
> {
> const struct drm_gpuvm_ops *ops = gpuvm->ops;
>
> @@ -2343,9 +2340,7 @@ drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
> ops->sm_step_unmap)))
> return -EINVAL;
>
> - return __drm_gpuvm_sm_map(gpuvm, ops, priv,
> - req_addr, req_range,
> - req_obj, req_offset);
> + return __drm_gpuvm_sm_map(gpuvm, ops, priv, req);
> }
> EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map);
>
> @@ -2421,10 +2416,7 @@ static const struct drm_gpuvm_ops lock_ops = {
> * @gpuvm: the &drm_gpuvm representing the GPU VA space
> * @exec: the &drm_exec locking context
> * @num_fences: for newly mapped objects, the # of fences to reserve
> - * @req_addr: the start address of the range to unmap
> - * @req_range: the range of the mappings to unmap
> - * @req_obj: the &drm_gem_object to map
> - * @req_offset: the offset within the &drm_gem_object
> + * @req: ptr to drm_gpuvm_map_req struct
> *
> * This function locks (drm_exec_lock_obj()) objects that will be unmapped/
> * remapped, and locks+prepares (drm_exec_prepare_object()) objects that
> @@ -2445,9 +2437,7 @@ static const struct drm_gpuvm_ops lock_ops = {
> * ret = drm_gpuvm_sm_unmap_exec_lock(gpuvm, &exec, op->addr, op->range);
> * break;
> * case DRIVER_OP_MAP:
> - * ret = drm_gpuvm_sm_map_exec_lock(gpuvm, &exec, num_fences,
> - * op->addr, op->range,
> - * obj, op->obj_offset);
> + * ret = drm_gpuvm_sm_map_exec_lock(gpuvm, &exec, num_fences, &req);
> * break;
> * }
> *
> @@ -2478,18 +2468,15 @@ static const struct drm_gpuvm_ops lock_ops = {
> int
> drm_gpuvm_sm_map_exec_lock(struct drm_gpuvm *gpuvm,
> struct drm_exec *exec, unsigned int num_fences,
> - u64 req_addr, u64 req_range,
> - struct drm_gem_object *req_obj, u64 req_offset)
> + struct drm_gpuvm_map_req *req)
> {
> - if (req_obj) {
> - int ret = drm_exec_prepare_obj(exec, req_obj, num_fences);
> + if (req->op_map.gem.obj) {
> + int ret = drm_exec_prepare_obj(exec, req->op_map.gem.obj, num_fences);
> if (ret)
> return ret;
> }
>
> - return __drm_gpuvm_sm_map(gpuvm, &lock_ops, exec,
> - req_addr, req_range,
> - req_obj, req_offset);
> + return __drm_gpuvm_sm_map(gpuvm, &lock_ops, exec, req);
>
> }
> EXPORT_SYMBOL_GPL(drm_gpuvm_sm_map_exec_lock);
> @@ -2611,10 +2598,7 @@ static const struct drm_gpuvm_ops gpuvm_list_ops = {
> /**
> * drm_gpuvm_sm_map_ops_create() - creates the &drm_gpuva_ops to split and merge
> * @gpuvm: the &drm_gpuvm representing the GPU VA space
> - * @req_addr: the start address of the new mapping
> - * @req_range: the range of the new mapping
> - * @req_obj: the &drm_gem_object to map
> - * @req_offset: the offset within the &drm_gem_object
> + * @req: map request arguments
> *
> * This function creates a list of operations to perform splitting and merging
> * of existent mapping(s) with the newly requested one.
> @@ -2642,8 +2626,7 @@ static const struct drm_gpuvm_ops gpuvm_list_ops = {
> */
> struct drm_gpuva_ops *
> drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
> - u64 req_addr, u64 req_range,
> - struct drm_gem_object *req_obj, u64 req_offset)
> + const struct drm_gpuvm_map_req *req)
> {
> struct drm_gpuva_ops *ops;
> struct {
> @@ -2661,9 +2644,7 @@ drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
> args.vm = gpuvm;
> args.ops = ops;
>
> - ret = __drm_gpuvm_sm_map(gpuvm, &gpuvm_list_ops, &args,
> - req_addr, req_range,
> - req_obj, req_offset);
> + ret = __drm_gpuvm_sm_map(gpuvm, &gpuvm_list_ops, &args, req);
> if (ret)
> goto err_free_ops;
>
> diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c
> index 2896fa7501b1..0f6b4cdb5fd8 100644
> --- a/drivers/gpu/drm/imagination/pvr_vm.c
> +++ b/drivers/gpu/drm/imagination/pvr_vm.c
> @@ -185,12 +185,17 @@ struct pvr_vm_bind_op {
> static int pvr_vm_bind_op_exec(struct pvr_vm_bind_op *bind_op)
> {
> switch (bind_op->type) {
> - case PVR_VM_BIND_TYPE_MAP:
> + case PVR_VM_BIND_TYPE_MAP: {
> + const struct drm_gpuvm_map_req map_req = {
> + .op_map.va.addr = bind_op->device_addr,
> + .op_map.va.range = bind_op->size,
> + .op_map.gem.obj = gem_from_pvr_gem(bind_op->pvr_obj),
> + .op_map.gem.offset = bind_op->offset,
> + };
> +
> return drm_gpuvm_sm_map(&bind_op->vm_ctx->gpuvm_mgr,
> - bind_op, bind_op->device_addr,
> - bind_op->size,
> - gem_from_pvr_gem(bind_op->pvr_obj),
> - bind_op->offset);
> + bind_op, &map_req);
> + }
>
> case PVR_VM_BIND_TYPE_UNMAP:
> return drm_gpuvm_sm_unmap(&bind_op->vm_ctx->gpuvm_mgr,
> diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
> index 3cd8562a5109..2ca408c40369 100644
> --- a/drivers/gpu/drm/msm/msm_gem_vma.c
> +++ b/drivers/gpu/drm/msm/msm_gem_vma.c
> @@ -1172,10 +1172,17 @@ vm_bind_job_lock_objects(struct msm_vm_bind_job *job, struct drm_exec *exec)
> break;
> case MSM_VM_BIND_OP_MAP:
> case MSM_VM_BIND_OP_MAP_NULL:
> - ret = drm_gpuvm_sm_map_exec_lock(job->vm, exec, 1,
> - op->iova, op->range,
> - op->obj, op->obj_offset);
> + {
> + struct drm_gpuvm_map_req map_req = {
> + .op_map.va.addr = op->iova,
> + .op_map.va.range = op->range,
> + .op_map.gem.obj = op->obj,
> + .op_map.gem.offset = op->obj_offset,
> + };
> +
> + ret = drm_gpuvm_sm_map_exec_lock(job->vm, exec, 1, &map_req);
> break;
> + }
> default:
> /*
> * lookup_op() should have already thrown an error for
> @@ -1283,9 +1290,17 @@ vm_bind_job_prepare(struct msm_vm_bind_job *job)
> arg.flags |= MSM_VMA_DUMP;
> fallthrough;
> case MSM_VM_BIND_OP_MAP_NULL:
> - ret = drm_gpuvm_sm_map(job->vm, &arg, op->iova,
> - op->range, op->obj, op->obj_offset);
> + {
> + struct drm_gpuvm_map_req map_req = {
> + .op_map.va.addr = op->iova,
> + .op_map.va.range = op->range,
> + .op_map.gem.obj = op->obj,
> + .op_map.gem.offset = op->obj_offset,
> + };
> +
> + ret = drm_gpuvm_sm_map(job->vm, &arg, &map_req);
> break;
> + }
> default:
> /*
> * lookup_op() should have already thrown an error for
> diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> index ddfc46bc1b3e..92f87520eeb8 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> @@ -1276,6 +1276,12 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job,
> break;
> case OP_MAP: {
> struct nouveau_uvma_region *reg;
> + struct drm_gpuvm_map_req map_req = {
> + .op_map.va.addr = op->va.addr,
> + .op_map.va.range = op->va.range,
> + .op_map.gem.obj = op->gem.obj,
> + .op_map.gem.offset = op->gem.offset,
> + };
>
> reg = nouveau_uvma_region_find_first(uvmm,
> op->va.addr,
> @@ -1301,10 +1307,7 @@ nouveau_uvmm_bind_job_submit(struct nouveau_job *job,
> }
>
> op->ops = drm_gpuvm_sm_map_ops_create(&uvmm->base,
> - op->va.addr,
> - op->va.range,
> - op->gem.obj,
> - op->gem.offset);
> + &map_req);
> if (IS_ERR(op->ops)) {
> ret = PTR_ERR(op->ops);
> goto unwind_continue;
> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> index 4140f697ba5a..5ed4573b3a6b 100644
> --- a/drivers/gpu/drm/panthor/panthor_mmu.c
> +++ b/drivers/gpu/drm/panthor/panthor_mmu.c
> @@ -2169,15 +2169,22 @@ panthor_vm_exec_op(struct panthor_vm *vm, struct panthor_vm_op_ctx *op,
> mutex_lock(&vm->op_lock);
> vm->op_ctx = op;
> switch (op_type) {
> - case DRM_PANTHOR_VM_BIND_OP_TYPE_MAP:
> + case DRM_PANTHOR_VM_BIND_OP_TYPE_MAP: {
> + const struct drm_gpuvm_map_req map_req = {
> + .op_map.va.addr = op->va.addr,
> + .op_map.va.range = op->va.range,
> + .op_map.gem.obj = op->map.vm_bo->obj,
> + .op_map.gem.offset = op->map.bo_offset,
> + };
> +
> if (vm->unusable) {
> ret = -EINVAL;
> break;
> }
>
> - ret = drm_gpuvm_sm_map(&vm->base, vm, op->va.addr, op->va.range,
> - op->map.vm_bo->obj, op->map.bo_offset);
> + ret = drm_gpuvm_sm_map(&vm->base, vm, &map_req);
> break;
> + }
>
> case DRM_PANTHOR_VM_BIND_OP_TYPE_UNMAP:
> ret = drm_gpuvm_sm_unmap(&vm->base, vm, op->va.addr, op->va.range);
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 432ea325677d..9fcc52032a1d 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -2316,10 +2316,17 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
>
> switch (operation) {
> case DRM_XE_VM_BIND_OP_MAP:
> - case DRM_XE_VM_BIND_OP_MAP_USERPTR:
> - ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, addr, range,
> - obj, bo_offset_or_userptr);
> + case DRM_XE_VM_BIND_OP_MAP_USERPTR: {
> + struct drm_gpuvm_map_req map_req = {
> + .op_map.va.addr = addr,
> + .op_map.va.range = range,
> + .op_map.gem.obj = obj,
> + .op_map.gem.offset = bo_offset_or_userptr,
> + };
> +
> + ops = drm_gpuvm_sm_map_ops_create(&vm->gpuvm, &map_req);
> break;
> + }
> case DRM_XE_VM_BIND_OP_UNMAP:
> ops = drm_gpuvm_sm_unmap_ops_create(&vm->gpuvm, addr, range);
> break;
> diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
> index 274532facfd6..3cf0a84b8b08 100644
> --- a/include/drm/drm_gpuvm.h
> +++ b/include/drm/drm_gpuvm.h
> @@ -1058,10 +1058,20 @@ struct drm_gpuva_ops {
> */
> #define drm_gpuva_next_op(op) list_next_entry(op, entry)
>
> +/**
> + * struct drm_gpuvm_map_req - arguments passed to drm_gpuvm_sm_map[_ops_create]()
> + */
> +struct drm_gpuvm_map_req {
> + /**
> + * @op_map: struct drm_gpuva_op_map
> + */
> + struct drm_gpuva_op_map op_map;
> +};
> +
> struct drm_gpuva_ops *
> drm_gpuvm_sm_map_ops_create(struct drm_gpuvm *gpuvm,
> - u64 addr, u64 range,
> - struct drm_gem_object *obj, u64 offset);
> + const struct drm_gpuvm_map_req *req);
> +
> struct drm_gpuva_ops *
> drm_gpuvm_sm_unmap_ops_create(struct drm_gpuvm *gpuvm,
> u64 addr, u64 range);
> @@ -1205,16 +1215,14 @@ struct drm_gpuvm_ops {
> };
>
> int drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
> - u64 addr, u64 range,
> - struct drm_gem_object *obj, u64 offset);
> + const struct drm_gpuvm_map_req *req);
>
> int drm_gpuvm_sm_unmap(struct drm_gpuvm *gpuvm, void *priv,
> u64 addr, u64 range);
>
> int drm_gpuvm_sm_map_exec_lock(struct drm_gpuvm *gpuvm,
> struct drm_exec *exec, unsigned int num_fences,
> - u64 req_addr, u64 req_range,
> - struct drm_gem_object *obj, u64 offset);
> + struct drm_gpuvm_map_req *req);
>
> int drm_gpuvm_sm_unmap_exec_lock(struct drm_gpuvm *gpuvm, struct drm_exec *exec,
> u64 req_addr, u64 req_range);
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v6 03/26] drm/gpuvm: Support flags in drm_gpuvm_map_req
2025-08-07 16:43 ` [PATCH v6 03/26] drm/gpuvm: Support flags in drm_gpuvm_map_req Himal Prasad Ghimiray
@ 2025-08-08 5:04 ` Matthew Brost
2025-08-09 12:46 ` Danilo Krummrich
1 sibling, 0 replies; 51+ messages in thread
From: Matthew Brost @ 2025-08-08 5:04 UTC (permalink / raw)
To: Himal Prasad Ghimiray
Cc: intel-xe, Thomas Hellström, Danilo Krummrich,
Boris Brezillon, Caterina Shablia, dri-devel
On Thu, Aug 07, 2025 at 10:13:15PM +0530, Himal Prasad Ghimiray wrote:
> This change adds support for passing flags to drm_gpuvm_sm_map() and
> sm_map_ops_create(), enabling future extensions that affect split/merge
> logic in drm_gpuvm.
>
> v2
> - Move flag to drm_gpuvm_map_req
>
> Cc: Danilo Krummrich <dakr@kernel.org>
> Cc: Boris Brezillon <bbrezillon@kernel.org>
> Cc: Caterina Shablia <caterina.shablia@collabora.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> Cc: <dri-devel@lists.freedesktop.org>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> include/drm/drm_gpuvm.h | 12 ++++++++++++
> 1 file changed, 12 insertions(+)
>
> diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
> index cbb9b6519462..116f77abd570 100644
> --- a/include/drm/drm_gpuvm.h
> +++ b/include/drm/drm_gpuvm.h
> @@ -1049,6 +1049,13 @@ struct drm_gpuva_ops {
> */
> #define drm_gpuva_next_op(op) list_next_entry(op, entry)
>
> +enum drm_gpuvm_sm_map_ops_flags {
> + /**
> + * %DRM_GPUVM_SM_MAP_OPS_FLAG_NONE: DEFAULT sm_map ops
> + */
> + DRM_GPUVM_SM_MAP_OPS_FLAG_NONE = 0,
> +};
> +
> /**
> * struct drm_gpuvm_map_req - arguments passed to drm_gpuvm_sm_map[_ops_create]()
> */
> @@ -1057,6 +1064,11 @@ struct drm_gpuvm_map_req {
> * @op_map: struct drm_gpuva_op_map
> */
> struct drm_gpuva_op_map op_map;
> +
> + /**
> + * @flags: drm_gpuvm_sm_map_ops_flags for this mapping request
> + */
> + enum drm_gpuvm_sm_map_ops_flags flags;
> };
>
> struct drm_gpuva_ops *
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v6 04/26] drm/gpuvm: Introduce DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE flag
2025-08-07 16:43 ` [PATCH v6 04/26] drm/gpuvm: Introduce DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE flag Himal Prasad Ghimiray
@ 2025-08-08 5:20 ` Matthew Brost
2025-08-09 13:23 ` Danilo Krummrich
2025-08-12 16:58 ` Danilo Krummrich
2 siblings, 0 replies; 51+ messages in thread
From: Matthew Brost @ 2025-08-08 5:20 UTC (permalink / raw)
To: Himal Prasad Ghimiray
Cc: intel-xe, Thomas Hellström, Danilo Krummrich,
Boris Brezillon, dri-devel
On Thu, Aug 07, 2025 at 10:13:16PM +0530, Himal Prasad Ghimiray wrote:
> - DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE: This flag is used by
> drm_gpuvm_sm_map_ops_create to iterate over GPUVMAs in the
> user-provided range and split an existing non-GEM object VMA if the
> start or end of the input range lies within it. The operations can
> create up to 2 REMAPs and 2 MAPs. The purpose of this operation is to be
> used by the Xe driver to assign attributes to GPUVMAs within the
> user-defined range. Unlike drm_gpuvm_sm_map_ops_flags in default mode,
> the operation with this flag will never produce UNMAPs or
> merges, and may finish without any final map operation.
>
> v2
> - use drm_gpuvm_sm_map_ops_create with flags instead of defining new
> ops_create (Danilo)
> - Add doc (Danilo)
>
> v3
> - Fix doc
> - Fix unmapping check
>
> v4
> - Fix mapping for non madvise ops
>
> v5
> - Fix mapping (Matthew Brost)
> - Rebase on top of struct changes
>
> v6
> - flag moved to map_req
>
I’ll give this an RB—it looks right to me, though it’s a bit hard to be
certain. Before merge, I’d like to see xe_exec_system_allocator add
section(s) that call madvise() on each newly allocated memory region; that
should create enough random fragmentation—particularly with threaded
sections—in the VMA state to be confident this is correct.
With that:
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
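For reference, a minimal caller-side sketch of the new flag (the start/size
variables and the calling context are assumed, not from the patch):

	struct drm_gpuvm_map_req req = {
		.op_map.va.addr = start,	/* user-supplied madvise range */
		.op_map.va.range = size,
		.flags = DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE,
	};
	struct drm_gpuva_ops *ops;

	ops = drm_gpuvm_sm_map_ops_create(gpuvm, &req);

No GEM object is set since this mode only splits non-GEM (CPU address
mirror) mappings, and the resulting ops contain only REMAPs/MAPs, never
UNMAPs.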
> Cc: Danilo Krummrich <dakr@kernel.org>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Boris Brezillon <bbrezillon@kernel.org>
> Cc: <dri-devel@lists.freedesktop.org>
> Signed-off-by: Himal Prasad Ghimiray<himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/drm_gpuvm.c | 87 +++++++++++++++++++++++++++++++------
> include/drm/drm_gpuvm.h | 11 +++++
> 2 files changed, 84 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
> index b3a01c40001b..d8f5f594a415 100644
> --- a/drivers/gpu/drm/drm_gpuvm.c
> +++ b/drivers/gpu/drm/drm_gpuvm.c
> @@ -2110,6 +2110,8 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> {
> struct drm_gpuva *va, *next;
> u64 req_end = req->op_map.va.addr + req->op_map.va.range;
> + bool is_madvise_ops = (req->flags & DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE);
> + bool needs_map = !is_madvise_ops;
> int ret;
>
> if (unlikely(!drm_gpuvm_range_valid(gpuvm, req->op_map.va.addr, req->op_map.va.range)))
> @@ -2122,26 +2124,35 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> u64 range = va->va.range;
> u64 end = addr + range;
> bool merge = !!va->gem.obj;
> + bool skip_madvise_ops = is_madvise_ops && merge;
>
> + needs_map = !is_madvise_ops;
> if (addr == req->op_map.va.addr) {
> merge &= obj == req->op_map.gem.obj &&
> offset == req->op_map.gem.offset;
>
> if (end == req_end) {
> - ret = op_unmap_cb(ops, priv, va, merge);
> - if (ret)
> - return ret;
> + if (!is_madvise_ops) {
> + ret = op_unmap_cb(ops, priv, va, merge);
> + if (ret)
> + return ret;
> + }
> break;
> }
>
> if (end < req_end) {
> - ret = op_unmap_cb(ops, priv, va, merge);
> - if (ret)
> - return ret;
> + if (!is_madvise_ops) {
> + ret = op_unmap_cb(ops, priv, va, merge);
> + if (ret)
> + return ret;
> + }
> continue;
> }
>
> if (end > req_end) {
> + if (skip_madvise_ops)
> + break;
> +
> struct drm_gpuva_op_map n = {
> .va.addr = req_end,
> .va.range = range - req->op_map.va.range,
> @@ -2156,6 +2167,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> ret = op_remap_cb(ops, priv, NULL, &n, &u);
> if (ret)
> return ret;
> +
> + if (is_madvise_ops)
> + needs_map = true;
> break;
> }
> } else if (addr < req->op_map.va.addr) {
> @@ -2173,20 +2187,45 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> u.keep = merge;
>
> if (end == req_end) {
> + if (skip_madvise_ops)
> + break;
> +
> ret = op_remap_cb(ops, priv, &p, NULL, &u);
> if (ret)
> return ret;
> +
> + if (is_madvise_ops)
> + needs_map = true;
> +
> break;
> }
>
> if (end < req_end) {
> + if (skip_madvise_ops)
> + continue;
> +
> ret = op_remap_cb(ops, priv, &p, NULL, &u);
> if (ret)
> return ret;
> +
> + if (is_madvise_ops) {
> + struct drm_gpuvm_map_req map_req = {
> + .op_map.va.addr = req->op_map.va.addr,
> + .op_map.va.range = end - req->op_map.va.addr,
> + };
> +
> + ret = op_map_cb(ops, priv, &map_req);
> + if (ret)
> + return ret;
> + }
> +
> continue;
> }
>
> if (end > req_end) {
> + if (skip_madvise_ops)
> + break;
> +
> struct drm_gpuva_op_map n = {
> .va.addr = req_end,
> .va.range = end - req_end,
> @@ -2198,6 +2237,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> ret = op_remap_cb(ops, priv, &p, &n, &u);
> if (ret)
> return ret;
> +
> + if (is_madvise_ops)
> + needs_map = true;
> break;
> }
> } else if (addr > req->op_map.va.addr) {
> @@ -2206,20 +2248,29 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> (addr - req->op_map.va.addr);
>
> if (end == req_end) {
> - ret = op_unmap_cb(ops, priv, va, merge);
> - if (ret)
> - return ret;
> + if (!is_madvise_ops) {
> + ret = op_unmap_cb(ops, priv, va, merge);
> + if (ret)
> + return ret;
> + }
> +
> break;
> }
>
> if (end < req_end) {
> - ret = op_unmap_cb(ops, priv, va, merge);
> - if (ret)
> - return ret;
> + if (!is_madvise_ops) {
> + ret = op_unmap_cb(ops, priv, va, merge);
> + if (ret)
> + return ret;
> + }
> +
> continue;
> }
>
> if (end > req_end) {
> + if (skip_madvise_ops)
> + break;
> +
> struct drm_gpuva_op_map n = {
> .va.addr = req_end,
> .va.range = end - req_end,
> @@ -2234,12 +2285,20 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> ret = op_remap_cb(ops, priv, NULL, &n, &u);
> if (ret)
> return ret;
> +
> + if (is_madvise_ops) {
> + struct drm_gpuvm_map_req map_req = {
> + .op_map.va.addr = addr,
> + .op_map.va.range = req_end - addr,
> + };
> +
> + return op_map_cb(ops, priv, &map_req);
> + }
> break;
> }
> }
> }
> -
> - return op_map_cb(ops, priv, req);
> + return needs_map ? op_map_cb(ops, priv, req) : 0;
> }
>
> static int
> diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
> index 116f77abd570..fa2b74a54534 100644
> --- a/include/drm/drm_gpuvm.h
> +++ b/include/drm/drm_gpuvm.h
> @@ -1054,6 +1054,17 @@ enum drm_gpuvm_sm_map_ops_flags {
> * %DRM_GPUVM_SM_MAP_OPS_FLAG_NONE: DEFAULT sm_map ops
> */
> DRM_GPUVM_SM_MAP_OPS_FLAG_NONE = 0,
> +
> + /**
> + * @DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE: This flag is used by
> + * drm_gpuvm_sm_map_ops_create to iterate over GPUVMAs in the
> + * user-provided range and split the existing non-GEM object VMA if the
> + * start or end of the input range lies within it. The operations can
> + * create up to 2 REMAPs and 2 MAPs. Unlike drm_gpuvm_sm_map_ops_flags
> + * in default mode, the operation with this flag will never have UNMAPs
> + * and merges, and can be without any final operations.
> + */
> + DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE = BIT(0),
> };
>
> /**
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v6 14/26] drm/xe/svm : Add svm ranges migration policy on atomic access
2025-08-07 16:43 ` [PATCH v6 14/26] drm/xe/svm : Add svm ranges migration policy on atomic access Himal Prasad Ghimiray
@ 2025-08-08 5:42 ` Matthew Brost
0 siblings, 0 replies; 51+ messages in thread
From: Matthew Brost @ 2025-08-08 5:42 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, Thomas Hellström
On Thu, Aug 07, 2025 at 10:13:26PM +0530, Himal Prasad Ghimiray wrote:
Nit: you have an extra space here, between 'svm' and ':':
'drm/xe/svm : Add svm ranges migration policy on atomic access'
Otherwise LGTM:
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> If the platform does not support atomic access on system memory, and the
> ranges are in system memory, but the user requires atomic accesses on
> the VMA, then migrate the ranges to VRAM. Apply this policy for prefetch
> operations as well.
>
> v2
> - Drop unnecessary vm_dbg
>
> v3 (Matthew Brost)
> - fix atomic policy
> - prefetch shouldn't have any impact of atomic
> - bo can be accessed from vma, avoid duplicate parameter
>
> v4 (Matthew Brost)
> - Remove TODO comment
> - Fix comment
> - Dont allow gpu atomic ops when user is setting atomic attr as CPU
>
> v5 (Matthew Brost)
> - Fix atomic checks
> - Add userptr checks
>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_pt.c | 23 ++++++++------
> drivers/gpu/drm/xe/xe_svm.c | 50 ++++++++++++++++++------------
> drivers/gpu/drm/xe/xe_vm.c | 39 +++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_vm.h | 2 ++
> drivers/gpu/drm/xe/xe_vm_madvise.c | 15 ++++++++-
> 5 files changed, 99 insertions(+), 30 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
> index 593fef438cd8..6f5b384991cd 100644
> --- a/drivers/gpu/drm/xe/xe_pt.c
> +++ b/drivers/gpu/drm/xe/xe_pt.c
> @@ -640,28 +640,31 @@ static const struct xe_pt_walk_ops xe_pt_stage_bind_ops = {
> * - In all other cases device atomics will be disabled with AE=0 until an application
> * request differently using a ioctl like madvise.
> */
> -static bool xe_atomic_for_vram(struct xe_vm *vm)
> +static bool xe_atomic_for_vram(struct xe_vm *vm, struct xe_vma *vma)
> {
> + if (vma->attr.atomic_access == DRM_XE_ATOMIC_CPU)
> + return false;
> +
> return true;
> }
>
> -static bool xe_atomic_for_system(struct xe_vm *vm, struct xe_bo *bo)
> +static bool xe_atomic_for_system(struct xe_vm *vm, struct xe_vma *vma)
> {
> struct xe_device *xe = vm->xe;
> + struct xe_bo *bo = xe_vma_bo(vma);
>
> - if (!xe->info.has_device_atomics_on_smem)
> + if (!xe->info.has_device_atomics_on_smem ||
> + vma->attr.atomic_access == DRM_XE_ATOMIC_CPU)
> return false;
>
> + if (vma->attr.atomic_access == DRM_XE_ATOMIC_DEVICE)
> + return true;
> +
> /*
> * If a SMEM+LMEM allocation is backed by SMEM, a device
> * atomics will cause a gpu page fault and which then
> * gets migrated to LMEM, bind such allocations with
> * device atomics enabled.
> - *
> - * TODO: Revisit this. Perhaps add something like a
> - * fault_on_atomics_in_system UAPI flag.
> - * Note that this also prohibits GPU atomics in LR mode for
> - * userptr and system memory on DGFX.
> */
> return (!IS_DGFX(xe) || (!xe_vm_in_lr_mode(vm) ||
> (bo && xe_bo_has_single_placement(bo))));
> @@ -744,8 +747,8 @@ xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma *vma,
> goto walk_pt;
>
> if (vma->gpuva.flags & XE_VMA_ATOMIC_PTE_BIT) {
> - xe_walk.default_vram_pte = xe_atomic_for_vram(vm) ? XE_USM_PPGTT_PTE_AE : 0;
> - xe_walk.default_system_pte = xe_atomic_for_system(vm, bo) ?
> + xe_walk.default_vram_pte = xe_atomic_for_vram(vm, vma) ? XE_USM_PPGTT_PTE_AE : 0;
> + xe_walk.default_system_pte = xe_atomic_for_system(vm, vma) ?
> XE_USM_PPGTT_PTE_AE : 0;
> }
>
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index c2306000f15e..c660ccb21945 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -789,22 +789,9 @@ bool xe_svm_range_needs_migrate_to_vram(struct xe_svm_range *range, struct xe_vm
> return true;
> }
>
> -/**
> - * xe_svm_handle_pagefault() - SVM handle page fault
> - * @vm: The VM.
> - * @vma: The CPU address mirror VMA.
> - * @gt: The gt upon the fault occurred.
> - * @fault_addr: The GPU fault address.
> - * @atomic: The fault atomic access bit.
> - *
> - * Create GPU bindings for a SVM page fault. Optionally migrate to device
> - * memory.
> - *
> - * Return: 0 on success, negative error code on error.
> - */
> -int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
> - struct xe_gt *gt, u64 fault_addr,
> - bool atomic)
> +static int __xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
> + struct xe_gt *gt, u64 fault_addr,
> + bool need_vram)
> {
> struct drm_gpusvm_ctx ctx = {
> .read_only = xe_vma_read_only(vma),
> @@ -812,9 +799,8 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
> IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
> .check_pages_threshold = IS_DGFX(vm->xe) &&
> IS_ENABLED(CONFIG_DRM_XE_PAGEMAP) ? SZ_64K : 0,
> - .devmem_only = atomic && IS_DGFX(vm->xe) &&
> - IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
> - .timeslice_ms = atomic && IS_DGFX(vm->xe) &&
> + .devmem_only = need_vram && IS_ENABLED(CONFIG_DRM_XE_PAGEMAP),
> + .timeslice_ms = need_vram && IS_DGFX(vm->xe) &&
> IS_ENABLED(CONFIG_DRM_XE_PAGEMAP) ?
> vm->xe->atomic_svm_timeslice_ms : 0,
> };
> @@ -917,6 +903,32 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
> return err;
> }
>
> +/**
> + * xe_svm_handle_pagefault() - SVM handle page fault
> + * @vm: The VM.
> + * @vma: The CPU address mirror VMA.
> + * @gt: The gt upon the fault occurred.
> + * @fault_addr: The GPU fault address.
> + * @atomic: The fault atomic access bit.
> + *
> + * Create GPU bindings for a SVM page fault. Optionally migrate to device
> + * memory.
> + *
> + * Return: 0 on success, negative error code on error.
> + */
> +int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
> + struct xe_gt *gt, u64 fault_addr,
> + bool atomic)
> +{
> + int need_vram;
> +
> + need_vram = xe_vma_need_vram_for_atomic(vm->xe, vma, atomic);
> + if (need_vram < 0)
> + return need_vram;
> +
> + return __xe_svm_handle_pagefault(vm, vma, gt, fault_addr, need_vram ? true : false);
> +}
> +
> /**
> * xe_svm_has_mapping() - SVM has mappings
> * @vm: The VM.
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 376850a22be2..aa8d4c4fe0f0 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -4183,6 +4183,45 @@ void xe_vm_snapshot_free(struct xe_vm_snapshot *snap)
> kvfree(snap);
> }
>
> +/**
> + * xe_vma_need_vram_for_atomic - Check if VMA needs VRAM migration for atomic operations
> + * @xe: Pointer to the XE device structure
> + * @vma: Pointer to the virtual memory area (VMA) structure
> + * @is_atomic: In pagefault path and atomic operation
> + *
> + * This function determines whether the given VMA needs to be migrated to
> + * VRAM in order to do atomic GPU operation.
> + *
> + * Return:
> + * 1 - Migration to VRAM is required
> + * 0 - Migration is not required
> + * -EACCES - Invalid access for atomic memory attr
> + *
> + */
> +int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool is_atomic)
> +{
> + if (!IS_DGFX(xe) || !is_atomic)
> + return 0;
> +
> + /*
> + * NOTE: The checks implemented here are platform-specific. For
> + * instance, on a device supporting CXL atomics, these would ideally
> + * work universally without additional handling.
> + */
> + switch (vma->attr.atomic_access) {
> + case DRM_XE_ATOMIC_DEVICE:
> + return !xe->info.has_device_atomics_on_smem;
> +
> + case DRM_XE_ATOMIC_CPU:
> + return -EACCES;
> +
> + case DRM_XE_ATOMIC_UNDEFINED:
> + case DRM_XE_ATOMIC_GLOBAL:
> + default:
> + return 1;
> + }
> +}
> +
> /**
> * xe_vm_alloc_madvise_vma - Allocate VMA's with madvise ops
> * @vm: Pointer to the xe_vm structure
> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
> index 0d6b08cc4163..05ac3118d9f4 100644
> --- a/drivers/gpu/drm/xe/xe_vm.h
> +++ b/drivers/gpu/drm/xe/xe_vm.h
> @@ -171,6 +171,8 @@ static inline bool xe_vma_is_userptr(struct xe_vma *vma)
>
> struct xe_vma *xe_vm_find_vma_by_addr(struct xe_vm *vm, u64 page_addr);
>
> +int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool is_atomic);
> +
> int xe_vm_alloc_madvise_vma(struct xe_vm *vm, uint64_t addr, uint64_t size);
>
> /**
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
> index b861c3349b0a..95258bb6a8ee 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> @@ -85,7 +85,20 @@ static void madvise_atomic(struct xe_device *xe, struct xe_vm *vm,
> struct xe_vma **vmas, int num_vmas,
> struct drm_xe_madvise *op)
> {
> - /* Implementation pending */
> + int i;
> +
> + xe_assert(vm->xe, op->type == DRM_XE_MEM_RANGE_ATTR_ATOMIC);
> + xe_assert(vm->xe, op->atomic.val <= DRM_XE_ATOMIC_CPU);
> +
> + for (i = 0; i < num_vmas; i++) {
> + if ((xe_vma_is_userptr(vmas[i]) &&
> + !(op->atomic.val == DRM_XE_ATOMIC_DEVICE &&
> + xe->info.has_device_atomics_on_smem)))
> + continue;
> +
> + vmas[i]->attr.atomic_access = op->atomic.val;
> + /*TODO: handle bo backed vmas */
> + }
> }
>
> static void madvise_pat_index(struct xe_device *xe, struct xe_vm *vm,
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v6 16/26] drm/xe/pat: Add helper for compression mode of pat index
2025-08-07 16:43 ` [PATCH v6 16/26] drm/xe/pat: Add helper for compression mode of pat index Himal Prasad Ghimiray
@ 2025-08-08 5:47 ` Matthew Brost
0 siblings, 0 replies; 51+ messages in thread
From: Matthew Brost @ 2025-08-08 5:47 UTC (permalink / raw)
To: Himal Prasad Ghimiray; +Cc: intel-xe, Thomas Hellström
On Thu, Aug 07, 2025 at 10:13:28PM +0530, Himal Prasad Ghimiray wrote:
> From: Matthew Brost <matthew.brost@intel.com>
>
> Add `xe_pat_index_get_comp_mode()` to extract compression mode from a
> PAT index.
>
I'ed say just leave this one out, the initial patch [1] to disable this at
the bind IOCTL is still under review and we haven't landed on COH mode
issues or if we just nack the IOCTL for invalidate pat_index quite yet.
We have ~5 weeks until the 6.17 PR and close a on solution, so I'd say
let's get this series in with the same functional as bind IOCTLs for
now.
Matt
[1] https://patchwork.freedesktop.org/series/152547/
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> drivers/gpu/drm/xe/xe_pat.c | 10 ++++++++++
> drivers/gpu/drm/xe/xe_pat.h | 10 ++++++++++
> 2 files changed, 20 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_pat.c b/drivers/gpu/drm/xe/xe_pat.c
> index 2e7cb99ae87a..ac1767c812aa 100644
> --- a/drivers/gpu/drm/xe/xe_pat.c
> +++ b/drivers/gpu/drm/xe/xe_pat.c
> @@ -154,6 +154,16 @@ static const struct xe_pat_table_entry xe2_pat_table[] = {
> static const struct xe_pat_table_entry xe2_pat_ats = XE2_PAT( 0, 0, 0, 0, 3, 3 );
> static const struct xe_pat_table_entry xe2_pat_pta = XE2_PAT( 0, 0, 0, 0, 3, 0 );
>
> +bool xe_pat_index_get_comp_mode(struct xe_device *xe, u16 pat_index)
> +{
> + WARN_ON(pat_index >= xe->pat.n_entries);
> +
> + if (xe->pat.table != xe2_pat_table)
> + return false;
> +
> + return xe->pat.table[pat_index].value & XE2_COMP_EN;
> +}
> +
> u16 xe_pat_index_get_coh_mode(struct xe_device *xe, u16 pat_index)
> {
> WARN_ON(pat_index >= xe->pat.n_entries);
> diff --git a/drivers/gpu/drm/xe/xe_pat.h b/drivers/gpu/drm/xe/xe_pat.h
> index fa0dfbe525cd..8be2856a73af 100644
> --- a/drivers/gpu/drm/xe/xe_pat.h
> +++ b/drivers/gpu/drm/xe/xe_pat.h
> @@ -58,4 +58,14 @@ void xe_pat_dump(struct xe_gt *gt, struct drm_printer *p);
> */
> u16 xe_pat_index_get_coh_mode(struct xe_device *xe, u16 pat_index);
>
> +/**
> + * xe_pat_index_get_comp_mode() - Extract the compression mode for the given
> + * pat_index.
> + * @xe: xe device
> + * @pat_index: The pat_index to query
> + *
> + * Return: True if pat_index is compressed, False otherwise
> + */
> +bool xe_pat_index_get_comp_mode(struct xe_device *xe, u16 pat_index);
> +
> #endif
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH v6 01/26] drm/gpuvm: Pass map arguments through a struct
2025-08-07 16:43 ` [PATCH v6 01/26] drm/gpuvm: Pass map arguments through a struct Himal Prasad Ghimiray
2025-08-08 5:03 ` Matthew Brost
@ 2025-08-09 12:43 ` Danilo Krummrich
2025-08-11 6:54 ` Ghimiray, Himal Prasad
2025-08-12 16:51 ` Danilo Krummrich
2 siblings, 1 reply; 51+ messages in thread
From: Danilo Krummrich @ 2025-08-09 12:43 UTC (permalink / raw)
To: Himal Prasad Ghimiray
Cc: intel-xe, Matthew Brost, Thomas Hellström, Boris Brezillon,
Brendan King, Boris Brezillon, Caterina Shablia, Rob Clark,
dri-devel
On Thu Aug 7, 2025 at 6:43 PM CEST, Himal Prasad Ghimiray wrote:
> From: Boris Brezillon <boris.brezillon@collabora.com>
>
> We are about to pass more arguments to drm_gpuvm_sm_map[_ops_create](),
> so, before we do that, let's pass arguments through a struct instead
> of changing each call site every time a new optional argument is added.
>
> v5
> - Use drm_gpuva_op_map—same as drm_gpuvm_map_req
> - Rebase changes for drm_gpuvm_sm_map_exec_lock()
> - Fix kernel-docs
>
> v6
> - Use drm_gpuvm_map_req (Danilo/Matt)
>
> Cc: Danilo Krummrich <dakr@kernel.org>
> Cc: Brendan King <Brendan.King@imgtec.com>
> Cc: Boris Brezillon <bbrezillon@kernel.org>
> Cc: Caterina Shablia <caterina.shablia@collabora.com>
> Cc: Rob Clark <robin.clark@oss.qualcomm.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: <dri-devel@lists.freedesktop.org>
> Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
> Signed-off-by: Caterina Shablia <caterina.shablia@collabora.com>
Caterina does not seem to be involved in handling this patch. Either you should
remove this SoB or add a Co-developed-by: tag for her if that's the case.
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
You may also want to add a Co-developed-by: tag for yourself given that you made
significant changes to the patch. But that's between Boris and you of course.
> +/**
> + * struct drm_gpuvm_map_req - arguments passed to drm_gpuvm_sm_map[_ops_create]()
> + */
> +struct drm_gpuvm_map_req {
> + /**
> + * @op_map: struct drm_gpuva_op_map
> + */
> + struct drm_gpuva_op_map op_map;
I think this should just be 'op', the outer structure says 'map' already.
> +};
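With the suggested rename the struct would read (a sketch, everything else
unchanged):

	/**
	 * struct drm_gpuvm_map_req - arguments passed to drm_gpuvm_sm_map[_ops_create]()
	 */
	struct drm_gpuvm_map_req {
		/** @op: map operation arguments, see &struct drm_gpuva_op_map */
		struct drm_gpuva_op_map op;
	};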
* Re: [PATCH v6 03/26] drm/gpuvm: Support flags in drm_gpuvm_map_req
2025-08-07 16:43 ` [PATCH v6 03/26] drm/gpuvm: Support flags in drm_gpuvm_map_req Himal Prasad Ghimiray
2025-08-08 5:04 ` Matthew Brost
@ 2025-08-09 12:46 ` Danilo Krummrich
2025-08-11 6:56 ` Ghimiray, Himal Prasad
1 sibling, 1 reply; 51+ messages in thread
From: Danilo Krummrich @ 2025-08-09 12:46 UTC (permalink / raw)
To: Himal Prasad Ghimiray
Cc: intel-xe, Matthew Brost, Thomas Hellström, Boris Brezillon,
Caterina Shablia, dri-devel
On Thu Aug 7, 2025 at 6:43 PM CEST, Himal Prasad Ghimiray wrote:
> This change adds support for passing flags to drm_gpuvm_sm_map() and
> sm_map_ops_create(), enabling future extensions that affect split/merge
> logic in drm_gpuvm.
>
> v2
> - Move flag to drm_gpuvm_map_req
>
> Cc: Danilo Krummrich <dakr@kernel.org>
> Cc: Boris Brezillon <bbrezillon@kernel.org>
> Cc: Caterina Shablia <caterina.shablia@collabora.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: <dri-devel@lists.freedesktop.org>
> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> ---
> include/drm/drm_gpuvm.h | 12 ++++++++++++
> 1 file changed, 12 insertions(+)
>
> diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
> index cbb9b6519462..116f77abd570 100644
> --- a/include/drm/drm_gpuvm.h
> +++ b/include/drm/drm_gpuvm.h
> @@ -1049,6 +1049,13 @@ struct drm_gpuva_ops {
> */
> #define drm_gpuva_next_op(op) list_next_entry(op, entry)
>
> +enum drm_gpuvm_sm_map_ops_flags {
Please also add a doc-comment for the enum type itself, explaining where those
flags are used, etc.
> + /**
> + * %DRM_GPUVM_SM_MAP_OPS_FLAG_NONE: DEFAULT sm_map ops
Shouldn't this be '@DRM_GPUVM_SM_MAP_OPS_FLAG_NONE:'?
> + */
> + DRM_GPUVM_SM_MAP_OPS_FLAG_NONE = 0,
> +};
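A minimal sketch of the requested kernel-doc (wording illustrative only):

	/**
	 * enum drm_gpuvm_sm_map_ops_flags - flags for drm_gpuvm_sm_map[_ops_create]()
	 *
	 * Passed through &struct drm_gpuvm_map_req to tweak the split/merge
	 * behavior of the created operations.
	 */
	enum drm_gpuvm_sm_map_ops_flags {
		/**
		 * @DRM_GPUVM_SM_MAP_OPS_FLAG_NONE: default split/merge behavior
		 */
		DRM_GPUVM_SM_MAP_OPS_FLAG_NONE = 0,
	};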
* Re: [PATCH v6 04/26] drm/gpuvm: Introduce DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE flag
2025-08-07 16:43 ` [PATCH v6 04/26] drm/gpuvm: Introduce DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE flag Himal Prasad Ghimiray
2025-08-08 5:20 ` Matthew Brost
@ 2025-08-09 13:23 ` Danilo Krummrich
2025-08-11 6:52 ` Ghimiray, Himal Prasad
2025-08-12 16:58 ` Danilo Krummrich
2 siblings, 1 reply; 51+ messages in thread
From: Danilo Krummrich @ 2025-08-09 13:23 UTC (permalink / raw)
To: Himal Prasad Ghimiray
Cc: intel-xe, Matthew Brost, Thomas Hellström, Boris Brezillon,
dri-devel
On Thu Aug 7, 2025 at 6:43 PM CEST, Himal Prasad Ghimiray wrote:
> - DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE: This flag is used by
> drm_gpuvm_sm_map_ops_create to iterate over GPUVMAs in the
> user-provided range and split the existing non-GEM object VMA if the
What do you mean by non-GEM object VMA? I assume you just mean sparse
mappings?
> start or end of the input range lies within it. The operations can
> create up to 2 REMAPS and 2 MAPs.
Wait -- that doesn't match how I thought this works.
In the RFC you gave the following example:
Example:
Input Range: 0x00007f0a54000000 to 0x00007f0a54400000
GPU VMA: 0x0000000000000000 to 0x0000800000000000
Operations Result:
- REMAP:UNMAP: addr=0x0000000000000000, range=0x0000800000000000
- REMAP:PREV: addr=0x0000000000000000, range=0x00007f0a54000000
- REMAP:NEXT: addr=0x00007f0a54400000, range=0x000000f5abc00000
- MAP: addr=0x00007f0a54000000, range=0x0000000000400000
That's exactly the same as what the existing logic does. So in which case do you
have *two* MAP operations?
For completeness, the other example you gave was:
Example:
Input Range: 0x00007fc898800000 to 0x00007fc899000000
GPU VMAs:
- 0x0000000000000000 to 0x00007fc898800000
- 0x00007fc898800000 to 0x00007fc898a00000
- 0x00007fc898a00000 to 0x00007fc898c00000
- 0x00007fc898c00000 to 0x00007fc899000000
- 0x00007fc899000000 to 0x00007fc899200000
Operations Result: None
This just means that if things are already split at the defined edges, we just
don't do anything, which also conforms with the existing logic except for the
"no merge" part, which is obviously fine given that it's explicitly for
splitting things.
Can you please provide some additional *simple* examples, like the documentation
of GPUVM does today for the normal split/merge stuff? I.e. please don't use
complex real addresses, that makes it hard to parse.
Also, can you please provide some information on what this whole thing does
*semantically*? I thought I understood it, but now I'm not so sure anymore.
> The purpose of this operation is to be
> used by the Xe driver to assign attributes to GPUVMAs within the
> user-defined range.
Well, hopefully it's useful to other drivers as well. :)
> Unlike drm_gpuvm_sm_map_ops_flags in default mode,
> the operations with this flag will never include UNMAPs or
> merges, and may produce no operations at all.
I really think this is significant enough of a feature to add some proper
documentation about it.
Please add a separate section about madvise operations to the documentation at
the beginning of the drivers/gpu/drm/drm_gpuvm.c file.
>
> v2
> - use drm_gpuvm_sm_map_ops_create with flags instead of defining new
> ops_create (Danilo)
If this turns out not to be what I thought semantically and we still agree it's
the correct approach, I think I have to take this back and it should indeed be
an entirely separate code path. But let's wait for your answers above.
Again, I really think this needs some proper documentation like in the
"DOC: Split and Merge" documentation section.
* Re: [PATCH v6 04/26] drm/gpuvm: Introduce DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE flag
2025-08-09 13:23 ` Danilo Krummrich
@ 2025-08-11 6:52 ` Ghimiray, Himal Prasad
2025-08-12 16:06 ` Danilo Krummrich
0 siblings, 1 reply; 51+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-08-11 6:52 UTC (permalink / raw)
To: Danilo Krummrich
Cc: intel-xe, Matthew Brost, Thomas Hellström, Boris Brezillon,
dri-devel
On 09-08-2025 18:53, Danilo Krummrich wrote:
> On Thu Aug 7, 2025 at 6:43 PM CEST, Himal Prasad Ghimiray wrote:
>> - DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE: This flag is used by
>> drm_gpuvm_sm_map_ops_create to iterate over GPUVMAs in the
>> user-provided range and split the existing non-GEM object VMA if the
>
> What do you mean by non-GEM object VMA? I assume you just mean sparse
> mappings?
True.
>
>> start or end of the input range lies within it. The operations can
>> create up to 2 REMAPS and 2 MAPs.
>
> Wait -- that doesn't match how I thought this works.
>
> In the RFC you gave the following example:
>
> Example:
> Input Range: 0x00007f0a54000000 to 0x00007f0a54400000
> GPU VMA: 0x0000000000000000 to 0x0000800000000000
> Operations Result:
> - REMAP:UNMAP: addr=0x0000000000000000, range=0x0000800000000000
> - REMAP:PREV: addr=0x0000000000000000, range=0x00007f0a54000000
> - REMAP:NEXT: addr=0x00007f0a54400000, range=0x000000f5abc00000
> - MAP: addr=0x00007f0a54000000, range=0x0000000000400000
>
> That's exactly the same as what the existing logic does. So in which case do you
> have *two* MAP operations?
Possible scenarios for ops functionality based on input start and end
address from user:
a) User-provided range is a subset of an existing drm_gpuva
Expected Result: Same behavior as the default sm_map logic.
Reference: Case 1 from [1].
b) Either start or end (but not both) is not aligned with a drm_gpuva
boundary
Expected Result: One REMAP and one MAP operation.
Reference: Case 3 from [1].
Existing GPUVMAs:
drm_gpuva1 drm_gpuva2
[a----------------------------b-1][b-------------------c-1]
User Input to ops:
start = inside drm_gpuva1
end = exactly at c-1 (end of drm_gpuva2)
Resulting Mapping:
drm_gpuva1:pre drm_gpuva:New map drm_gpuva2
[a---------start-1][start------- b-1] [b------------c-1]
Ops Created:
REMAP:UNMAP drm_gpuva1 a to b
REMAP:PREV a to start - 1
MAP: start to b-1
Note: No unmap of drm_gpuva2 and no merging of New map and drm_gpuva2.
c) Both start and end are not aligned with drm_gpuva boundaries, and
they fall within different drm_gpuva regions
Expected Result: Two REMAP operations and two MAP operations.
Reference: Case 2 from [1].
d) User-provided range does not overlap with any existing drm_gpuva
Expected Result: No operations.

e) start and end exactly match the boundaries of one or more existing
drm_gpuva regions. This includes cases where start is at the beginning
of drm_gpuva1 and end is at the end of drm_gpuva2 (drm_gpuva1 and
drm_gpuva2 can be the same or different).
Expected Result: No operations.
[1]
https://lore.kernel.org/intel-xe/4203f450-4b49-401d-81a8-cdcca02035f9@intel.com/
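To make the intended driver-side use concrete, a sketch assuming the v6
signatures in this series; xe_apply_attrs() is a made-up stand-in for the
driver's attribute update:

	struct drm_gpuvm_map_req req = {
		.op_map.va.addr = start,
		.op_map.va.range = end - start,
		.flags = DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE,
	};
	struct drm_gpuva_ops *ops;
	struct drm_gpuva_op *op;

	ops = drm_gpuvm_sm_map_ops_create(gpuvm, &req);
	if (IS_ERR(ops))
		return PTR_ERR(ops);

	/* only REMAP and MAP ops can show up in madvise mode */
	drm_gpuva_for_each_op(op, ops)
		xe_apply_attrs(vm, op);

	drm_gpuva_ops_free(gpuvm, ops);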
>
> For completeness, the other example you gave was:
>
> Example:
> Input Range: 0x00007fc898800000 to 0x00007fc899000000
> GPU VMAs:
> - 0x0000000000000000 to 0x00007fc898800000
> - 0x00007fc898800000 to 0x00007fc898a00000
> - 0x00007fc898a00000 to 0x00007fc898c00000
> - 0x00007fc898c00000 to 0x00007fc899000000
> - 0x00007fc899000000 to 0x00007fc899200000
> Operations Result: None
>
> This just means that if things are already split at the defined edges, we just
> don't do anything, which also conforms with the existing logic except for the
> "no merge" part, which is obviously fine given that it's explicitly for
> splitting things.
>
> Can you please provide some additional *simple* examples, like the documentation
> of GPUVM does today for the normal split/merge stuff? I.e. please don't use
> complex real addresses, that makes it hard to parse.
>
> Also, can you please provide some information on what this whole thing does
> *semantically*? I thought I understood it, but now I'm not so sure anymore.
>
I’ve tried to explain the behavior/use case with madvise and the expected
outcomes of the ops logic in detail in [1]. Could you please take a
moment to review that and let me know if the explanation is sufficient
or if any part needs further clarification?
>> The purpose of this operation is to be
>> used by the Xe driver to assign attributes to GPUVMAs within the
>> user-defined range.
>
> Well, hopefully it's useful to other drivers as well. :)
It should be. :)
>
>> Unlike drm_gpuvm_sm_map_ops_flags in default mode,
>> the operations with this flag will never include UNMAPs or
>> merges, and may produce no operations at all.
>
> I really think this is significant enough of a feature to add some proper
> documentation about it.
>
> Please add a separate section about madvise operations to the documentation at
> the beginning of the drivers/gpu/drm/drm_gpuvm.c file.
Sure, will do that.
>
>>
>> v2
>> - use drm_gpuvm_sm_map_ops_create with flags instead of defining new
>> ops_create (Danilo)
>
> If this turns out not to be what I thought semantically and we still agree it's
> the correct approach, I think I have to take this back and it should indeed be
> an entirely separate code path. But let's wait for your answers above.
>
> Again, I really think this needs some proper documentation like in the
> "DOC: Split and Merge" documentation section.
Sure
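The requested documentation entry could start out like this (illustrative
wording only, based on the semantics above):

	/**
	 * DOC: Madvise Operations
	 *
	 * In contrast to the default split and merge, madvise requests never
	 * unmap or merge; they only split existing sparse mappings at the
	 * boundaries of the given range so that attributes can be applied to
	 * the enclosed drm_gpuvas. GEM-backed mappings and already-aligned
	 * boundaries produce no operations at all.
	 */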
* Re: [PATCH v6 01/26] drm/gpuvm: Pass map arguments through a struct
2025-08-09 12:43 ` Danilo Krummrich
@ 2025-08-11 6:54 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 51+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-08-11 6:54 UTC (permalink / raw)
To: Danilo Krummrich
Cc: intel-xe, Matthew Brost, Thomas Hellström, Boris Brezillon,
Brendan King, Boris Brezillon, Caterina Shablia, Rob Clark,
dri-devel
On 09-08-2025 18:13, Danilo Krummrich wrote:
> On Thu Aug 7, 2025 at 6:43 PM CEST, Himal Prasad Ghimiray wrote:
>> From: Boris Brezillon <boris.brezillon@collabora.com>
>>
>> We are about to pass more arguments to drm_gpuvm_sm_map[_ops_create](),
>> so, before we do that, let's pass arguments through a struct instead
>> of changing each call site every time a new optional argument is added.
>>
>> v5
>> - Use drm_gpuva_op_map—same as drm_gpuvm_map_req
>> - Rebase changes for drm_gpuvm_sm_map_exec_lock()
>> - Fix kernel-docs
>>
>> v6
>> - Use drm_gpuvm_map_req (Danilo/Matt)
>>
>> Cc: Danilo Krummrich <dakr@kernel.org>
>> Cc: Brendan King <Brendan.King@imgtec.com>
>> Cc: Boris Brezillon <bbrezillon@kernel.org>
>> Cc: Caterina Shablia <caterina.shablia@collabora.com>
>> Cc: Rob Clark <robin.clark@oss.qualcomm.com>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: <dri-devel@lists.freedesktop.org>
>> Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
>> Signed-off-by: Caterina Shablia <caterina.shablia@collabora.com>
>
> Caterina does not seem to be involved in handling this patch. Either you should
> remove this SoB or adda Co-developed-by: tag for her if that's the case.
Caterina, will it be okay to remove this SoB?
>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>
> You may also want to add a Co-developed-by: tag for yourself given that you made
> significant changes to the patch. But that's between Boris and you of course.
If Boris doesn't have any objection, I will go ahead and add a
Co-developed-by: tag for myself.
>
>> +/**
>> + * struct drm_gpuvm_map_req - arguments passed to drm_gpuvm_sm_map[_ops_create]()
>> + */
>> +struct drm_gpuvm_map_req {
>> + /**
>> + * @op_map: struct drm_gpuva_op_map
>> + */
>> + struct drm_gpuva_op_map op_map;
>
> I think this should just be 'op', the outer structure says 'map' already.
Makes sense. Will update in next patch.
>
>> +};
* Re: [PATCH v6 03/26] drm/gpuvm: Support flags in drm_gpuvm_map_req
2025-08-09 12:46 ` Danilo Krummrich
@ 2025-08-11 6:56 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 51+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-08-11 6:56 UTC (permalink / raw)
To: Danilo Krummrich
Cc: intel-xe, Matthew Brost, Thomas Hellström, Boris Brezillon,
Caterina Shablia, dri-devel
On 09-08-2025 18:16, Danilo Krummrich wrote:
> On Thu Aug 7, 2025 at 6:43 PM CEST, Himal Prasad Ghimiray wrote:
>> This change adds support for passing flags to drm_gpuvm_sm_map() and
>> sm_map_ops_create(), enabling future extensions that affect split/merge
>> logic in drm_gpuvm.
>>
>> v2
>> - Move flag to drm_gpuvm_map_req
>>
>> Cc: Danilo Krummrich <dakr@kernel.org>
>> Cc: Boris Brezillon <bbrezillon@kernel.org>
>> Cc: Caterina Shablia <caterina.shablia@collabora.com>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: <dri-devel@lists.freedesktop.org>
>> Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> ---
>> include/drm/drm_gpuvm.h | 12 ++++++++++++
>> 1 file changed, 12 insertions(+)
>>
>> diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
>> index cbb9b6519462..116f77abd570 100644
>> --- a/include/drm/drm_gpuvm.h
>> +++ b/include/drm/drm_gpuvm.h
>> @@ -1049,6 +1049,13 @@ struct drm_gpuva_ops {
>> */
>> #define drm_gpuva_next_op(op) list_next_entry(op, entry)
>>
>> +enum drm_gpuvm_sm_map_ops_flags {
>
> Please also add a doc-comment for the enum type itself, explaining where those
> flags are used, etc.
Sure, will do.
>
>> + /**
>> + * %DRM_GPUVM_SM_MAP_OPS_FLAG_NONE: DEFAULT sm_map ops
>
> Shouldn't this be '@DRM_GPUVM_SM_MAP_OPS_FLAG_NONE:'?
Yup, will change in next version.
>
>> + */
>> + DRM_GPUVM_SM_MAP_OPS_FLAG_NONE = 0,
>> +};
* Re: [PATCH v6 04/26] drm/gpuvm: Introduce DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE flag
2025-08-11 6:52 ` Ghimiray, Himal Prasad
@ 2025-08-12 16:06 ` Danilo Krummrich
2025-08-12 17:52 ` Ghimiray, Himal Prasad
0 siblings, 1 reply; 51+ messages in thread
From: Danilo Krummrich @ 2025-08-12 16:06 UTC (permalink / raw)
To: Ghimiray, Himal Prasad
Cc: intel-xe, Matthew Brost, Thomas Hellström, Boris Brezillon,
dri-devel
On Mon Aug 11, 2025 at 8:52 AM CEST, Himal Prasad Ghimiray wrote:
> On 09-08-2025 18:53, Danilo Krummrich wrote:
> Possible scenarios for ops functionality based on input start and end
> address from user:
>
> a) User-provided range is a subset of an existing drm_gpuva
> Expected Result: Same behavior as the default sm_map logic.
> Reference: Case 1 from [1].
>
> b) Either start or end (but not both) is not aligned with a drm_gpuva
> boundary
> Expected Result: One REMAP and one MAP operation.
> Reference: Case 3 from [1].
>
> Existing GPUVMAs:
>
> drm_gpuva1 drm_gpuva2
> [a----------------------------b-1][b-------------------c-1]
>
> User Input to ops:
> start = inside drm_gpuva1
> end = exactly at c-1 (end of drm_gpuva2)
>
> Resulting Mapping:
> drm_gpuva1:pre drm_gpuva:New map drm_gpuva2
> [a---------start-1][start------- b-1] [b------------c-1]
>
> Ops Created:
> REMAP:UNMAP drm_gpuva1 a to b
> REMAP:PREV a to start - 1
> MAP: start to b-1
>
>> Note: No unmap of drm_gpuva2 and no merging of New map and drm_gpuva2.
>
> c) Both start and end are not aligned with drm_gpuva boundaries, and
> they fall within different drm_gpuva regions
> Expected Result: Two REMAP operations and two MAP operations.
> Reference: Case 2 from [1].
>
>
> d) User-provided range does not overlap with any existing drm_gpuva
> Expected Result: No operations.
>
> e) start and end exactly match the boundaries of one or more existing
> drm_gpuva regions. This includes cases where start is at the beginning
> of drm_gpuva1 and end is at the end of drm_gpuva2 (drm_gpuva1 and
> drm_gpuva2 can be the same or different).
> Expected Result: No operations.
>
> [1]
> https://lore.kernel.org/intel-xe/4203f450-4b49-401d-81a8-cdcca02035f9@intel.com/
<snip>
> I’ve tried to explain the behavior/use case with madvise and the expected
> outcomes of the ops logic in detail in [1]. Could you please take a
> moment to review that and let me know if the explanation is sufficient
> or if any part needs further clarification?
Thanks a lot for writing this up!
I think this clarifies everything; the examples from [1] are good (sorry that
your reply from the RFC got lost somehow on my end).
>> Please add a separate section about madvise operations to the documentation at
>> the beginning of the drivers/gpu/drm/drm_gpuvm.c file.
>
> Sure will do that.
Great, this will help users (as well as reviewers) a lot. Please also add your
examples from [1] to this entry, similar to the existing examples for sm_map.
>>> v2
>>> - use drm_gpuvm_sm_map_ops_create with flags instead of defining new
>>> ops_create (Danilo)
>>
>> If this turns out not to be what I thought semantically and we still agree it's
>> the correct approach, I think I have to take this back and it should indeed be
>> an entirely separate code path. But let's wait for your answers above.
Having the correct understanding of how this is supposed to work (and seeing how
the code turns out) I think it's still OK to integrate it into sm_map().
However, it probably makes sense to factor out the code into a common function
and then build the madvise() and sm_map() functions on top of it.
Please also find some more comments on the patch itself.
>> Again, I really think this needs some proper documentation like in the
>> "DOC: Split and Merge" documentation section.
>
> Sure
Thanks!
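One way that factoring could look; the _madvise name and the exact
parameters are assumptions, not from the series:

	/* common core; 'madvise' selects split-only semantics */
	static int
	__drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
			   const struct drm_gpuvm_ops *ops, void *priv,
			   const struct drm_gpuvm_map_req *req, bool madvise);

	int drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm, void *priv,
			     const struct drm_gpuvm_map_req *req)
	{
		return __drm_gpuvm_sm_map(gpuvm, gpuvm->ops, priv, req, false);
	}

	int drm_gpuvm_sm_map_madvise(struct drm_gpuvm *gpuvm, void *priv,
				     const struct drm_gpuvm_map_req *req)
	{
		return __drm_gpuvm_sm_map(gpuvm, gpuvm->ops, priv, req, true);
	}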
* Re: [PATCH v6 01/26] drm/gpuvm: Pass map arguments through a struct
2025-08-07 16:43 ` [PATCH v6 01/26] drm/gpuvm: Pass map arguments through a struct Himal Prasad Ghimiray
2025-08-08 5:03 ` Matthew Brost
2025-08-09 12:43 ` Danilo Krummrich
@ 2025-08-12 16:51 ` Danilo Krummrich
2025-08-12 17:55 ` Ghimiray, Himal Prasad
2 siblings, 1 reply; 51+ messages in thread
From: Danilo Krummrich @ 2025-08-12 16:51 UTC (permalink / raw)
To: Himal Prasad Ghimiray
Cc: intel-xe, Matthew Brost, Thomas Hellström, Boris Brezillon,
Brendan King, Boris Brezillon, Caterina Shablia, Rob Clark,
dri-devel
On Thu Aug 7, 2025 at 6:43 PM CEST, Himal Prasad Ghimiray wrote:
> @@ -2102,17 +2106,16 @@ op_unmap_cb(const struct drm_gpuvm_ops *fn, void *priv,
> static int
> __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> const struct drm_gpuvm_ops *ops, void *priv,
> - u64 req_addr, u64 req_range,
> - struct drm_gem_object *req_obj, u64 req_offset)
> + const struct drm_gpuvm_map_req *req)
> {
> struct drm_gpuva *va, *next;
> - u64 req_end = req_addr + req_range;
> + u64 req_end = req->op_map.va.addr + req->op_map.va.range;
Forgot to add, please extract all previous values from req, such that the below
diff is minimal and the code remains easier to read.
* Re: [PATCH v6 04/26] drm/gpuvm: Introduce DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE flag
2025-08-07 16:43 ` [PATCH v6 04/26] drm/gpuvm: Introduce DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE flag Himal Prasad Ghimiray
2025-08-08 5:20 ` Matthew Brost
2025-08-09 13:23 ` Danilo Krummrich
@ 2025-08-12 16:58 ` Danilo Krummrich
2025-08-12 17:54 ` Ghimiray, Himal Prasad
2 siblings, 1 reply; 51+ messages in thread
From: Danilo Krummrich @ 2025-08-12 16:58 UTC (permalink / raw)
To: Himal Prasad Ghimiray
Cc: intel-xe, Matthew Brost, Thomas Hellström, Boris Brezillon,
dri-devel
On Thu Aug 7, 2025 at 6:43 PM CEST, Himal Prasad Ghimiray wrote:
> @@ -2110,6 +2110,8 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> {
> struct drm_gpuva *va, *next;
> u64 req_end = req->op_map.va.addr + req->op_map.va.range;
> + bool is_madvise_ops = (req->flags & DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE);
Let's just call this 'madvise'.
> + bool needs_map = !is_madvise_ops;
> int ret;
>
> if (unlikely(!drm_gpuvm_range_valid(gpuvm, req->op_map.va.addr, req->op_map.va.range)))
> @@ -2122,26 +2124,35 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> u64 range = va->va.range;
> u64 end = addr + range;
> bool merge = !!va->gem.obj;
> + bool skip_madvise_ops = is_madvise_ops && merge;
IIUC, you're either going for continue or break in this case. I think continue
would always be correct and break is an optimization if end <= req_end?
If that's correct, please just do either
if (madvise && va->gem.obj)
continue;
or
if (madvise && va->gem.obj) {
if (end > req_end)
break;
else
continue;
}
instead of sprinkling the skip_madvise_ops checks everywhere.
>
> + needs_map = !is_madvise_ops;
> if (addr == req->op_map.va.addr) {
> merge &= obj == req->op_map.gem.obj &&
> offset == req->op_map.gem.offset;
>
> if (end == req_end) {
> - ret = op_unmap_cb(ops, priv, va, merge);
> - if (ret)
> - return ret;
> + if (!is_madvise_ops) {
> + ret = op_unmap_cb(ops, priv, va, merge);
> + if (ret)
> + return ret;
> + }
> break;
> }
>
> if (end < req_end) {
> - ret = op_unmap_cb(ops, priv, va, merge);
> - if (ret)
> - return ret;
> + if (!is_madvise_ops) {
> + ret = op_unmap_cb(ops, priv, va, merge);
I think we should pass madvise as argument to op_unmap_cb() and make it a noop
internally rather than having all the conditionals.
> + if (ret)
> + return ret;
> + }
> continue;
> }
>
> if (end > req_end) {
> + if (skip_madvise_ops)
> + break;
> +
> struct drm_gpuva_op_map n = {
> .va.addr = req_end,
> .va.range = range - req->op_map.va.range,
> @@ -2156,6 +2167,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
> ret = op_remap_cb(ops, priv, NULL, &n, &u);
> if (ret)
> return ret;
> +
> + if (is_madvise_ops)
> + needs_map = true;
I don't like this needs_map state...
Maybe we could have
struct drm_gpuvm_map_req *op_map = madvise ? NULL : req;
at the beginning of the function and then change this line to
if (madvise)
op_map = req;
and op_map_cb() can just handle a NULL pointer.
Yeah, I feel like that's better.
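A sketch of the callback side under that suggestion, assuming op_map_cb()
already takes the request struct after patch 1:

	static int
	op_map_cb(const struct drm_gpuvm_ops *fn, void *priv,
		  const struct drm_gpuvm_map_req *req)
	{
		struct drm_gpuva_op op = {};

		/* madvise iterations pass NULL until a split needs a MAP */
		if (!req)
			return 0;

		op.op = DRM_GPUVA_OP_MAP;
		op.map = req->op_map;

		return fn->sm_step_map(&op, priv);
	}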
* Re: [PATCH v6 04/26] drm/gpuvm: Introduce DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE flag
2025-08-12 16:06 ` Danilo Krummrich
@ 2025-08-12 17:52 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 51+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-08-12 17:52 UTC (permalink / raw)
To: Danilo Krummrich
Cc: intel-xe, Matthew Brost, Thomas Hellström, Boris Brezillon,
dri-devel
On 12-08-2025 21:36, Danilo Krummrich wrote:
> On Mon Aug 11, 2025 at 8:52 AM CEST, Himal Prasad Ghimiray wrote:
>> On 09-08-2025 18:53, Danilo Krummrich wrote:
>> Possible scenarios for ops functionality based on input start and end
>> address from user:
>>
>> a) User-provided range is a subset of an existing drm_gpuva
>> Expected Result: Same behavior as the default sm_map logic.
>> Reference: Case 1 from [1].
>>
>> b) Either start or end (but not both) is not aligned with a drm_gpuva
>> boundary
>> Expected Result: One REMAP and one MAP operation.
>> Reference: Case 3 from [1].
>>
>> Existing GPUVMAs:
>>
>> drm_gpuva1 drm_gpuva2
>> [a----------------------------b-1][b-------------------c-1]
>>
>> User Input to ops:
>> start = inside drm_gpuva1
>> end = exactly at c-1 (end of drm_gpuva2)
>>
>> Resulting Mapping:
>> drm_gpuva1:pre drm_gpuva:New map drm_gpuva2
>> [a---------start-1][start------- b-1] [b------------c-1]
>>
>> Ops Created:
>> REMAP:UNMAP drm_gpuva1 a to b
>> REMAP:PREV a to start - 1
>> MAP: start to b-1
>>
>> Note: No unmap of drm_gpuva2 and no merging of New map and drm_gpuva2.
>>
>> c) Both start and end are not aligned with drm_gpuva boundaries, and
>> they fall within different drm_gpuva regions
>> Expected Result: Two REMAP operations and two MAP operations.
>> Reference: Case 2 from [1].
>>
>>
>> d) User-provided range does not overlap with any existing drm_gpuva
>> Expected Result: No operations.
>>
>> e) start and end exactly match the boundaries of one or more existing
>> drm_gpuva regions. This includes cases where start is at the beginning
>> of drm_gpuva1 and end is at the end of drm_gpuva2 (drm_gpuva1 and
>> drm_gpuva2 can be the same or different).
>> Expected Result: No operations.
>>
>> [1]
>> https://lore.kernel.org/intel-xe/4203f450-4b49-401d-81a8-cdcca02035f9@intel.com/
>
> <snip>
>
>> I’ve tried to explain the behavior/use case with madvise and the expected
>> outcomes of the ops logic in detail in [1]. Could you please take a
>> moment to review that and let me know if the explanation is sufficient
>> or if any part needs further clarification?
>
> Thanks a lot for writing this up!
>
> I think this clarifies everything, the examples from [1] are good (sorry that
> your reply from the RFC got lost somehow on my end).
>
>>> Please add a separate section about madvise operations to the documentation at
>>> the beginning of the drivers/gpu/drm/drm_gpuvm.c file.
>>
>> Sure will do that.
>
> Great, this will help users (as well as reviewers) a lot. Please also add your
> examples from [1] to this entry, similar to the existing examples for sm_map.
>
>>>> v2
>>>> - use drm_gpuvm_sm_map_ops_create with flags instead of defining new
>>>> ops_create (Danilo)
>>>
>>> If this turns out not to be what I thought semantically and we still agree it's
>>> the correct approach, I think I have to take this back and it should indeed be
>>> an entirely separate code path. But let's wait for your answers above.
>
> Having the correct understanding of how this is supposed to work (and seeing how
> the code turns out) I think it's still OK to integrate it into sm_map().
>
> However, it probably makes sense to factor out the code into a common function
> and then build the madvise() and sm_map() functions on top of it.
__drm_gpuvm_sm_map() is that common function; does
drm_gpuvm_madvise_ops_create() sound OK? With separate functions for
sm_map and madvise, I see there's no need to add a flag to
drm_gpuvm_map_req at this moment. I will drop [1] in the next version.
[1] https://patchwork.freedesktop.org/patch/667561/?series=149550&rev=6
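For reference, the proposed entry point might look like this (a sketch only,
using the name suggested above):

	/* split-only ops for madvise, built on __drm_gpuvm_sm_map() */
	struct drm_gpuva_ops *
	drm_gpuvm_madvise_ops_create(struct drm_gpuvm *gpuvm,
				     const struct drm_gpuvm_map_req *req);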
Thanks
>
> Please also find some more comments on the patch itself.
>
>>> Again, I really think this needs some proper documentation like in the
>>> "DOC: Split and Merge" documentation section.
>>
>> Sure
>
> Thanks!
* Re: [PATCH v6 04/26] drm/gpuvm: Introduce DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE flag
2025-08-12 16:58 ` Danilo Krummrich
@ 2025-08-12 17:54 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 51+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-08-12 17:54 UTC (permalink / raw)
To: Danilo Krummrich
Cc: intel-xe, Matthew Brost, Thomas Hellström, Boris Brezillon,
dri-devel
On 12-08-2025 22:28, Danilo Krummrich wrote:
> On Thu Aug 7, 2025 at 6:43 PM CEST, Himal Prasad Ghimiray wrote:
>> @@ -2110,6 +2110,8 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
>> {
>> struct drm_gpuva *va, *next;
>> u64 req_end = req->op_map.va.addr + req->op_map.va.range;
>> + bool is_madvise_ops = (req->flags & DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE);
>
> Let's just call this 'madvise'.
Sure.
>
>> + bool needs_map = !is_madvise_ops;
>> int ret;
>>
>> if (unlikely(!drm_gpuvm_range_valid(gpuvm, req->op_map.va.addr, req->op_map.va.range)))
>> @@ -2122,26 +2124,35 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
>> u64 range = va->va.range;
>> u64 end = addr + range;
>> bool merge = !!va->gem.obj;
>> + bool skip_madvise_ops = is_madvise_ops && merge;
>
> IIUC, you're either going for continue or break in this case. I think continue
> would always be correct and break is an optimization if end <= req_end?
>
> If that's correct, please just do either
>
> if (madvise && va->gem.obj)
> continue;
Will use this.
>
> or
>
> if (madvise && va->gem.obj) {
> if (end > req_end)
> break;
> else
> continue;
> }
>
> instead of sprinkling the skip_madvise_ops checks everywhere.
True, the recommended checks make it cleaner.
>
>>
>> + needs_map = !is_madvise_ops;
>> if (addr == req->op_map.va.addr) {
>> merge &= obj == req->op_map.gem.obj &&
>> offset == req->op_map.gem.offset;
>>
>> if (end == req_end) {
>> - ret = op_unmap_cb(ops, priv, va, merge);
>> - if (ret)
>> - return ret;
>> + if (!is_madvise_ops) {
>> + ret = op_unmap_cb(ops, priv, va, merge);
>> + if (ret)
>> + return ret;
>> + }
>> break;
>> }
>>
>> if (end < req_end) {
>> - ret = op_unmap_cb(ops, priv, va, merge);
>> - if (ret)
>> - return ret;
>> + if (!is_madvise_ops) {
>> + ret = op_unmap_cb(ops, priv, va, merge);
>
> I think we should pass madvise as argument to op_unmap_cb() and make it a noop
> internally rather than having all the conditionals.
Makes sense. Will modify in next version.
>
>> + if (ret)
>> + return ret;
>> + }
>> continue;
>> }
>>
>> if (end > req_end) {
>> + if (skip_madvise_ops)
>> + break;
>> +
>> struct drm_gpuva_op_map n = {
>> .va.addr = req_end,
>> .va.range = range - req->op_map.va.range,
>> @@ -2156,6 +2167,9 @@ __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
>> ret = op_remap_cb(ops, priv, NULL, &n, &u);
>> if (ret)
>> return ret;
>> +
>> + if (is_madvise_ops)
>> + needs_map = true;
>
> I don't like this needs_map state...
>
> Maybe we could have
>
> struct drm_gpuvm_map_req *op_map = madvise ? NULL : req;
>
> at the beginning of the function and then change this line to
>
> if (madvise)
> op_map = req;
>
> and op_map_cb() can just handle a NULL pointer.
>
> Yeah, I feel like that's better.
Agreed.
Thanks for the review.
* Re: [PATCH v6 01/26] drm/gpuvm: Pass map arguments through a struct
2025-08-12 16:51 ` Danilo Krummrich
@ 2025-08-12 17:55 ` Ghimiray, Himal Prasad
0 siblings, 0 replies; 51+ messages in thread
From: Ghimiray, Himal Prasad @ 2025-08-12 17:55 UTC (permalink / raw)
To: Danilo Krummrich
Cc: intel-xe, Matthew Brost, Thomas Hellström, Boris Brezillon,
Brendan King, Boris Brezillon, Caterina Shablia, Rob Clark,
dri-devel
On 12-08-2025 22:21, Danilo Krummrich wrote:
> On Thu Aug 7, 2025 at 6:43 PM CEST, Himal Prasad Ghimiray wrote:
>> @@ -2102,17 +2106,16 @@ op_unmap_cb(const struct drm_gpuvm_ops *fn, void *priv,
>> static int
>> __drm_gpuvm_sm_map(struct drm_gpuvm *gpuvm,
>> const struct drm_gpuvm_ops *ops, void *priv,
>> - u64 req_addr, u64 req_range,
>> - struct drm_gem_object *req_obj, u64 req_offset)
>> + const struct drm_gpuvm_map_req *req)
>> {
>> struct drm_gpuva *va, *next;
>> - u64 req_end = req_addr + req_range;
>> + u64 req_end = req->op_map.va.addr + req->op_map.va.range;
>
> Forgot to add, please extract all previous values from req, such that the below
> diff is minimal and the code remains easier to read.
Noted.
Thread overview: 51+ messages
2025-08-07 16:43 [PATCH v6 00/26] MADVISE FOR XE Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 01/26] drm/gpuvm: Pass map arguments through a struct Himal Prasad Ghimiray
2025-08-08 5:03 ` Matthew Brost
2025-08-09 12:43 ` Danilo Krummrich
2025-08-11 6:54 ` Ghimiray, Himal Prasad
2025-08-12 16:51 ` Danilo Krummrich
2025-08-12 17:55 ` Ghimiray, Himal Prasad
2025-08-07 16:43 ` [PATCH v6 02/26] drm/gpuvm: Kill drm_gpuva_init() Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 03/26] drm/gpuvm: Support flags in drm_gpuvm_map_req Himal Prasad Ghimiray
2025-08-08 5:04 ` Matthew Brost
2025-08-09 12:46 ` Danilo Krummrich
2025-08-11 6:56 ` Ghimiray, Himal Prasad
2025-08-07 16:43 ` [PATCH v6 04/26] drm/gpuvm: Introduce DRM_GPUVM_SM_MAP_OPS_FLAG_SPLIT_MADVISE flag Himal Prasad Ghimiray
2025-08-08 5:20 ` Matthew Brost
2025-08-09 13:23 ` Danilo Krummrich
2025-08-11 6:52 ` Ghimiray, Himal Prasad
2025-08-12 16:06 ` Danilo Krummrich
2025-08-12 17:52 ` Ghimiray, Himal Prasad
2025-08-12 16:58 ` Danilo Krummrich
2025-08-12 17:54 ` Ghimiray, Himal Prasad
2025-08-07 16:43 ` [PATCH v6 05/26] drm/xe/uapi: Add madvise interface Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 06/26] drm/xe/vm: Add attributes struct as member of vma Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 07/26] drm/xe/vma: Move pat_index to vma attributes Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 08/26] drm/xe/vma: Modify new_vma to accept struct xe_vma_mem_attr as parameter Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 09/26] drm/gpusvm: Make drm_gpusvm_for_each_* macros public Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 10/26] drm/xe/svm: Split system allocator vma incase of madvise call Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 11/26] drm/xe: Allow CPU address mirror VMA unbind with gpu bindings for madvise Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 12/26] drm/xe/svm: Add xe_svm_ranges_zap_ptes_in_range() for PTE zapping Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 13/26] drm/xe: Implement madvise ioctl for xe Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 14/26] drm/xe/svm : Add svm ranges migration policy on atomic access Himal Prasad Ghimiray
2025-08-08 5:42 ` Matthew Brost
2025-08-07 16:43 ` [PATCH v6 15/26] drm/xe/madvise: Update migration policy based on preferred location Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 16/26] drm/xe/pat: Add helper for compression mode of pat index Himal Prasad Ghimiray
2025-08-08 5:47 ` Matthew Brost
2025-08-07 16:43 ` [PATCH v6 17/26] drm/xe/svm: Support DRM_XE_SVM_MEM_RANGE_ATTR_PAT memory attribute Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 18/26] drm/xe/uapi: Add flag for consulting madvise hints on svm prefetch Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 19/26] drm/xe/svm: Consult madvise preferred location in prefetch Himal Prasad Ghimiray
2025-08-08 0:30 ` Matthew Brost
2025-08-07 16:43 ` [PATCH v6 20/26] drm/xe/bo: Add attributes field to xe_bo Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 21/26] drm/xe/bo: Update atomic_access attribute on madvise Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 22/26] drm/xe/madvise: Skip vma invalidation if mem attr are unchanged Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 23/26] drm/xe/vm: Add helper to check for default VMA memory attributes Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 24/26] drm/xe: Reset VMA attributes to default in SVM garbage collector Himal Prasad Ghimiray
2025-08-07 17:13 ` Matthew Brost
2025-08-07 16:43 ` [PATCH v6 25/26] drm/xe: Enable madvise ioctl for xe Himal Prasad Ghimiray
2025-08-07 16:43 ` [PATCH v6 26/26] drm/xe/uapi: Add UAPI for querying VMA count and memory attributes Himal Prasad Ghimiray
2025-08-07 18:02 ` ✗ CI.checkpatch: warning for MADVISE FOR XE (rev6) Patchwork
2025-08-07 18:03 ` ✓ CI.KUnit: success " Patchwork
2025-08-07 18:18 ` ✗ CI.checksparse: warning " Patchwork
2025-08-07 19:11 ` ✓ Xe.CI.BAT: success " Patchwork
2025-08-07 21:16 ` ✓ Xe.CI.Full: " Patchwork