* [RFC 1/7] drm/xe/vm: Add CPU_AUTORESET_ACTIVE VMA flag
2026-02-19 9:13 [RFC 0/7] drm/xe/svm: Add MMU notifier-based madvise autoreset on munmap Arvind Yadav
@ 2026-02-19 9:13 ` Arvind Yadav
2026-02-19 9:13 ` [RFC 2/7] drm/xe/vm: Preserve CPU_AUTORESET_ACTIVE across GPUVA operations Arvind Yadav
` (9 subsequent siblings)
10 siblings, 0 replies; 19+ messages in thread
From: Arvind Yadav @ 2026-02-19 9:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom
Add XE_VMA_CPU_AUTORESET_ACTIVE to track whether a
MADVISE_AUTORESET CPU address mirror VMA has been GPU-touched.
The flag is set at bind time and cleared on first GPU fault,
creating a one-way transition from CPU-only to GPU-touched state.
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_vm.h | 5 +++++
drivers/gpu/drm/xe/xe_vm_types.h | 8 ++++++++
2 files changed, 13 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 288115c7844a..7bf400f068ce 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -174,6 +174,11 @@ static inline bool xe_vma_is_userptr(struct xe_vma *vma)
!xe_vma_is_cpu_addr_mirror(vma);
}
+static inline bool xe_vma_has_cpu_autoreset_active(struct xe_vma *vma)
+{
+ return vma->gpuva.flags & XE_VMA_CPU_AUTORESET_ACTIVE;
+}
+
struct xe_vma *xe_vm_find_vma_by_addr(struct xe_vm *vm, u64 page_addr);
int xe_vma_need_vram_for_atomic(struct xe_device *xe, struct xe_vma *vma, bool is_atomic);
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 43203e90ee3e..29ff63503d4c 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -50,6 +50,14 @@ struct xe_vm_pgtable_update_op;
#define XE_VMA_DUMPABLE (DRM_GPUVA_USERBITS << 8)
#define XE_VMA_SYSTEM_ALLOCATOR (DRM_GPUVA_USERBITS << 9)
#define XE_VMA_MADV_AUTORESET (DRM_GPUVA_USERBITS << 10)
+/*
+ * CPU-only runtime state for MADV_AUTORESET VMAs.
+ *
+ * Set at bind time and cleared before the first GPU PTEs are installed.
+ * Used to distinguish CPU-only VMAs from GPU-touched ones when handling
+ * munmap events.
+ */
+#define XE_VMA_CPU_AUTORESET_ACTIVE (DRM_GPUVA_USERBITS << 11)
/**
* struct xe_vma_mem_attr - memory attributes associated with vma
--
2.43.0
^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC 2/7] drm/xe/vm: Preserve CPU_AUTORESET_ACTIVE across GPUVA operations
2026-02-19 9:13 [RFC 0/7] drm/xe/svm: Add MMU notifier-based madvise autoreset on munmap Arvind Yadav
2026-02-19 9:13 ` [RFC 1/7] drm/xe/vm: Add CPU_AUTORESET_ACTIVE VMA flag Arvind Yadav
@ 2026-02-19 9:13 ` Arvind Yadav
2026-02-19 9:13 ` [RFC 3/7] drm/xe/svm: Clear CPU_AUTORESET_ACTIVE on first GPU fault Arvind Yadav
` (8 subsequent siblings)
10 siblings, 0 replies; 19+ messages in thread
From: Arvind Yadav @ 2026-02-19 9:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom
GPUVA split/merge operations rebuild VMA flags from XE_VMA_CREATE_MASK.
While this preserves XE_VMA_MADV_AUTORESET, it drops runtime-only state
such as XE_VMA_CPU_AUTORESET_ACTIVE.
Preserve CPU_AUTORESET_ACTIVE when creating new VMAs during MAP/REMAP
so the CPU-only vs GPU-touched state survives VMA transformations.
Without this, split VMAs would lose their CPU-only state and be
incorrectly treated as GPU-touched.
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_vm.c | 28 +++++++++++++++++++++++++---
1 file changed, 25 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 8fe54a998385..152ee355e5c3 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2350,8 +2350,10 @@ vm_bind_ioctl_ops_create(struct xe_vm *vm, struct xe_vma_ops *vops,
op->map.vma_flags |= XE_VMA_SYSTEM_ALLOCATOR;
if (flags & DRM_XE_VM_BIND_FLAG_DUMPABLE)
op->map.vma_flags |= XE_VMA_DUMPABLE;
- if (flags & DRM_XE_VM_BIND_FLAG_MADVISE_AUTORESET)
+ if (flags & DRM_XE_VM_BIND_FLAG_MADVISE_AUTORESET) {
op->map.vma_flags |= XE_VMA_MADV_AUTORESET;
+ op->map.vma_flags |= XE_VMA_CPU_AUTORESET_ACTIVE;
+ }
op->map.pat_index = pat_index;
op->map.invalidate_on_bind =
__xe_vm_needs_clear_scratch_pages(vm, flags);
@@ -2668,6 +2670,9 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
};
flags |= op->map.vma_flags & XE_VMA_CREATE_MASK;
+ /* Preserve CPU_AUTORESET_ACTIVE (runtime-only). */
+ if (op->map.vma_flags & XE_VMA_CPU_AUTORESET_ACTIVE)
+ flags |= XE_VMA_CPU_AUTORESET_ACTIVE;
vma = new_vma(vm, &op->base.map, &default_attr,
flags);
@@ -2708,6 +2713,10 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
op->remap.range = xe_vma_size(old);
flags |= op->base.remap.unmap->va->flags & XE_VMA_CREATE_MASK;
+ /* Preserve CPU_AUTORESET_ACTIVE (runtime-only). */
+ if (op->base.remap.unmap->va->flags & XE_VMA_CPU_AUTORESET_ACTIVE)
+ flags |= XE_VMA_CPU_AUTORESET_ACTIVE;
+
if (op->base.remap.prev) {
vma = new_vma(vm, op->base.remap.prev,
&old->attr, flags);
@@ -4409,19 +4418,28 @@ static int xe_vm_alloc_vma(struct xe_vm *vm,
if (!is_madvise) {
if (__op->op == DRM_GPUVA_OP_UNMAP) {
vma = gpuva_to_vma(op->base.unmap.va);
- XE_WARN_ON(!xe_vma_has_default_mem_attrs(vma));
+ /*
+ * For CPU_AUTORESET_ACTIVE VMAs, attributes may be mid-reset and
+ * thus temporarily non-default.
+ */
+ XE_WARN_ON(!xe_vma_has_default_mem_attrs(vma) &&
+ !(vma->gpuva.flags & XE_VMA_CPU_AUTORESET_ACTIVE));
default_pat = vma->attr.default_pat_index;
vma_flags = vma->gpuva.flags;
}
if (__op->op == DRM_GPUVA_OP_REMAP) {
vma = gpuva_to_vma(op->base.remap.unmap->va);
- default_pat = vma->attr.default_pat_index;
+ /* Preserve current PAT index, not default, for remap */
+ default_pat = vma->attr.pat_index;
vma_flags = vma->gpuva.flags;
}
if (__op->op == DRM_GPUVA_OP_MAP) {
op->map.vma_flags |= vma_flags & XE_VMA_CREATE_MASK;
+ /* Preserve CPU_AUTORESET_ACTIVE (runtime-only). */
+ if (vma_flags & XE_VMA_CPU_AUTORESET_ACTIVE)
+ op->map.vma_flags |= XE_VMA_CPU_AUTORESET_ACTIVE;
op->map.pat_index = default_pat;
}
} else {
@@ -4434,6 +4452,7 @@ static int xe_vm_alloc_vma(struct xe_vm *vm,
}
if (__op->op == DRM_GPUVA_OP_MAP) {
+ /* Madvise MAP follows REMAP (split/merge). */
xe_assert(vm->xe, remap_op);
remap_op = false;
/*
@@ -4443,6 +4462,9 @@ static int xe_vm_alloc_vma(struct xe_vm *vm,
* unmapping.
*/
op->map.vma_flags |= vma_flags & XE_VMA_CREATE_MASK;
+ /* Preserve CPU_AUTORESET_ACTIVE (not in CREATE_MASK). */
+ if (vma_flags & XE_VMA_CPU_AUTORESET_ACTIVE)
+ op->map.vma_flags |= XE_VMA_CPU_AUTORESET_ACTIVE;
}
}
print_op(vm->xe, __op);
--
2.43.0
^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC 3/7] drm/xe/svm: Clear CPU_AUTORESET_ACTIVE on first GPU fault
2026-02-19 9:13 [RFC 0/7] drm/xe/svm: Add MMU notifier-based madvise autoreset on munmap Arvind Yadav
2026-02-19 9:13 ` [RFC 1/7] drm/xe/vm: Add CPU_AUTORESET_ACTIVE VMA flag Arvind Yadav
2026-02-19 9:13 ` [RFC 2/7] drm/xe/vm: Preserve CPU_AUTORESET_ACTIVE across GPUVA operations Arvind Yadav
@ 2026-02-19 9:13 ` Arvind Yadav
2026-02-20 20:12 ` Matthew Brost
2026-02-19 9:13 ` [RFC 4/7] drm/xe/vm: Add madvise autoreset interval notifier worker infrastructure Arvind Yadav
` (7 subsequent siblings)
10 siblings, 1 reply; 19+ messages in thread
From: Arvind Yadav @ 2026-02-19 9:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom
Clear XE_VMA_CPU_AUTORESET_ACTIVE before installing GPU PTEs for CPU
address mirror VMAs.
This marks the one-way transition from CPU-only to GPU-touched so munmap
handling can switch from the MADVISE autoreset notifier to the existing
SVM notifier.
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_svm.c | 10 ++++++++++
drivers/gpu/drm/xe/xe_vm.h | 11 +++++++++++
2 files changed, 21 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index cda3bf7e2418..b9dbbb245779 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -1209,6 +1209,9 @@ static int __xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
lockdep_assert_held_write(&vm->lock);
xe_assert(vm->xe, xe_vma_is_cpu_addr_mirror(vma));
+ /* Invariant: CPU_AUTORESET_ACTIVE cleared before reaching here. */
+ WARN_ON_ONCE(xe_vma_has_cpu_autoreset_active(vma));
+
xe_gt_stats_incr(gt, XE_GT_STATS_ID_SVM_PAGEFAULT_COUNT, 1);
retry:
@@ -1360,6 +1363,13 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
bool atomic)
{
int need_vram, ret;
+
+ lockdep_assert_held_write(&vm->lock);
+
+ /* Transition CPU-only -> GPU-touched before installing PTEs. */
+ if (xe_vma_has_cpu_autoreset_active(vma))
+ xe_vma_gpu_touch(vma);
+
retry:
need_vram = xe_vma_need_vram_for_atomic(vm->xe, vma, atomic);
if (need_vram < 0)
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 7bf400f068ce..3dc549550c91 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -423,4 +423,15 @@ static inline struct drm_exec *xe_vm_validation_exec(struct xe_vm *vm)
((READ_ONCE(tile_present) & ~READ_ONCE(tile_invalidated)) & BIT((tile)->id))
void xe_vma_mem_attr_copy(struct xe_vma_mem_attr *to, struct xe_vma_mem_attr *from);
+
+/**
+ * xe_vma_gpu_touch() - Mark VMA as GPU-touched
+ * @vma: VMA to mark
+ *
+ * Clear XE_VMA_CPU_AUTORESET_ACTIVE. Must be done before first GPU PTE install.
+ */
+static inline void xe_vma_gpu_touch(struct xe_vma *vma)
+{
+ vma->gpuva.flags &= ~XE_VMA_CPU_AUTORESET_ACTIVE;
+}
#endif
--
2.43.0
^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [RFC 3/7] drm/xe/svm: Clear CPU_AUTORESET_ACTIVE on first GPU fault
2026-02-19 9:13 ` [RFC 3/7] drm/xe/svm: Clear CPU_AUTORESET_ACTIVE on first GPU fault Arvind Yadav
@ 2026-02-20 20:12 ` Matthew Brost
2026-02-20 22:33 ` Matthew Brost
0 siblings, 1 reply; 19+ messages in thread
From: Matthew Brost @ 2026-02-20 20:12 UTC (permalink / raw)
To: Arvind Yadav; +Cc: intel-xe, himal.prasad.ghimiray, thomas.hellstrom
On Thu, Feb 19, 2026 at 02:43:08PM +0530, Arvind Yadav wrote:
> Clear XE_VMA_CPU_AUTORESET_ACTIVE before installing GPU PTEs for CPU
> address mirror VMAs.
>
> This marks the one-way transition from CPU-only to GPU-touched so munmap
> handling can switch from the MADVISE autoreset notifier to the existing
> SVM notifier.
>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> ---
> drivers/gpu/drm/xe/xe_svm.c | 10 ++++++++++
> drivers/gpu/drm/xe/xe_vm.h | 11 +++++++++++
> 2 files changed, 21 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index cda3bf7e2418..b9dbbb245779 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -1209,6 +1209,9 @@ static int __xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
> lockdep_assert_held_write(&vm->lock);
> xe_assert(vm->xe, xe_vma_is_cpu_addr_mirror(vma));
>
> + /* Invariant: CPU_AUTORESET_ACTIVE cleared before reaching here. */
> + WARN_ON_ONCE(xe_vma_has_cpu_autoreset_active(vma));
> +
> xe_gt_stats_incr(gt, XE_GT_STATS_ID_SVM_PAGEFAULT_COUNT, 1);
>
> retry:
> @@ -1360,6 +1363,13 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
> bool atomic)
> {
> int need_vram, ret;
> +
> + lockdep_assert_held_write(&vm->lock);
> +
> + /* Transition CPU-only -> GPU-touched before installing PTEs. */
> + if (xe_vma_has_cpu_autoreset_active(vma))
> + xe_vma_gpu_touch(vma);
I don’t think this will work going forward. I plan on making the fault
handler run under vm->lock in read mode [1], and VMA state will only be
allowed to be modified under the lockdep constraints in [2], which are
vm->lock in write mode or vm->lock in read mode plus the garbage
collector lock. Maybe this is fine for now, and we can rework it once
[1] and [2] land—most likely by taking the garbage collector mutex
introduced in [1] before touching this VMA’s flags.
Another issue is what happens if we don’t want to taint the VMA unless
we actually fault in a range. It is valid to not find a range to fault
in if this is a prefetch, as that fault just gets suppressed on the
device. So at a minimum, this needs to be moved to where the function
returns zero (i.e., by the out label).
[1] https://gitlab.freedesktop.org/mbrost/xe-kernel-driver-svn-perf-6-15-2025/-/commit/08fa2b95800583e804a91caf477f9c30b3440a33#33e7d2d9323cd529c8d587d9d3801e353439d783_181_179
[2] https://gitlab.freedesktop.org/mbrost/xe-kernel-driver-svn-perf-6-15-2025/-/commit/08fa2b95800583e804a91caf477f9c30b3440a33#71bf077daec46f3ebd785235e8ecc786681aff99_1112_1128
> +
> retry:
> need_vram = xe_vma_need_vram_for_atomic(vm->xe, vma, atomic);
> if (need_vram < 0)
> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
> index 7bf400f068ce..3dc549550c91 100644
> --- a/drivers/gpu/drm/xe/xe_vm.h
> +++ b/drivers/gpu/drm/xe/xe_vm.h
> @@ -423,4 +423,15 @@ static inline struct drm_exec *xe_vm_validation_exec(struct xe_vm *vm)
> ((READ_ONCE(tile_present) & ~READ_ONCE(tile_invalidated)) & BIT((tile)->id))
>
> void xe_vma_mem_attr_copy(struct xe_vma_mem_attr *to, struct xe_vma_mem_attr *from);
> +
> +/**
> + * xe_vma_gpu_touch() - Mark VMA as GPU-touched
> + * @vma: VMA to mark
> + *
> + * Clear XE_VMA_CPU_AUTORESET_ACTIVE. Must be done before first GPU PTE install.
> + */
> +static inline void xe_vma_gpu_touch(struct xe_vma *vma)
> +{
> + vma->gpuva.flags &= ~XE_VMA_CPU_AUTORESET_ACTIVE;
Thinking out loud — not strictly related to your series, but I think we
should route all accesses to vma->gpuva.flags through helpers with
lockdep annotations to prove we aren’t violating the rules I mentioned
above (right now this would just require vm->lock in write mode).
Perhaps when I post [1] and [2], I can clean all of that up, or if you
want to transition all access to vma->gpuva.flags to helpers (e.g.,
xe_vma_write_flags(vma, mask), xe_vma_read_flags(vma, mask),
xe_vma_clear_flags(vma, mask)), I wouldn’t complain.
Matt
> +}
> #endif
> --
> 2.43.0
>
^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC 3/7] drm/xe/svm: Clear CPU_AUTORESET_ACTIVE on first GPU fault
2026-02-20 20:12 ` Matthew Brost
@ 2026-02-20 22:33 ` Matthew Brost
2026-03-05 3:38 ` Yadav, Arvind
0 siblings, 1 reply; 19+ messages in thread
From: Matthew Brost @ 2026-02-20 22:33 UTC (permalink / raw)
To: Arvind Yadav; +Cc: intel-xe, himal.prasad.ghimiray, thomas.hellstrom
On Fri, Feb 20, 2026 at 12:12:32PM -0800, Matthew Brost wrote:
> On Thu, Feb 19, 2026 at 02:43:08PM +0530, Arvind Yadav wrote:
> > Clear XE_VMA_CPU_AUTORESET_ACTIVE before installing GPU PTEs for CPU
> > address mirror VMAs.
> >
> > This marks the one-way transition from CPU-only to GPU-touched so munmap
> > handling can switch from the MADVISE autoreset notifier to the existing
> > SVM notifier.
> >
> > Cc: Matthew Brost <matthew.brost@intel.com>
> > Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> > Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> > ---
> > drivers/gpu/drm/xe/xe_svm.c | 10 ++++++++++
> > drivers/gpu/drm/xe/xe_vm.h | 11 +++++++++++
> > 2 files changed, 21 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> > index cda3bf7e2418..b9dbbb245779 100644
> > --- a/drivers/gpu/drm/xe/xe_svm.c
> > +++ b/drivers/gpu/drm/xe/xe_svm.c
> > @@ -1209,6 +1209,9 @@ static int __xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
> > lockdep_assert_held_write(&vm->lock);
> > xe_assert(vm->xe, xe_vma_is_cpu_addr_mirror(vma));
> >
> > + /* Invariant: CPU_AUTORESET_ACTIVE cleared before reaching here. */
> > + WARN_ON_ONCE(xe_vma_has_cpu_autoreset_active(vma));
> > +
> > xe_gt_stats_incr(gt, XE_GT_STATS_ID_SVM_PAGEFAULT_COUNT, 1);
> >
> > retry:
> > @@ -1360,6 +1363,13 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
> > bool atomic)
> > {
> > int need_vram, ret;
> > +
> > + lockdep_assert_held_write(&vm->lock);
> > +
> > + /* Transition CPU-only -> GPU-touched before installing PTEs. */
> > + if (xe_vma_has_cpu_autoreset_active(vma))
> > + xe_vma_gpu_touch(vma);
>
> I don’t think this will work going forward. I plan on making the fault
> handler run under vm->lock in read mode [1], and VMA state will only be
> allowed to be modified under the lockdep constraints in [2], which are
> vm->lock in write mode or vm->lock in read mode plus the garbage
> collector lock. Maybe this is fine for now, and we can rework it once
> [1] and [2] land—most likely by taking the garbage collector mutex
> introduced in [1] before touching this VMA’s flags.
>
> Another issue is what happens if we don’t want to taint the VMA unless
> we actually fault in a range. It is valid to not find a range to fault
> in if this is a prefetch, as that fault just gets suppressed on the
> device. So at a minimum, this needs to be moved to where the function
> returns zero (i.e., by the out label).
>
> [1] https://gitlab.freedesktop.org/mbrost/xe-kernel-driver-svn-perf-6-15-2025/-/commit/08fa2b95800583e804a91caf477f9c30b3440a33#33e7d2d9323cd529c8d587d9d3801e353439d783_181_179
> [2] https://gitlab.freedesktop.org/mbrost/xe-kernel-driver-svn-perf-6-15-2025/-/commit/08fa2b95800583e804a91caf477f9c30b3440a33#71bf077daec46f3ebd785235e8ecc786681aff99_1112_1128
>
>
> > +
> > retry:
> > need_vram = xe_vma_need_vram_for_atomic(vm->xe, vma, atomic);
> > if (need_vram < 0)
> > diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
> > index 7bf400f068ce..3dc549550c91 100644
> > --- a/drivers/gpu/drm/xe/xe_vm.h
> > +++ b/drivers/gpu/drm/xe/xe_vm.h
> > @@ -423,4 +423,15 @@ static inline struct drm_exec *xe_vm_validation_exec(struct xe_vm *vm)
> > ((READ_ONCE(tile_present) & ~READ_ONCE(tile_invalidated)) & BIT((tile)->id))
> >
> > void xe_vma_mem_attr_copy(struct xe_vma_mem_attr *to, struct xe_vma_mem_attr *from);
> > +
> > +/**
> > + * xe_vma_gpu_touch() - Mark VMA as GPU-touched
> > + * @vma: VMA to mark
> > + *
> > + * Clear XE_VMA_CPU_AUTORESET_ACTIVE. Must be done before first GPU PTE install.
> > + */
> > +static inline void xe_vma_gpu_touch(struct xe_vma *vma)
> > +{
> > + vma->gpuva.flags &= ~XE_VMA_CPU_AUTORESET_ACTIVE;
>
> Thinking out loud — not strictly related to your series, but I think we
> should route all accesses to vma->gpuva.flags through helpers with
> lockdep annotations to prove we aren’t violating the rules I mentioned
> above (right now this would just require vm->lock in write mode).
>
Actually I looked at this and exec IOCTLs can set some of the
vma->gpuva.flags with vm->lock held in read mode only, so we basically
don't have any rules for vma->gpuva.flags and it all happens to work by
chance. So at minimum here let's define some clear rules w/ lockdep for
XE_VMA_CPU_AUTORESET_ACTIVE; if that needs to be moved out of
vma->gpuva.flags for now, that would be my preference, as we shouldn't
make the usage of vma->gpuva.flags worse.
Matt
> Perhaps when I post [1] and [2], I can clean all of that up, or if you
> want to transition all access to vma->gpuva.flags to helpers (e.g.,
> xe_vma_write_flags(vma, mask), xe_vma_read_flags(vma, mask),
> xe_vma_clear_flags(vma, mask)), I wouldn’t complain.
>
> Matt
>
> > +}
> > #endif
> > --
> > 2.43.0
> >
^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC 3/7] drm/xe/svm: Clear CPU_AUTORESET_ACTIVE on first GPU fault
2026-02-20 22:33 ` Matthew Brost
@ 2026-03-05 3:38 ` Yadav, Arvind
0 siblings, 0 replies; 19+ messages in thread
From: Yadav, Arvind @ 2026-03-05 3:38 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, himal.prasad.ghimiray, thomas.hellstrom
On 21-02-2026 04:03, Matthew Brost wrote:
> On Fri, Feb 20, 2026 at 12:12:32PM -0800, Matthew Brost wrote:
>> On Thu, Feb 19, 2026 at 02:43:08PM +0530, Arvind Yadav wrote:
>>> Clear XE_VMA_CPU_AUTORESET_ACTIVE before installing GPU PTEs for CPU
>>> address mirror VMAs.
>>>
>>> This marks the one-way transition from CPU-only to GPU-touched so munmap
>>> handling can switch from the MADVISE autoreset notifier to the existing
>>> SVM notifier.
>>>
>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>>> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
>>> ---
>>> drivers/gpu/drm/xe/xe_svm.c | 10 ++++++++++
>>> drivers/gpu/drm/xe/xe_vm.h | 11 +++++++++++
>>> 2 files changed, 21 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
>>> index cda3bf7e2418..b9dbbb245779 100644
>>> --- a/drivers/gpu/drm/xe/xe_svm.c
>>> +++ b/drivers/gpu/drm/xe/xe_svm.c
>>> @@ -1209,6 +1209,9 @@ static int __xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
>>> lockdep_assert_held_write(&vm->lock);
>>> xe_assert(vm->xe, xe_vma_is_cpu_addr_mirror(vma));
>>>
>>> + /* Invariant: CPU_AUTORESET_ACTIVE cleared before reaching here. */
>>> + WARN_ON_ONCE(xe_vma_has_cpu_autoreset_active(vma));
>>> +
>>> xe_gt_stats_incr(gt, XE_GT_STATS_ID_SVM_PAGEFAULT_COUNT, 1);
>>>
>>> retry:
>>> @@ -1360,6 +1363,13 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
>>> bool atomic)
>>> {
>>> int need_vram, ret;
>>> +
>>> + lockdep_assert_held_write(&vm->lock);
>>> +
>>> + /* Transition CPU-only -> GPU-touched before installing PTEs. */
>>> + if (xe_vma_has_cpu_autoreset_active(vma))
>>> + xe_vma_gpu_touch(vma);
>> I don’t think this will work going forward. I plan on making the fault
>> handler run under vm->lock in read mode [1], and VMA state will only be
>> allowed to be modified under the lockdep constraints in [2], which are
>> vm->lock in write mode or vm->lock in read mode plus the garbage
>> collector lock. Maybe this is fine for now, and we can rework it once
>> [1] and [2] land—most likely by taking the garbage collector mutex
>> introduced in [1] before touching this VMA’s flags.
>>
>> Another issue is what happens if we don’t want to taint the VMA unless
>> we actually fault in a range. It is valid to not find a range to fault
>> in if this is a prefetch, as that fault just gets suppressed on the
>> device. So at a minimum, this needs to be moved to where the function
>> returns zero (i.e., by the out label).
>>
>> [1] https://gitlab.freedesktop.org/mbrost/xe-kernel-driver-svn-perf-6-15-2025/-/commit/08fa2b95800583e804a91caf477f9c30b3440a33#33e7d2d9323cd529c8d587d9d3801e353439d783_181_179
>> [2] https://gitlab.freedesktop.org/mbrost/xe-kernel-driver-svn-perf-6-15-2025/-/commit/08fa2b95800583e804a91caf477f9c30b3440a33#71bf077daec46f3ebd785235e8ecc786681aff99_1112_1128
>>
Good catch on both. I will move the gpu_touch to after
__xe_svm_handle_pagefault() returns 0 so we do not taint the VMA on
suppressed prefetch faults. For the future locking rework, happy to
revisit once [1] and [2] land, likely taking the GC mutex there as you
mentioned.
>>> +
>>> retry:
>>> need_vram = xe_vma_need_vram_for_atomic(vm->xe, vma, atomic);
>>> if (need_vram < 0)
>>> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
>>> index 7bf400f068ce..3dc549550c91 100644
>>> --- a/drivers/gpu/drm/xe/xe_vm.h
>>> +++ b/drivers/gpu/drm/xe/xe_vm.h
>>> @@ -423,4 +423,15 @@ static inline struct drm_exec *xe_vm_validation_exec(struct xe_vm *vm)
>>> ((READ_ONCE(tile_present) & ~READ_ONCE(tile_invalidated)) & BIT((tile)->id))
>>>
>>> void xe_vma_mem_attr_copy(struct xe_vma_mem_attr *to, struct xe_vma_mem_attr *from);
>>> +
>>> +/**
>>> + * xe_vma_gpu_touch() - Mark VMA as GPU-touched
>>> + * @vma: VMA to mark
>>> + *
>>> + * Clear XE_VMA_CPU_AUTORESET_ACTIVE. Must be done before first GPU PTE install.
>>> + */
>>> +static inline void xe_vma_gpu_touch(struct xe_vma *vma)
>>> +{
>>> + vma->gpuva.flags &= ~XE_VMA_CPU_AUTORESET_ACTIVE;
>> Thinking out loud — not strictly related to your series, but I think we
>> should route all accesses to vma->gpuva.flags through helpers with
>> lockdep annotations to prove we aren’t violating the rules I mentioned
>> above (right now this would just require vm->lock in write mode).
>>
> Actually I looked at this and exec IOCTLs can set some of the
> vma->gpuva.flags in vm->lock read mode only, so we basically don't have
> any rules for vma->gpuva.flags and it all happens to work by chance. So
> at minimum here let's define some clear rules /w lockdep for
> XE_VMA_CPU_AUTORESET_ACTIVE, if that needs to be moved out
> vma->gpuva.flags for now that would be my preference as we shouldn't
> make the usage of vma->gpuva.flags worse.
I will move it out of vma->gpuva.flags into a dedicated bool
cpu_autoreset_active in struct xe_vma with an explicit lockdep comment
(write: vm->lock in write mode; read: vm->lock in read mode).
XE_VMA_CPU_AUTORESET_ACTIVE is then kept only as a pipeline bit in
op->map.vma_flags to carry state through MAP/REMAP ops; it is never
stored in vma->gpuva.flags.
>
> Matt
>
>> Perhaps when I post [1] and [2], I can clean all of that up, or if you
>> want to transition all access to vma->gpuva.flags to helpers (e.g.,
>> xe_vma_write_flags(vma, mask), xe_vma_read_flags(vma, mask),
>> xe_vma_clear_flags(vma, mask)), I wouldn’t complain.
We can also add the xe_vma_write_flags/read_flags/clear_flags wrappers
in this series if you want, or leave that for your locking rework,
whichever you prefer.
Thanks,
Arvind
>>
>> Matt
>>
>>> +}
>>> #endif
>>> --
>>> 2.43.0
>>>
^ permalink raw reply [flat|nested] 19+ messages in thread
* [RFC 4/7] drm/xe/vm: Add madvise autoreset interval notifier worker infrastructure
2026-02-19 9:13 [RFC 0/7] drm/xe/svm: Add MMU notifier-based madvise autoreset on munmap Arvind Yadav
` (2 preceding siblings ...)
2026-02-19 9:13 ` [RFC 3/7] drm/xe/svm: Clear CPU_AUTORESET_ACTIVE on first GPU fault Arvind Yadav
@ 2026-02-19 9:13 ` Arvind Yadav
2026-02-25 23:34 ` Matthew Brost
2026-02-19 9:13 ` [RFC 5/7] drm/xe/vm: Deactivate madvise notifier on GPU touch Arvind Yadav
` (6 subsequent siblings)
10 siblings, 1 reply; 19+ messages in thread
From: Arvind Yadav @ 2026-02-19 9:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom
MADVISE_AUTORESET needs to reset VMA attributes when userspace unmaps
CPU-only ranges, but the MMU invalidate callback cannot take vm->lock
due to lock ordering (mmap_lock is already held).
Add mmu_interval_notifier that queues work items for MMU_NOTIFY_UNMAP
events. The worker runs under vm->lock and resets attributes for VMAs
still marked XE_VMA_CPU_AUTORESET_ACTIVE (i.e., not yet GPU-touched).
Work items are allocated from a mempool to handle atomic context in the
callback. The notifier is deactivated when GPU touches the VMA.
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_vm_madvise.c | 394 +++++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_vm_madvise.h | 8 +
drivers/gpu/drm/xe/xe_vm_types.h | 41 +++
3 files changed, 443 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index 52147f5eaaa0..4c0ffb100bcc 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -6,9 +6,12 @@
#include "xe_vm_madvise.h"
#include <linux/nospec.h>
+#include <linux/mempool.h>
+#include <linux/workqueue.h>
#include <drm/xe_drm.h>
#include "xe_bo.h"
+#include "xe_macros.h"
#include "xe_pat.h"
#include "xe_pt.h"
#include "xe_svm.h"
@@ -500,3 +503,394 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
xe_vm_put(vm);
return err;
}
+
+/**
+ * struct xe_madvise_work_item - Work item for unmap processing
+ * @work: work_struct
+ * @vm: VM reference
+ * @pool: Mempool for recycling
+ * @start: Start address
+ * @end: End address
+ */
+struct xe_madvise_work_item {
+ struct work_struct work;
+ struct xe_vm *vm;
+ mempool_t *pool;
+ u64 start;
+ u64 end;
+};
+
+static void xe_vma_set_default_attributes(struct xe_vma *vma)
+{
+ vma->attr.preferred_loc.devmem_fd = DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE;
+ vma->attr.preferred_loc.migration_policy = DRM_XE_MIGRATE_ALL_PAGES;
+ vma->attr.pat_index = vma->attr.default_pat_index;
+ vma->attr.atomic_access = DRM_XE_ATOMIC_UNDEFINED;
+}
+
+/**
+ * xe_vm_madvise_process_unmap - Process munmap for all VMAs in range
+ * @vm: VM
+ * @start: Start of unmap range
+ * @end: End of unmap range
+ *
+ * Processes all VMAs overlapping the unmap range. An unmap can span multiple
+ * VMAs, so we need to loop and process each segment.
+ *
+ * Return: 0 on success, negative error otherwise
+ */
+static int xe_vm_madvise_process_unmap(struct xe_vm *vm, u64 start, u64 end)
+{
+ u64 addr = start;
+ int err;
+
+ lockdep_assert_held_write(&vm->lock);
+
+ if (xe_vm_is_closed_or_banned(vm))
+ return 0;
+
+ while (addr < end) {
+ struct xe_vma *vma;
+ u64 seg_start, seg_end;
+ bool has_default_attr;
+
+ vma = xe_vm_find_overlapping_vma(vm, addr, end);
+ if (!vma)
+ break;
+
+ /* Skip GPU-touched VMAs - SVM handles them */
+ if (!xe_vma_has_cpu_autoreset_active(vma)) {
+ addr = xe_vma_end(vma);
+ continue;
+ }
+
+ has_default_attr = xe_vma_has_default_mem_attrs(vma);
+ seg_start = max(addr, xe_vma_start(vma));
+ seg_end = min(end, xe_vma_end(vma));
+
+ /* Expand for merging if VMA already has default attrs */
+ if (has_default_attr &&
+ xe_vma_start(vma) >= start &&
+ xe_vma_end(vma) <= end) {
+ seg_start = xe_vma_start(vma);
+ seg_end = xe_vma_end(vma);
+ xe_vm_find_cpu_addr_mirror_vma_range(vm, &seg_start, &seg_end);
+ } else if (xe_vma_start(vma) == seg_start && xe_vma_end(vma) == seg_end) {
+ xe_vma_set_default_attributes(vma);
+ addr = seg_end;
+ continue;
+ }
+
+ if (xe_vma_start(vma) == seg_start &&
+ xe_vma_end(vma) == seg_end &&
+ has_default_attr) {
+ addr = seg_end;
+ continue;
+ }
+
+ err = xe_vm_alloc_cpu_addr_mirror_vma(vm, seg_start, seg_end - seg_start);
+ if (err) {
+ if (err == -ENOENT) {
+ addr = seg_end;
+ continue;
+ }
+ return err;
+ }
+
+ addr = seg_end;
+ }
+
+ return 0;
+}
+
+/**
+ * xe_madvise_work_func - Worker to process unmap
+ * @w: work_struct
+ *
+ * Processes a single unmap by taking vm->lock and calling the helper.
+ * Each unmap has its own work item, so no interval loss.
+ */
+static void xe_madvise_work_func(struct work_struct *w)
+{
+ struct xe_madvise_work_item *item = container_of(w, struct xe_madvise_work_item, work);
+ struct xe_vm *vm = item->vm;
+ int err;
+
+ down_write(&vm->lock);
+ err = xe_vm_madvise_process_unmap(vm, item->start, item->end);
+ if (err)
+ drm_warn(&vm->xe->drm,
+ "madvise autoreset failed [%#llx-%#llx]: %d\n",
+ item->start, item->end, err);
+ /*
+ * Best-effort: Log failure and continue.
+ * Core correctness from CPU_AUTORESET_ACTIVE flag.
+ */
+ up_write(&vm->lock);
+ xe_vm_put(vm);
+ mempool_free(item, item->pool);
+}
+
+/**
+ * xe_madvise_notifier_callback - MMU notifier callback for CPU munmap
+ * @mni: mmu_interval_notifier
+ * @range: mmu_notifier_range
+ * @cur_seq: current sequence number
+ *
+ * Queues work to reset VMA attributes. Cannot take vm->lock (circular locking),
+ * so uses workqueue. GFP_ATOMIC allocation may fail; drops event if so.
+ *
+ * Return: true (never blocks)
+ */
+static bool xe_madvise_notifier_callback(struct mmu_interval_notifier *mni,
+ const struct mmu_notifier_range *range,
+ unsigned long cur_seq)
+{
+ struct xe_madvise_notifier *notifier =
+ container_of(mni, struct xe_madvise_notifier, mmu_notifier);
+ struct xe_vm *vm = notifier->vm;
+ struct xe_madvise_work_item *item;
+ struct workqueue_struct *wq;
+ mempool_t *pool;
+ u64 start, end;
+
+ if (range->event != MMU_NOTIFY_UNMAP)
+ return true;
+
+ /*
+ * Best-effort: skip in non-blockable contexts to avoid building up work.
+ * Correctness does not rely on this notifier - CPU_AUTORESET_ACTIVE flag
+ * prevents GPU PTE zaps on CPU-only VMAs in the zap path.
+ */
+ if (!mmu_notifier_range_blockable(range))
+ return true;
+
+ /* Consume seq (interval-notifier convention) */
+ mmu_interval_set_seq(mni, cur_seq);
+
+ /* Best-effort: core correctness from CPU_AUTORESET_ACTIVE check in zap path */
+
+ start = max_t(u64, range->start, notifier->vma_start);
+ end = min_t(u64, range->end, notifier->vma_end);
+
+ if (start >= end)
+ return true;
+
+ pool = READ_ONCE(vm->svm.madvise_work.pool);
+ wq = READ_ONCE(vm->svm.madvise_work.wq);
+ if (!pool || !wq || atomic_read(&vm->svm.madvise_work.closing))
+ return true;
+
+ /* GFP_ATOMIC to avoid fs_reclaim lockdep in notifier context */
+ item = mempool_alloc(pool, GFP_ATOMIC);
+ if (!item)
+ return true;
+
+ memset(item, 0, sizeof(*item));
+ INIT_WORK(&item->work, xe_madvise_work_func);
+ item->vm = xe_vm_get(vm);
+ item->pool = pool;
+ item->start = start;
+ item->end = end;
+
+ if (unlikely(atomic_read(&vm->svm.madvise_work.closing))) {
+ xe_vm_put(item->vm);
+ mempool_free(item, pool);
+ return true;
+ }
+
+ queue_work(wq, &item->work);
+
+ return true;
+}
+
+static const struct mmu_interval_notifier_ops xe_madvise_notifier_ops = {
+ .invalidate = xe_madvise_notifier_callback,
+};
+
+/**
+ * xe_vm_madvise_init - Initialize madvise notifier infrastructure
+ * @vm: VM
+ *
+ * Sets up workqueue and mempool for async munmap processing.
+ *
+ * Return: 0 on success, -ENOMEM on failure
+ */
+int xe_vm_madvise_init(struct xe_vm *vm)
+{
+ struct workqueue_struct *wq;
+ mempool_t *pool;
+
+ /* Always initialize list and mutex - fini may be called on partial init */
+ INIT_LIST_HEAD(&vm->svm.madvise_notifiers.list);
+ mutex_init(&vm->svm.madvise_notifiers.lock);
+
+ wq = READ_ONCE(vm->svm.madvise_work.wq);
+ pool = READ_ONCE(vm->svm.madvise_work.pool);
+
+ /* Guard against double initialization and detect partial init */
+ if (wq || pool) {
+ XE_WARN_ON(!wq || !pool);
+ return 0;
+ }
+
+ WRITE_ONCE(vm->svm.madvise_work.wq, NULL);
+ WRITE_ONCE(vm->svm.madvise_work.pool, NULL);
+ atomic_set(&vm->svm.madvise_work.closing, 1);
+
+ /*
+ * WQ_UNBOUND: best-effort optimization, not critical path.
+ * No WQ_MEM_RECLAIM: worker allocates memory (VMA ops with GFP_KERNEL).
+ * Not on reclaim path - merely resets attributes after munmap.
+ */
+ vm->svm.madvise_work.wq = alloc_workqueue("xe_madvise", WQ_UNBOUND, 0);
+ if (!vm->svm.madvise_work.wq)
+ return -ENOMEM;
+
+ /* Mempool for GFP_ATOMIC allocs in notifier callback */
+ vm->svm.madvise_work.pool =
+ mempool_create_kmalloc_pool(64,
+ sizeof(struct xe_madvise_work_item));
+ if (!vm->svm.madvise_work.pool) {
+ destroy_workqueue(vm->svm.madvise_work.wq);
+ WRITE_ONCE(vm->svm.madvise_work.wq, NULL);
+ return -ENOMEM;
+ }
+
+ atomic_set(&vm->svm.madvise_work.closing, 0);
+
+ return 0;
+}
+
+/**
+ * xe_vm_madvise_fini - Cleanup all madvise notifiers
+ * @vm: VM
+ *
+ * Tears down notifiers and drains workqueue. Safe if init partially failed.
+ * Order: closing flag → remove notifiers (SRCU sync) → drain wq → destroy.
+ */
+void xe_vm_madvise_fini(struct xe_vm *vm)
+{
+ struct xe_madvise_notifier *notifier, *next;
+ struct workqueue_struct *wq;
+ mempool_t *pool;
+ LIST_HEAD(tmp);
+
+ atomic_set(&vm->svm.madvise_work.closing, 1);
+
+ /*
+ * Detach notifiers under lock, then remove outside lock (SRCU sync can be slow).
+ * Splice avoids holding mutex across mmu_interval_notifier_remove() SRCU sync.
+ * Removing notifiers first (before drain) prevents new invalidate callbacks.
+ */
+ mutex_lock(&vm->svm.madvise_notifiers.lock);
+ list_splice_init(&vm->svm.madvise_notifiers.list, &tmp);
+ mutex_unlock(&vm->svm.madvise_notifiers.lock);
+
+ /* Now remove notifiers without holding lock - mmu_interval_notifier_remove() SRCU-syncs */
+ list_for_each_entry_safe(notifier, next, &tmp, list) {
+ list_del(&notifier->list);
+ mmu_interval_notifier_remove(&notifier->mmu_notifier);
+ xe_vm_put(notifier->vm);
+ kfree(notifier);
+ }
+
+ /* Drain and destroy workqueue */
+ wq = xchg(&vm->svm.madvise_work.wq, NULL);
+ if (wq) {
+ drain_workqueue(wq);
+ destroy_workqueue(wq);
+ }
+
+ pool = xchg(&vm->svm.madvise_work.pool, NULL);
+ if (pool)
+ mempool_destroy(pool);
+}
+
+/**
+ * xe_vm_madvise_register_notifier_range - Register MMU notifier for address range
+ * @vm: VM
+ * @start: Start address (page-aligned)
+ * @end: End address (page-aligned)
+ *
+ * Registers interval notifier for munmap tracking. Uses addresses (not VMA pointers)
+ * to avoid UAF after dropping vm->lock. Deduplicates by range.
+ *
+ * Return: 0 on success, negative error code on failure
+ */
+int xe_vm_madvise_register_notifier_range(struct xe_vm *vm, u64 start, u64 end)
+{
+ struct xe_madvise_notifier *notifier, *existing;
+ int err;
+
+ if (!IS_ALIGNED(start, PAGE_SIZE) || !IS_ALIGNED(end, PAGE_SIZE))
+ return -EINVAL;
+
+ if (WARN_ON_ONCE(end <= start))
+ return -EINVAL;
+
+ if (atomic_read(&vm->svm.madvise_work.closing))
+ return -ENOENT;
+
+ if (!READ_ONCE(vm->svm.madvise_work.wq) ||
+ !READ_ONCE(vm->svm.madvise_work.pool))
+ return -ENOMEM;
+
+ /* Check mm early to avoid allocation if it's missing */
+ if (!vm->svm.gpusvm.mm)
+ return -EINVAL;
+
+ /* Dedupe: check if notifier exists for this range */
+ mutex_lock(&vm->svm.madvise_notifiers.lock);
+ list_for_each_entry(existing, &vm->svm.madvise_notifiers.list, list) {
+ if (existing->vma_start == start && existing->vma_end == end) {
+ mutex_unlock(&vm->svm.madvise_notifiers.lock);
+ return 0;
+ }
+ }
+ mutex_unlock(&vm->svm.madvise_notifiers.lock);
+
+ notifier = kzalloc(sizeof(*notifier), GFP_KERNEL);
+ if (!notifier)
+ return -ENOMEM;
+
+ notifier->vm = xe_vm_get(vm);
+ notifier->vma_start = start;
+ notifier->vma_end = end;
+ INIT_LIST_HEAD(&notifier->list);
+
+ err = mmu_interval_notifier_insert(&notifier->mmu_notifier,
+ vm->svm.gpusvm.mm,
+ start,
+ end - start,
+ &xe_madvise_notifier_ops);
+ if (err) {
+ xe_vm_put(notifier->vm);
+ kfree(notifier);
+ return err;
+ }
+
+ /* Re-check closing to avoid teardown race */
+ if (unlikely(atomic_read(&vm->svm.madvise_work.closing))) {
+ mmu_interval_notifier_remove(&notifier->mmu_notifier);
+ xe_vm_put(notifier->vm);
+ kfree(notifier);
+ return -ENOENT;
+ }
+
+ /* Add to list - check again for concurrent registration race */
+ mutex_lock(&vm->svm.madvise_notifiers.lock);
+ list_for_each_entry(existing, &vm->svm.madvise_notifiers.list, list) {
+ if (existing->vma_start == start && existing->vma_end == end) {
+ mutex_unlock(&vm->svm.madvise_notifiers.lock);
+ mmu_interval_notifier_remove(&notifier->mmu_notifier);
+ xe_vm_put(notifier->vm);
+ kfree(notifier);
+ return 0;
+ }
+ }
+ list_add(&notifier->list, &vm->svm.madvise_notifiers.list);
+ mutex_unlock(&vm->svm.madvise_notifiers.lock);
+
+ return 0;
+}
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.h b/drivers/gpu/drm/xe/xe_vm_madvise.h
index b0e1fc445f23..ba9cd7912113 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.h
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.h
@@ -6,10 +6,18 @@
#ifndef _XE_VM_MADVISE_H_
#define _XE_VM_MADVISE_H_
+#include <linux/types.h>
+
struct drm_device;
struct drm_file;
+struct xe_vm;
+struct xe_vma;
int xe_vm_madvise_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
+int xe_vm_madvise_init(struct xe_vm *vm);
+void xe_vm_madvise_fini(struct xe_vm *vm);
+int xe_vm_madvise_register_notifier_range(struct xe_vm *vm, u64 start, u64 end);
+
#endif
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 29ff63503d4c..eb978995000c 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -12,6 +12,7 @@
#include <linux/dma-resv.h>
#include <linux/kref.h>
+#include <linux/mempool.h>
#include <linux/mmu_notifier.h>
#include <linux/scatterlist.h>
@@ -29,6 +30,26 @@ struct xe_user_fence;
struct xe_vm;
struct xe_vm_pgtable_update_op;
+/**
+ * struct xe_madvise_notifier - CPU madvise notifier for memory attribute reset
+ *
+ * Tracks CPU munmap operations on SVM CPU address mirror VMAs.
+ * When userspace unmaps CPU memory, this notifier processes attribute reset
+ * via work queue to avoid circular locking (can't take vm->lock in callback).
+ */
+struct xe_madvise_notifier {
+ /** @mmu_notifier: MMU interval notifier */
+ struct mmu_interval_notifier mmu_notifier;
+ /** @vm: VM this notifier belongs to (holds reference via xe_vm_get) */
+ struct xe_vm *vm;
+ /** @vma_start: Start address of VMA being tracked */
+ u64 vma_start;
+ /** @vma_end: End address of VMA being tracked */
+ u64 vma_end;
+ /** @list: Link in vm->svm.madvise_notifiers.list */
+ struct list_head list;
+};
+
#if IS_ENABLED(CONFIG_DRM_XE_DEBUG)
#define TEST_VM_OPS_ERROR
#define FORCE_OP_ERROR BIT(31)
@@ -212,6 +233,26 @@ struct xe_vm {
struct xe_pagemap *pagemaps[XE_MAX_TILES_PER_DEVICE];
/** @svm.peer: Used for pagemap connectivity computations. */
struct drm_pagemap_peer peer;
+
+ /**
+ * @svm.madvise_notifiers: Active CPU madvise notifiers
+ */
+ struct {
+ /** @svm.madvise_notifiers.list: List of active notifiers */
+ struct list_head list;
+ /** @svm.madvise_notifiers.lock: Protects notifiers list */
+ struct mutex lock;
+ } madvise_notifiers;
+
+ /** @svm.madvise_work: Workqueue for async munmap processing */
+ struct {
+ /** @svm.madvise_work.wq: Workqueue */
+ struct workqueue_struct *wq;
+ /** @svm.madvise_work.pool: Mempool for work items */
+ mempool_t *pool;
+ /** @svm.madvise_work.closing: Teardown flag */
+ atomic_t closing;
+ } madvise_work;
} svm;
struct xe_device *xe;
--
2.43.0
^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [RFC 4/7] drm/xe/vm: Add madvise autoreset interval notifier worker infrastructure
2026-02-19 9:13 ` [RFC 4/7] drm/xe/vm: Add madvise autoreset interval notifier worker infrastructure Arvind Yadav
@ 2026-02-25 23:34 ` Matthew Brost
2026-03-09 7:07 ` Yadav, Arvind
0 siblings, 1 reply; 19+ messages in thread
From: Matthew Brost @ 2026-02-25 23:34 UTC (permalink / raw)
To: Arvind Yadav; +Cc: intel-xe, himal.prasad.ghimiray, thomas.hellstrom
On Thu, Feb 19, 2026 at 02:43:09PM +0530, Arvind Yadav wrote:
> MADVISE_AUTORESET needs to reset VMA attributes when userspace unmaps
> CPU-only ranges, but the MMU invalidate callback cannot take vm->lock
> due to lock ordering (mmap_lock is already held).
>
> Add mmu_interval_notifier that queues work items for MMU_NOTIFY_UNMAP
> events. The worker runs under vm->lock and resets attributes for VMAs
> still marked XE_VMA_CPU_AUTORESET_ACTIVE (i.e., not yet GPU-touched).
>
> Work items are allocated from a mempool to handle atomic context in the
> callback. The notifier is deactivated when GPU touches the VMA.
>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
> Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
> ---
> drivers/gpu/drm/xe/xe_vm_madvise.c | 394 +++++++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_vm_madvise.h | 8 +
> drivers/gpu/drm/xe/xe_vm_types.h | 41 +++
> 3 files changed, 443 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
> index 52147f5eaaa0..4c0ffb100bcc 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> @@ -6,9 +6,12 @@
> #include "xe_vm_madvise.h"
>
> #include <linux/nospec.h>
> +#include <linux/mempool.h>
> +#include <linux/workqueue.h>
> #include <drm/xe_drm.h>
>
> #include "xe_bo.h"
> +#include "xe_macros.h"
> #include "xe_pat.h"
> #include "xe_pt.h"
> #include "xe_svm.h"
> @@ -500,3 +503,394 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
> xe_vm_put(vm);
> return err;
> }
> +
> +/**
> + * struct xe_madvise_work_item - Work item for unmap processing
> + * @work: work_struct
> + * @vm: VM reference
> + * @pool: Mempool for recycling
> + * @start: Start address
> + * @end: End address
> + */
> +struct xe_madvise_work_item {
> + struct work_struct work;
> + struct xe_vm *vm;
> + mempool_t *pool;
Why mempool? Seems like we could just do kmalloc with correct gfp flags.
> + u64 start;
> + u64 end;
> +};
> +
> +static void xe_vma_set_default_attributes(struct xe_vma *vma)
> +{
> + vma->attr.preferred_loc.devmem_fd = DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE;
> + vma->attr.preferred_loc.migration_policy = DRM_XE_MIGRATE_ALL_PAGES;
> + vma->attr.pat_index = vma->attr.default_pat_index;
> + vma->attr.atomic_access = DRM_XE_ATOMIC_UNDEFINED;
> +}
> +
> +/**
> + * xe_vm_madvise_process_unmap - Process munmap for all VMAs in range
> + * @vm: VM
> + * @start: Start of unmap range
> + * @end: End of unmap range
> + *
> + * Processes all VMAs overlapping the unmap range. An unmap can span multiple
> + * VMAs, so we need to loop and process each segment.
> + *
> + * Return: 0 on success, negative error otherwise
> + */
> +static int xe_vm_madvise_process_unmap(struct xe_vm *vm, u64 start, u64 end)
> +{
> + u64 addr = start;
> + int err;
> +
> + lockdep_assert_held_write(&vm->lock);
> +
> + if (xe_vm_is_closed_or_banned(vm))
> + return 0;
> +
> + while (addr < end) {
> + struct xe_vma *vma;
> + u64 seg_start, seg_end;
> + bool has_default_attr;
> +
> + vma = xe_vm_find_overlapping_vma(vm, addr, end);
> + if (!vma)
> + break;
> +
> + /* Skip GPU-touched VMAs - SVM handles them */
> + if (!xe_vma_has_cpu_autoreset_active(vma)) {
> + addr = xe_vma_end(vma);
> + continue;
> + }
> +
> + has_default_attr = xe_vma_has_default_mem_attrs(vma);
> + seg_start = max(addr, xe_vma_start(vma));
> + seg_end = min(end, xe_vma_end(vma));
> +
> + /* Expand for merging if VMA already has default attrs */
> + if (has_default_attr &&
> + xe_vma_start(vma) >= start &&
> + xe_vma_end(vma) <= end) {
> + seg_start = xe_vma_start(vma);
> + seg_end = xe_vma_end(vma);
> + xe_vm_find_cpu_addr_mirror_vma_range(vm, &seg_start, &seg_end);
> + } else if (xe_vma_start(vma) == seg_start && xe_vma_end(vma) == seg_end) {
> + xe_vma_set_default_attributes(vma);
> + addr = seg_end;
> + continue;
> + }
> +
> + if (xe_vma_start(vma) == seg_start &&
> + xe_vma_end(vma) == seg_end &&
> + has_default_attr) {
> + addr = seg_end;
> + continue;
> + }
> +
> + err = xe_vm_alloc_cpu_addr_mirror_vma(vm, seg_start, seg_end - seg_start);
> + if (err) {
> + if (err == -ENOENT) {
> + addr = seg_end;
> + continue;
> + }
> + return err;
> + }
> +
> + addr = seg_end;
> + }
> +
> + return 0;
> +}
> +
> +/**
> + * xe_madvise_work_func - Worker to process unmap
> + * @w: work_struct
> + *
> + * Processes a single unmap by taking vm->lock and calling the helper.
> + * Each unmap has its own work item, so no interval loss.
> + */
> +static void xe_madvise_work_func(struct work_struct *w)
> +{
> + struct xe_madvise_work_item *item = container_of(w, struct xe_madvise_work_item, work);
> + struct xe_vm *vm = item->vm;
> + int err;
> +
> + down_write(&vm->lock);
> + err = xe_vm_madvise_process_unmap(vm, item->start, item->end);
> + if (err)
> + drm_warn(&vm->xe->drm,
> + "madvise autoreset failed [%#llx-%#llx]: %d\n",
> + item->start, item->end, err);
> + /*
> + * Best-effort: Log failure and continue.
> + * Core correctness from CPU_AUTORESET_ACTIVE flag.
> + */
> + up_write(&vm->lock);
> + xe_vm_put(vm);
> + mempool_free(item, item->pool);
> +}
> +
> +/**
> + * xe_madvise_notifier_callback - MMU notifier callback for CPU munmap
> + * @mni: mmu_interval_notifier
> + * @range: mmu_notifier_range
> + * @cur_seq: current sequence number
> + *
> + * Queues work to reset VMA attributes. Cannot take vm->lock (circular locking),
> + * so uses workqueue. GFP_ATOMIC allocation may fail; drops event if so.
> + *
> + * Return: true (never blocks)
> + */
> +static bool xe_madvise_notifier_callback(struct mmu_interval_notifier *mni,
> + const struct mmu_notifier_range *range,
> + unsigned long cur_seq)
> +{
> + struct xe_madvise_notifier *notifier =
> + container_of(mni, struct xe_madvise_notifier, mmu_notifier);
> + struct xe_vm *vm = notifier->vm;
> + struct xe_madvise_work_item *item;
> + struct workqueue_struct *wq;
> + mempool_t *pool;
> + u64 start, end;
> +
> + if (range->event != MMU_NOTIFY_UNMAP)
> + return true;
> +
> + /*
> + * Best-effort: skip in non-blockable contexts to avoid building up work.
> + * Correctness does not rely on this notifier - CPU_AUTORESET_ACTIVE flag
> + * prevents GPU PTE zaps on CPU-only VMAs in the zap path.
> + */
> + if (!mmu_notifier_range_blockable(range))
> + return true;
> +
> + /* Consume seq (interval-notifier convention) */
> + mmu_interval_set_seq(mni, cur_seq);
> +
> + /* Best-effort: core correctness from CPU_AUTORESET_ACTIVE check in zap path */
> +
> + start = max_t(u64, range->start, notifier->vma_start);
> + end = min_t(u64, range->end, notifier->vma_end);
> +
> + if (start >= end)
> + return true;
> +
> + pool = READ_ONCE(vm->svm.madvise_work.pool);
> + wq = READ_ONCE(vm->svm.madvise_work.wq);
> + if (!pool || !wq || atomic_read(&vm->svm.madvise_work.closing))
Can you explain the use of READ_ONCE, xchg, and atomics? At first glance
it seems unnecessary or overly complicated. Let’s start with the problem
this is trying to solve and see if we can find a simpler approach.
My initial thought is a VM-wide rwsem, marked as reclaim-safe. The
notifiers would take it in read mode to check whether the VM is tearing
down, and the fini path would take it in write mode to initiate
teardown...
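
Roughly like this (sketch only — the field name madvise_sem and its placement are hypothetical, not existing Xe code):

```c
/* Sketch of the rwsem idea; madvise_sem is a made-up field name */
static bool xe_madvise_notifier_callback(struct mmu_interval_notifier *mni,
					 const struct mmu_notifier_range *range,
					 unsigned long cur_seq)
{
	...
	if (!down_read_trylock(&vm->svm.madvise_sem))
		return true;	/* teardown in progress, drop the event */
	/* wq/pool pointers are stable while the sem is held in read mode */
	queue_work(vm->svm.madvise_work.wq, &item->work);
	up_read(&vm->svm.madvise_sem);
	return true;
}

void xe_vm_madvise_fini(struct xe_vm *vm)
{
	down_write(&vm->svm.madvise_sem);
	vm->svm.madvise_work.wq = NULL;	/* no new work after this point */
	up_write(&vm->svm.madvise_sem);
	/* then remove notifiers, drain and destroy the saved wq, etc. */
}
```

With that, the closing atomic and the READ_ONCE/xchg dance could all go away.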
> + return true;
> +
> + /* GFP_ATOMIC to avoid fs_reclaim lockdep in notifier context */
> + item = mempool_alloc(pool, GFP_ATOMIC);
Again, probably just use kmalloc. Also s/GFP_ATOMIC/GFP_NOWAIT. We
really shouldn’t be using GFP_ATOMIC in Xe per the DRM docs unless a
failed memory allocation would take down the device. We likely abuse
GFP_ATOMIC in several places that we should clean up, but in this case
it’s pretty clear GFP_NOWAIT is what we want, as failure isn’t
fatal—just sub-optimal.
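
i.e., something like the following (sketch, assuming the mempool is dropped as suggested above; the worker would then use kfree() instead of mempool_free()):

```c
/* GFP_NOWAIT: no reclaim, no sleeping; failure just drops the event */
item = kmalloc(sizeof(*item), GFP_NOWAIT);
if (!item)
	return true;	/* best-effort; CPU_AUTORESET_ACTIVE keeps correctness */
```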
> + if (!item)
> + return true;
> +
> + memset(item, 0, sizeof(*item));
> + INIT_WORK(&item->work, xe_madvise_work_func);
> + item->vm = xe_vm_get(vm);
> + item->pool = pool;
> + item->start = start;
> + item->end = end;
> +
> + if (unlikely(atomic_read(&vm->svm.madvise_work.closing))) {
Same as above the atomic usage...
> + xe_vm_put(item->vm);
> + mempool_free(item, pool);
> + return true;
> + }
> +
> + queue_work(wq, &item->work);
> +
> + return true;
> +}
> +
> +static const struct mmu_interval_notifier_ops xe_madvise_notifier_ops = {
> + .invalidate = xe_madvise_notifier_callback,
> +};
> +
> +/**
> + * xe_vm_madvise_init - Initialize madvise notifier infrastructure
> + * @vm: VM
> + *
> + * Sets up workqueue and mempool for async munmap processing.
> + *
> + * Return: 0 on success, -ENOMEM on failure
> + */
> +int xe_vm_madvise_init(struct xe_vm *vm)
> +{
> + struct workqueue_struct *wq;
> + mempool_t *pool;
> +
> + /* Always initialize list and mutex - fini may be called on partial init */
> + INIT_LIST_HEAD(&vm->svm.madvise_notifiers.list);
> + mutex_init(&vm->svm.madvise_notifiers.lock);
> +
> + wq = READ_ONCE(vm->svm.madvise_work.wq);
> + pool = READ_ONCE(vm->svm.madvise_work.pool);
> +
> + /* Guard against double initialization and detect partial init */
> + if (wq || pool) {
> + XE_WARN_ON(!wq || !pool);
> + return 0;
> + }
> +
> + WRITE_ONCE(vm->svm.madvise_work.wq, NULL);
> + WRITE_ONCE(vm->svm.madvise_work.pool, NULL);
> + atomic_set(&vm->svm.madvise_work.closing, 1);
> +
> + /*
> + * WQ_UNBOUND: best-effort optimization, not critical path.
> + * No WQ_MEM_RECLAIM: worker allocates memory (VMA ops with GFP_KERNEL).
> + * Not on reclaim path - merely resets attributes after munmap.
> + */
> + vm->svm.madvise_work.wq = alloc_workqueue("xe_madvise", WQ_UNBOUND, 0);
> + if (!vm->svm.madvise_work.wq)
> + return -ENOMEM;
> +
> + /* Mempool for GFP_ATOMIC allocs in notifier callback */
> + vm->svm.madvise_work.pool =
> + mempool_create_kmalloc_pool(64,
> + sizeof(struct xe_madvise_work_item));
> + if (!vm->svm.madvise_work.pool) {
> + destroy_workqueue(vm->svm.madvise_work.wq);
> + WRITE_ONCE(vm->svm.madvise_work.wq, NULL);
> + return -ENOMEM;
> + }
> +
> + atomic_set(&vm->svm.madvise_work.closing, 0);
> +
> + return 0;
> +}
> +
> +/**
> + * xe_vm_madvise_fini - Cleanup all madvise notifiers
> + * @vm: VM
> + *
> + * Tears down notifiers and drains workqueue. Safe if init partially failed.
> + * Order: closing flag → remove notifiers (SRCU sync) → drain wq → destroy.
> + */
> +void xe_vm_madvise_fini(struct xe_vm *vm)
> +{
> + struct xe_madvise_notifier *notifier, *next;
> + struct workqueue_struct *wq;
> + mempool_t *pool;
> + LIST_HEAD(tmp);
> +
> + atomic_set(&vm->svm.madvise_work.closing, 1);
> +
> + /*
> + * Detach notifiers under lock, then remove outside lock (SRCU sync can be slow).
> + * Splice avoids holding mutex across mmu_interval_notifier_remove() SRCU sync.
> + * Removing notifiers first (before drain) prevents new invalidate callbacks.
> + */
> + mutex_lock(&vm->svm.madvise_notifiers.lock);
> + list_splice_init(&vm->svm.madvise_notifiers.list, &tmp);
> + mutex_unlock(&vm->svm.madvise_notifiers.lock);
> +
> + /* Now remove notifiers without holding lock - mmu_interval_notifier_remove() SRCU-syncs */
> + list_for_each_entry_safe(notifier, next, &tmp, list) {
> + list_del(&notifier->list);
> + mmu_interval_notifier_remove(&notifier->mmu_notifier);
> + xe_vm_put(notifier->vm);
> + kfree(notifier);
> + }
> +
> + /* Drain and destroy workqueue */
> + wq = xchg(&vm->svm.madvise_work.wq, NULL);
> + if (wq) {
> + drain_workqueue(wq);
Work items in wq call xe_madvise_work_func, which takes vm->lock in
write mode. If we try to drain here while a work item executing
xe_madvise_work_func is queued or has started, I think we could
deadlock. Lockdep should complain about this if you run a test that
triggers xe_madvise_work_func at least once. If it doesn't, then
workqueues likely have an issue in their lockdep implementation, as
drain_workqueue() should touch its lockdep map, which has tainted
vm->lock (i.e., is outside of it).
So perhaps call this function without vm->lock held, take the lock as
needed inside this function, then drop it before draining the work
queue, etc...
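
Roughly this shape (sketch; assumes the caller no longer holds vm->lock when calling fini):

```c
void xe_vm_madvise_fini(struct xe_vm *vm)
{
	struct workqueue_struct *wq = vm->svm.madvise_work.wq;

	/* 1. Stop new work first: remove notifiers (SRCU sync inside) */
	...
	/* 2. Take/drop vm->lock as needed for any list teardown */
	...
	/* 3. Drain with vm->lock NOT held; work items take it themselves */
	if (wq) {
		drain_workqueue(wq);
		destroy_workqueue(wq);
	}
}
```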
> + destroy_workqueue(wq);
> + }
> +
> + pool = xchg(&vm->svm.madvise_work.pool, NULL);
> + if (pool)
> + mempool_destroy(pool);
> +}
> +
> +/**
> + * xe_vm_madvise_register_notifier_range - Register MMU notifier for address range
> + * @vm: VM
> + * @start: Start address (page-aligned)
> + * @end: End address (page-aligned)
> + *
> + * Registers interval notifier for munmap tracking. Uses addresses (not VMA pointers)
> + * to avoid UAF after dropping vm->lock. Deduplicates by range.
> + *
> + * Return: 0 on success, negative error code on failure
> + */
> +int xe_vm_madvise_register_notifier_range(struct xe_vm *vm, u64 start, u64 end)
> +{
> + struct xe_madvise_notifier *notifier, *existing;
> + int err;
> +
I see this isn't called under the vm->lock write lock. Is there a reason
not to? I think taking it in write mode would help with the teardown
sequence: xe_vm_is_closed_or_banned() would be stable under it, and we
wouldn't enter this function if that helper returned true.
> + if (!IS_ALIGNED(start, PAGE_SIZE) || !IS_ALIGNED(end, PAGE_SIZE))
> + return -EINVAL;
> +
> + if (WARN_ON_ONCE(end <= start))
> + return -EINVAL;
> +
> + if (atomic_read(&vm->svm.madvise_work.closing))
> + return -ENOENT;
> +
> + if (!READ_ONCE(vm->svm.madvise_work.wq) ||
> + !READ_ONCE(vm->svm.madvise_work.pool))
> + return -ENOMEM;
> +
> + /* Check mm early to avoid allocation if it's missing */
> + if (!vm->svm.gpusvm.mm)
> + return -EINVAL;
> +
> + /* Dedupe: check if notifier exists for this range */
> + mutex_lock(&vm->svm.madvise_notifiers.lock);
If we had the vm->lock in write mode we could likely just drop
svm.madvise_notifiers.lock for now, but once we move to fine-grained
locking in page faults [1] we'd in fact need a dedicated lock. So let's
keep this.
[1] https://patchwork.freedesktop.org/patch/707238/?series=162167&rev=2
> + list_for_each_entry(existing, &vm->svm.madvise_notifiers.list, list) {
> + if (existing->vma_start == start && existing->vma_end == end) {
This is O(N) which typically isn't ideal. Better structure here? mtree?
Does an mtree have its own locking so svm.madvise_notifiers.lock could
just be dropped? I'd look into this.
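
For example (sketch; assumes the list is replaced by a maple tree keyed on the range start — the mtree_* entry points take the tree's internal spinlock themselves, so the dedicated mutex could likely go):

```c
/* Dedupe via mtree lookup instead of the O(N) list walk */
existing = mtree_load(&vm->svm.madvise_notifiers.mt, start);
if (existing && existing->vma_end == end)
	return 0;		/* already registered */

err = mtree_insert_range(&vm->svm.madvise_notifiers.mt,
			 start, end - 1, notifier, GFP_KERNEL);
if (err == -EEXIST)
	...		/* lost a registration race; treat as success */
```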
> + mutex_unlock(&vm->svm.madvise_notifiers.lock);
> + return 0;
> + }
> + }
> + mutex_unlock(&vm->svm.madvise_notifiers.lock);
> +
> + notifier = kzalloc(sizeof(*notifier), GFP_KERNEL);
> + if (!notifier)
> + return -ENOMEM;
> +
> + notifier->vm = xe_vm_get(vm);
> + notifier->vma_start = start;
> + notifier->vma_end = end;
> + INIT_LIST_HEAD(&notifier->list);
> +
> + err = mmu_interval_notifier_insert(&notifier->mmu_notifier,
> + vm->svm.gpusvm.mm,
> + start,
> + end - start,
> + &xe_madvise_notifier_ops);
> + if (err) {
> + xe_vm_put(notifier->vm);
> + kfree(notifier);
> + return err;
> + }
> +
> + /* Re-check closing to avoid teardown race */
> + if (unlikely(atomic_read(&vm->svm.madvise_work.closing))) {
> + mmu_interval_notifier_remove(&notifier->mmu_notifier);
> + xe_vm_put(notifier->vm);
> + kfree(notifier);
> + return -ENOENT;
> + }
> +
> + /* Add to list - check again for concurrent registration race */
> + mutex_lock(&vm->svm.madvise_notifiers.lock);
If we had the vm->lock in write mode, we couldn't get concurrent
registrations.
I likely have more comments, but I have enough concerns with the locking
and structure in this patch that I’m going to pause reviewing the series
until most of my comments are addressed. It’s hard to focus on anything
else until we get these issues worked out.
Matt
> + list_for_each_entry(existing, &vm->svm.madvise_notifiers.list, list) {
> + if (existing->vma_start == start && existing->vma_end == end) {
> + mutex_unlock(&vm->svm.madvise_notifiers.lock);
> + mmu_interval_notifier_remove(&notifier->mmu_notifier);
> + xe_vm_put(notifier->vm);
> + kfree(notifier);
> + return 0;
> + }
> + }
> + list_add(&notifier->list, &vm->svm.madvise_notifiers.list);
> + mutex_unlock(&vm->svm.madvise_notifiers.lock);
> +
> + return 0;
> +}
> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.h b/drivers/gpu/drm/xe/xe_vm_madvise.h
> index b0e1fc445f23..ba9cd7912113 100644
> --- a/drivers/gpu/drm/xe/xe_vm_madvise.h
> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.h
> @@ -6,10 +6,18 @@
> #ifndef _XE_VM_MADVISE_H_
> #define _XE_VM_MADVISE_H_
>
> +#include <linux/types.h>
> +
> struct drm_device;
> struct drm_file;
> +struct xe_vm;
> +struct xe_vma;
>
> int xe_vm_madvise_ioctl(struct drm_device *dev, void *data,
> struct drm_file *file);
>
> +int xe_vm_madvise_init(struct xe_vm *vm);
> +void xe_vm_madvise_fini(struct xe_vm *vm);
> +int xe_vm_madvise_register_notifier_range(struct xe_vm *vm, u64 start, u64 end);
> +
> #endif
> diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
> index 29ff63503d4c..eb978995000c 100644
> --- a/drivers/gpu/drm/xe/xe_vm_types.h
> +++ b/drivers/gpu/drm/xe/xe_vm_types.h
> @@ -12,6 +12,7 @@
>
> #include <linux/dma-resv.h>
> #include <linux/kref.h>
> +#include <linux/mempool.h>
> #include <linux/mmu_notifier.h>
> #include <linux/scatterlist.h>
>
> @@ -29,6 +30,26 @@ struct xe_user_fence;
> struct xe_vm;
> struct xe_vm_pgtable_update_op;
>
> +/**
> + * struct xe_madvise_notifier - CPU madvise notifier for memory attribute reset
> + *
> + * Tracks CPU munmap operations on SVM CPU address mirror VMAs.
> + * When userspace unmaps CPU memory, this notifier processes attribute reset
> + * via work queue to avoid circular locking (can't take vm->lock in callback).
> + */
> +struct xe_madvise_notifier {
> + /** @mmu_notifier: MMU interval notifier */
> + struct mmu_interval_notifier mmu_notifier;
> + /** @vm: VM this notifier belongs to (holds reference via xe_vm_get) */
> + struct xe_vm *vm;
> + /** @vma_start: Start address of VMA being tracked */
> + u64 vma_start;
> + /** @vma_end: End address of VMA being tracked */
> + u64 vma_end;
> + /** @list: Link in vm->svm.madvise_notifiers.list */
> + struct list_head list;
> +};
> +
> #if IS_ENABLED(CONFIG_DRM_XE_DEBUG)
> #define TEST_VM_OPS_ERROR
> #define FORCE_OP_ERROR BIT(31)
> @@ -212,6 +233,26 @@ struct xe_vm {
> struct xe_pagemap *pagemaps[XE_MAX_TILES_PER_DEVICE];
> /** @svm.peer: Used for pagemap connectivity computations. */
> struct drm_pagemap_peer peer;
> +
> + /**
> + * @svm.madvise_notifiers: Active CPU madvise notifiers
> + */
> + struct {
> + /** @svm.madvise_notifiers.list: List of active notifiers */
> + struct list_head list;
> + /** @svm.madvise_notifiers.lock: Protects notifiers list */
> + struct mutex lock;
> + } madvise_notifiers;
> +
> + /** @svm.madvise_work: Workqueue for async munmap processing */
> + struct {
> + /** @svm.madvise_work.wq: Workqueue */
> + struct workqueue_struct *wq;
> + /** @svm.madvise_work.pool: Mempool for work items */
> + mempool_t *pool;
> + /** @svm.madvise_work.closing: Teardown flag */
> + atomic_t closing;
> + } madvise_work;
> } svm;
>
> struct xe_device *xe;
> --
> 2.43.0
>
* Re: [RFC 4/7] drm/xe/vm: Add madvise autoreset interval notifier worker infrastructure
2026-02-25 23:34 ` Matthew Brost
@ 2026-03-09 7:07 ` Yadav, Arvind
2026-03-09 9:32 ` Thomas Hellström
0 siblings, 1 reply; 19+ messages in thread
From: Yadav, Arvind @ 2026-03-09 7:07 UTC (permalink / raw)
To: Matthew Brost; +Cc: intel-xe, himal.prasad.ghimiray, thomas.hellstrom
On 26-02-2026 05:04, Matthew Brost wrote:
> On Thu, Feb 19, 2026 at 02:43:09PM +0530, Arvind Yadav wrote:
>> MADVISE_AUTORESET needs to reset VMA attributes when userspace unmaps
>> CPU-only ranges, but the MMU invalidate callback cannot take vm->lock
>> due to lock ordering (mmap_lock is already held).
>>
>> Add mmu_interval_notifier that queues work items for MMU_NOTIFY_UNMAP
>> events. The worker runs under vm->lock and resets attributes for VMAs
>> still marked XE_VMA_CPU_AUTORESET_ACTIVE (i.e., not yet GPU-touched).
>>
>> Work items are allocated from a mempool to handle atomic context in the
>> callback. The notifier is deactivated when GPU touches the VMA.
>>
>> Cc: Matthew Brost<matthew.brost@intel.com>
>> Cc: Thomas Hellström<thomas.hellstrom@linux.intel.com>
>> Cc: Himal Prasad Ghimiray<himal.prasad.ghimiray@intel.com>
>> Signed-off-by: Arvind Yadav<arvind.yadav@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_vm_madvise.c | 394 +++++++++++++++++++++++++++++
>> drivers/gpu/drm/xe/xe_vm_madvise.h | 8 +
>> drivers/gpu/drm/xe/xe_vm_types.h | 41 +++
>> 3 files changed, 443 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> index 52147f5eaaa0..4c0ffb100bcc 100644
>> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
>> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
>> @@ -6,9 +6,12 @@
>> #include "xe_vm_madvise.h"
>>
>> #include <linux/nospec.h>
>> +#include <linux/mempool.h>
>> +#include <linux/workqueue.h>
>> #include <drm/xe_drm.h>
>>
>> #include "xe_bo.h"
>> +#include "xe_macros.h"
>> #include "xe_pat.h"
>> #include "xe_pt.h"
>> #include "xe_svm.h"
>> @@ -500,3 +503,394 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
>> xe_vm_put(vm);
>> return err;
>> }
>> +
>> +/**
>> + * struct xe_madvise_work_item - Work item for unmap processing
>> + * @work: work_struct
>> + * @vm: VM reference
>> + * @pool: Mempool for recycling
>> + * @start: Start address
>> + * @end: End address
>> + */
>> +struct xe_madvise_work_item {
>> + struct work_struct work;
>> + struct xe_vm *vm;
>> + mempool_t *pool;
> Why mempool? Seems like we could just do kmalloc with correct gfp flags.
I tried kmalloc first, but ran into two issues:
- GFP_KERNEL fails because MMU notifier callbacks must not block, and
  GFP_KERNEL can sleep waiting for memory reclaim.
- GFP_ATOMIC triggers a circular lockdep warning: the MMU notifier holds
  mmu_notifier_invalidate_range_start, and GFP_ATOMIC internally tries to
  acquire fs_reclaim, which already depends on the MMU notifier lock.

Agreed, the mempool looks unnecessary here. I re-tested this with
kmalloc(..., GFP_NOWAIT), which avoids both blocking and the
reclaim-related lockdep issue I saw with the earlier approach. I will
switch to that and drop the pool in the next version.
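Roughly the shape I have in mind for the callback allocation (a sketch
only; the surrounding rework may still move names around):

```c
	/* GFP_NOWAIT: never blocks and never enters fs_reclaim, so no
	 * lockdep cycle in notifier context. A failed allocation just
	 * drops the event - non-fatal, since the CPU_AUTORESET_ACTIVE
	 * flag keeps core correctness.
	 */
	item = kzalloc(sizeof(*item), GFP_NOWAIT);
	if (!item)
		return true;

	INIT_WORK(&item->work, xe_madvise_work_func);
	item->vm = xe_vm_get(vm);
	item->start = start;
	item->end = end;
	queue_work(wq, &item->work);
```

kzalloc() also drops the separate memset() the mempool version needed.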
>
>> + u64 start;
>> + u64 end;
>> +};
>> +
>> +static void xe_vma_set_default_attributes(struct xe_vma *vma)
>> +{
>> + vma->attr.preferred_loc.devmem_fd = DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE;
>> + vma->attr.preferred_loc.migration_policy = DRM_XE_MIGRATE_ALL_PAGES;
>> + vma->attr.pat_index = vma->attr.default_pat_index;
>> + vma->attr.atomic_access = DRM_XE_ATOMIC_UNDEFINED;
>> +}
>> +
>> +/**
>> + * xe_vm_madvise_process_unmap - Process munmap for all VMAs in range
>> + * @vm: VM
>> + * @start: Start of unmap range
>> + * @end: End of unmap range
>> + *
>> + * Processes all VMAs overlapping the unmap range. An unmap can span multiple
>> + * VMAs, so we need to loop and process each segment.
>> + *
>> + * Return: 0 on success, negative error otherwise
>> + */
>> +static int xe_vm_madvise_process_unmap(struct xe_vm *vm, u64 start, u64 end)
>> +{
>> + u64 addr = start;
>> + int err;
>> +
>> + lockdep_assert_held_write(&vm->lock);
>> +
>> + if (xe_vm_is_closed_or_banned(vm))
>> + return 0;
>> +
>> + while (addr < end) {
>> + struct xe_vma *vma;
>> + u64 seg_start, seg_end;
>> + bool has_default_attr;
>> +
>> + vma = xe_vm_find_overlapping_vma(vm, addr, end);
>> + if (!vma)
>> + break;
>> +
>> + /* Skip GPU-touched VMAs - SVM handles them */
>> + if (!xe_vma_has_cpu_autoreset_active(vma)) {
>> + addr = xe_vma_end(vma);
>> + continue;
>> + }
>> +
>> + has_default_attr = xe_vma_has_default_mem_attrs(vma);
>> + seg_start = max(addr, xe_vma_start(vma));
>> + seg_end = min(end, xe_vma_end(vma));
>> +
>> + /* Expand for merging if VMA already has default attrs */
>> + if (has_default_attr &&
>> + xe_vma_start(vma) >= start &&
>> + xe_vma_end(vma) <= end) {
>> + seg_start = xe_vma_start(vma);
>> + seg_end = xe_vma_end(vma);
>> + xe_vm_find_cpu_addr_mirror_vma_range(vm, &seg_start, &seg_end);
>> + } else if (xe_vma_start(vma) == seg_start && xe_vma_end(vma) == seg_end) {
>> + xe_vma_set_default_attributes(vma);
>> + addr = seg_end;
>> + continue;
>> + }
>> +
>> + if (xe_vma_start(vma) == seg_start &&
>> + xe_vma_end(vma) == seg_end &&
>> + has_default_attr) {
>> + addr = seg_end;
>> + continue;
>> + }
>> +
>> + err = xe_vm_alloc_cpu_addr_mirror_vma(vm, seg_start, seg_end - seg_start);
>> + if (err) {
>> + if (err == -ENOENT) {
>> + addr = seg_end;
>> + continue;
>> + }
>> + return err;
>> + }
>> +
>> + addr = seg_end;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +/**
>> + * xe_madvise_work_func - Worker to process unmap
>> + * @w: work_struct
>> + *
>> + * Processes a single unmap by taking vm->lock and calling the helper.
>> + * Each unmap has its own work item, so no interval loss.
>> + */
>> +static void xe_madvise_work_func(struct work_struct *w)
>> +{
>> + struct xe_madvise_work_item *item = container_of(w, struct xe_madvise_work_item, work);
>> + struct xe_vm *vm = item->vm;
>> + int err;
>> +
>> + down_write(&vm->lock);
>> + err = xe_vm_madvise_process_unmap(vm, item->start, item->end);
>> + if (err)
>> + drm_warn(&vm->xe->drm,
>> + "madvise autoreset failed [%#llx-%#llx]: %d\n",
>> + item->start, item->end, err);
>> + /*
>> + * Best-effort: Log failure and continue.
>> + * Core correctness from CPU_AUTORESET_ACTIVE flag.
>> + */
>> + up_write(&vm->lock);
>> + xe_vm_put(vm);
>> + mempool_free(item, item->pool);
>> +}
>> +
>> +/**
>> + * xe_madvise_notifier_callback - MMU notifier callback for CPU munmap
>> + * @mni: mmu_interval_notifier
>> + * @range: mmu_notifier_range
>> + * @cur_seq: current sequence number
>> + *
>> + * Queues work to reset VMA attributes. Cannot take vm->lock (circular locking),
>> + * so uses workqueue. GFP_ATOMIC allocation may fail; drops event if so.
>> + *
>> + * Return: true (never blocks)
>> + */
>> +static bool xe_madvise_notifier_callback(struct mmu_interval_notifier *mni,
>> + const struct mmu_notifier_range *range,
>> + unsigned long cur_seq)
>> +{
>> + struct xe_madvise_notifier *notifier =
>> + container_of(mni, struct xe_madvise_notifier, mmu_notifier);
>> + struct xe_vm *vm = notifier->vm;
>> + struct xe_madvise_work_item *item;
>> + struct workqueue_struct *wq;
>> + mempool_t *pool;
>> + u64 start, end;
>> +
>> + if (range->event != MMU_NOTIFY_UNMAP)
>> + return true;
>> +
>> + /*
>> + * Best-effort: skip in non-blockable contexts to avoid building up work.
>> + * Correctness does not rely on this notifier - CPU_AUTORESET_ACTIVE flag
>> + * prevents GPU PTE zaps on CPU-only VMAs in the zap path.
>> + */
>> + if (!mmu_notifier_range_blockable(range))
>> + return true;
>> +
>> + /* Consume seq (interval-notifier convention) */
>> + mmu_interval_set_seq(mni, cur_seq);
>> +
>> + /* Best-effort: core correctness from CPU_AUTORESET_ACTIVE check in zap path */
>> +
>> + start = max_t(u64, range->start, notifier->vma_start);
>> + end = min_t(u64, range->end, notifier->vma_end);
>> +
>> + if (start >= end)
>> + return true;
>> +
>> + pool = READ_ONCE(vm->svm.madvise_work.pool);
>> + wq = READ_ONCE(vm->svm.madvise_work.wq);
>> + if (!pool || !wq || atomic_read(&vm->svm.madvise_work.closing))
> Can you explain the use of READ_ONCE, xchg, and atomics? At first glance
> it seems unnecessary or overly complicated. Let’s start with the problem
> this is trying to solve and see if we can find a simpler approach.
>
> My initial thought is a VM-wide rwsem, marked as reclaim-safe. The
> notifiers would take it in read mode to check whether the VM is tearing
> down, and the fini path would take it in write mode to initiate
> teardown...
Agreed. This got more complicated than it needs to be. I have reworked it
to use a VM-wide rw_semaphore for teardown serialization, so the atomic_t,
READ_ONCE(), and xchg() go away.
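As a sketch of what the rwsem version looks like (field names here are
placeholders, not final):

```c
	/* Notifier callback: trylock in read mode so the callback never
	 * sleeps, and never queues work once teardown has started.
	 */
	if (!down_read_trylock(&vm->svm.madvise_teardown_sem))
		return true;
	if (!vm->svm.madvise_closing)
		queue_work(vm->svm.madvise_work.wq, &item->work);
	up_read(&vm->svm.madvise_teardown_sem);

	/* Fini path: write mode flips the flag; notifier removal and the
	 * workqueue drain then happen with the semaphore dropped.
	 */
	down_write(&vm->svm.madvise_teardown_sem);
	vm->svm.madvise_closing = true;
	up_write(&vm->svm.madvise_teardown_sem);
```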
>
>> + return true;
>> +
>> + /* GFP_ATOMIC to avoid fs_reclaim lockdep in notifier context */
>> + item = mempool_alloc(pool, GFP_ATOMIC);
> Again, probably just use kmalloc. Also s/GFP_ATOMIC/GFP_NOWAIT. We
> really shouldn’t be using GFP_ATOMIC in Xe per the DRM docs unless a
> failed memory allocation would take down the device. We likely abuse
> GFP_ATOMIC in several places that we should clean up, but in this case
> it’s pretty clear GFP_NOWAIT is what we want, as failure isn’t
> fatal—just sub-optimal.
Agreed. This should be GFP_NOWAIT, not GFP_ATOMIC. Allocation
failure here is non-fatal, so GFP_NOWAIT is the right fit. I will
switch to kmalloc(..., GFP_NOWAIT) and drop the mempool.
>
>> + if (!item)
>> + return true;
>> +
>> + memset(item, 0, sizeof(*item));
>> + INIT_WORK(&item->work, xe_madvise_work_func);
>> + item->vm = xe_vm_get(vm);
>> + item->pool = pool;
>> + item->start = start;
>> + item->end = end;
>> +
>> + if (unlikely(atomic_read(&vm->svm.madvise_work.closing))) {
> Same as above the atomic usage...
Noted, will remove.
>
>> + xe_vm_put(item->vm);
>> + mempool_free(item, pool);
>> + return true;
>> + }
>> +
>> + queue_work(wq, &item->work);
>> +
>> + return true;
>> +}
>> +
>> +static const struct mmu_interval_notifier_ops xe_madvise_notifier_ops = {
>> + .invalidate = xe_madvise_notifier_callback,
>> +};
>> +
>> +/**
>> + * xe_vm_madvise_init - Initialize madvise notifier infrastructure
>> + * @vm: VM
>> + *
>> + * Sets up workqueue and mempool for async munmap processing.
>> + *
>> + * Return: 0 on success, -ENOMEM on failure
>> + */
>> +int xe_vm_madvise_init(struct xe_vm *vm)
>> +{
>> + struct workqueue_struct *wq;
>> + mempool_t *pool;
>> +
>> + /* Always initialize list and mutex - fini may be called on partial init */
>> + INIT_LIST_HEAD(&vm->svm.madvise_notifiers.list);
>> + mutex_init(&vm->svm.madvise_notifiers.lock);
>> +
>> + wq = READ_ONCE(vm->svm.madvise_work.wq);
>> + pool = READ_ONCE(vm->svm.madvise_work.pool);
>> +
>> + /* Guard against double initialization and detect partial init */
>> + if (wq || pool) {
>> + XE_WARN_ON(!wq || !pool);
>> + return 0;
>> + }
>> +
>> + WRITE_ONCE(vm->svm.madvise_work.wq, NULL);
>> + WRITE_ONCE(vm->svm.madvise_work.pool, NULL);
>> + atomic_set(&vm->svm.madvise_work.closing, 1);
>> +
>> + /*
>> + * WQ_UNBOUND: best-effort optimization, not critical path.
>> + * No WQ_MEM_RECLAIM: worker allocates memory (VMA ops with GFP_KERNEL).
>> + * Not on reclaim path - merely resets attributes after munmap.
>> + */
>> + vm->svm.madvise_work.wq = alloc_workqueue("xe_madvise", WQ_UNBOUND, 0);
>> + if (!vm->svm.madvise_work.wq)
>> + return -ENOMEM;
>> +
>> + /* Mempool for GFP_ATOMIC allocs in notifier callback */
>> + vm->svm.madvise_work.pool =
>> + mempool_create_kmalloc_pool(64,
>> + sizeof(struct xe_madvise_work_item));
>> + if (!vm->svm.madvise_work.pool) {
>> + destroy_workqueue(vm->svm.madvise_work.wq);
>> + WRITE_ONCE(vm->svm.madvise_work.wq, NULL);
>> + return -ENOMEM;
>> + }
>> +
>> + atomic_set(&vm->svm.madvise_work.closing, 0);
>> +
>> + return 0;
>> +}
>> +
>> +/**
>> + * xe_vm_madvise_fini - Cleanup all madvise notifiers
>> + * @vm: VM
>> + *
>> + * Tears down notifiers and drains workqueue. Safe if init partially failed.
>> + * Order: closing flag → remove notifiers (SRCU sync) → drain wq → destroy.
>> + */
>> +void xe_vm_madvise_fini(struct xe_vm *vm)
>> +{
>> + struct xe_madvise_notifier *notifier, *next;
>> + struct workqueue_struct *wq;
>> + mempool_t *pool;
>> + LIST_HEAD(tmp);
>> +
>> + atomic_set(&vm->svm.madvise_work.closing, 1);
>> +
>> + /*
>> + * Detach notifiers under lock, then remove outside lock (SRCU sync can be slow).
>> + * Splice avoids holding mutex across mmu_interval_notifier_remove() SRCU sync.
>> + * Removing notifiers first (before drain) prevents new invalidate callbacks.
>> + */
>> + mutex_lock(&vm->svm.madvise_notifiers.lock);
>> + list_splice_init(&vm->svm.madvise_notifiers.list, &tmp);
>> + mutex_unlock(&vm->svm.madvise_notifiers.lock);
>> +
>> + /* Now remove notifiers without holding lock - mmu_interval_notifier_remove() SRCU-syncs */
>> + list_for_each_entry_safe(notifier, next, &tmp, list) {
>> + list_del(&notifier->list);
>> + mmu_interval_notifier_remove(&notifier->mmu_notifier);
>> + xe_vm_put(notifier->vm);
>> + kfree(notifier);
>> + }
>> +
>> + /* Drain and destroy workqueue */
>> + wq = xchg(&vm->svm.madvise_work.wq, NULL);
>> + if (wq) {
>> + drain_workqueue(wq);
> Work items in wq call xe_madvise_work_func, which takes vm->lock in
> write mode. If we try to drain here after the work item executing
> xe_madvise_work_func has started or is queued, I think we could
> deadlock. Lockdep should complain about this if you run a test that
> triggers xe_madvise_work_func at least once — or at least it should. If
> it doesn’t, then workqueues likely have an issue in their lockdep
> implementation as 'drain_workqueue' should touch its lockdep map which
> has tainted vm->lock (i.e., is outside of it).
>
> So perhaps call this function without vm->lock and take as need in the
> this function, then drop it drain the work queue, etc...
Good catch. Draining the workqueue while holding vm->lock can deadlock
against a worker that takes vm->lock. I fixed that by dropping
vm->lock before calling xe_vm_madvise_fini(). In the reworked teardown
path, drain_workqueue() runs with neither vm->lock nor the teardown
semaphore held.
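So the reworked fini path roughly looks like this (sketch, assuming the
rwsem/flag names from the rework; the notifier-removal helper is a
placeholder):

```c
void xe_vm_madvise_fini(struct xe_vm *vm)
{
	/* Caller must not hold vm->lock: workers take it in write mode. */
	lockdep_assert_not_held(&vm->lock);

	/* 1) Mark teardown under the rwsem so notifiers stop queueing. */
	down_write(&vm->svm.madvise_teardown_sem);
	vm->svm.madvise_closing = true;
	up_write(&vm->svm.madvise_teardown_sem);

	/* 2) Remove notifiers (SRCU sync) so nothing new is queued,
	 * 3) drain with no locks held, 4) destroy.
	 */
	xe_vm_madvise_remove_notifiers(vm);	/* helper from the rework */
	drain_workqueue(vm->svm.madvise_work.wq);
	destroy_workqueue(vm->svm.madvise_work.wq);
}
```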
>
>> + destroy_workqueue(wq);
>> + }
>> +
>> + pool = xchg(&vm->svm.madvise_work.pool, NULL);
>> + if (pool)
>> + mempool_destroy(pool);
>> +}
>> +
>> +/**
>> + * xe_vm_madvise_register_notifier_range - Register MMU notifier for address range
>> + * @vm: VM
>> + * @start: Start address (page-aligned)
>> + * @end: End address (page-aligned)
>> + *
>> + * Registers interval notifier for munmap tracking. Uses addresses (not VMA pointers)
>> + * to avoid UAF after dropping vm->lock. Deduplicates by range.
>> + *
>> + * Return: 0 on success, negative error code on failure
>> + */
>> +int xe_vm_madvise_register_notifier_range(struct xe_vm *vm, u64 start, u64 end)
>> +{
>> + struct xe_madvise_notifier *notifier, *existing;
>> + int err;
>> +
> I see this isn’t called under the vm->lock write lock. Is there a reason
> not to? I think taking it under the write lock would help with the
> teardown sequence, since you wouldn’t be able to get here if
> xe_vm_is_closed_or_banned were stable—and we wouldn’t enter this
> function if that helper returned true.
I can make the closed/banned check stable at the call site under
vm->lock, but I don't think I can hold it across
mmu_interval_notifier_insert() itself, since that may take mmap_lock
internally. I'll restructure this so the state check happens under
vm->lock, while the actual insert remains outside that lock.
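i.e., something along these lines (sketch; the -ENOENT return mirrors
the existing teardown error):

```c
	/* Stabilize the closed/banned check under vm->lock... */
	down_write(&vm->lock);
	if (xe_vm_is_closed_or_banned(vm)) {
		up_write(&vm->lock);
		return -ENOENT;
	}
	up_write(&vm->lock);

	/* ...but insert outside vm->lock, since
	 * mmu_interval_notifier_insert() may take mmap_lock internally
	 * and mmap_lock -> vm->lock is the established ordering.
	 */
	err = mmu_interval_notifier_insert(&notifier->mmu_notifier,
					   vm->svm.gpusvm.mm,
					   start, end - start,
					   &xe_madvise_notifier_ops);
```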
>
>> + if (!IS_ALIGNED(start, PAGE_SIZE) || !IS_ALIGNED(end, PAGE_SIZE))
>> + return -EINVAL;
>> +
>> + if (WARN_ON_ONCE(end <= start))
>> + return -EINVAL;
>> +
>> + if (atomic_read(&vm->svm.madvise_work.closing))
>> + return -ENOENT;
>> +
>> + if (!READ_ONCE(vm->svm.madvise_work.wq) ||
>> + !READ_ONCE(vm->svm.madvise_work.pool))
>> + return -ENOMEM;
>> +
>> + /* Check mm early to avoid allocation if it's missing */
>> + if (!vm->svm.gpusvm.mm)
>> + return -EINVAL;
>> +
>> + /* Dedupe: check if notifier exists for this range */
>> + mutex_lock(&vm->svm.madvise_notifiers.lock);
> If we had the vm->lock in write mode we could likely just drop
> svm.madvise_notifiers.lock for now, but once we move to fine grained
> locking in page faults [1] we'd in fact need a dedicated lock. So let's
> keep this.
>
> [1]https://patchwork.freedesktop.org/patch/707238/?series=162167&rev=2
Agreed, we should keep a dedicated lock here.
I do not think vm->lock can cover mmu_interval_notifier_insert()
itself, since that path may take mmap_lock internally and would risk
inverting the existing mmap_lock -> vm->lock ordering.
So I will keep svm.madvise_notifiers.lock in place. That also lines up
better with the planned fine-grained page-fault locking work.
>
>> + list_for_each_entry(existing, &vm->svm.madvise_notifiers.list, list) {
>> + if (existing->vma_start == start && existing->vma_end == end) {
> This is O(N) which typically isn't ideal. Better structure here? mtree?
> Does an mtree have its own locking so svm.madvise_notifiers.lock could
> just be dropped? I'd look into this.
Agreed. I switched this over to a maple tree, so the exact-range lookup
is no longer O(N). That also lets me drop the list walk in the duplicate
check.
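Sketch of the mtree variant (the .mt field name is a placeholder; the
dedicated mutex is kept for the insert/insert race discussed above):

```c
	/* Exact-range dedupe: index the notifier by its start address
	 * and verify the end on lookup. mtree_insert_range() takes an
	 * inclusive last index, hence end - 1; it returns -EEXIST on
	 * overlap, which also catches concurrent registration.
	 */
	mutex_lock(&vm->svm.madvise_notifiers.lock);
	existing = mtree_load(&vm->svm.madvise_notifiers.mt, start);
	if (existing && existing->vma_end == end) {
		mutex_unlock(&vm->svm.madvise_notifiers.lock);
		return 0;
	}
	err = mtree_insert_range(&vm->svm.madvise_notifiers.mt,
				 start, end - 1, notifier, GFP_KERNEL);
	mutex_unlock(&vm->svm.madvise_notifiers.lock);
```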
>
>> + mutex_unlock(&vm->svm.madvise_notifiers.lock);
>> + return 0;
>> + }
>> + }
>> + mutex_unlock(&vm->svm.madvise_notifiers.lock);
>> +
>> + notifier = kzalloc(sizeof(*notifier), GFP_KERNEL);
>> + if (!notifier)
>> + return -ENOMEM;
>> +
>> + notifier->vm = xe_vm_get(vm);
>> + notifier->vma_start = start;
>> + notifier->vma_end = end;
>> + INIT_LIST_HEAD(¬ifier->list);
>> +
>> + err = mmu_interval_notifier_insert(&notifier->mmu_notifier,
>> + vm->svm.gpusvm.mm,
>> + start,
>> + end - start,
>> + &xe_madvise_notifier_ops);
>> + if (err) {
>> + xe_vm_put(notifier->vm);
>> + kfree(notifier);
>> + return err;
>> + }
>> +
>> + /* Re-check closing to avoid teardown race */
>> + if (unlikely(atomic_read(&vm->svm.madvise_work.closing))) {
>> + mmu_interval_notifier_remove(&notifier->mmu_notifier);
>> + xe_vm_put(notifier->vm);
>> + kfree(notifier);
>> + return -ENOENT;
>> + }
>> +
>> + /* Add to list - check again for concurrent registration race */
>> + mutex_lock(&vm->svm.madvise_notifiers.lock);
> If we had the vm->lock in write mode, we couldn't get concurrent
> registrations.
>
> I likely have more comments, but I have enough concerns with the locking
> and structure in this patch that I’m going to pause reviewing the series
> until most of my comments are addressed. It’s hard to focus on anything
> else until we get these issues worked out.
I think the main issue is exactly the locking story around notifier
insert/remove. We cannot hold vm->lock across
mmu_interval_notifier_insert() because that may take mmap_lock
internally and invert the existing ordering.
I have reworked this to simplify the teardown/registration side: drop
the atomic/READ_ONCE/xchg handling, use a single teardown rwsem, and
replace the list-based dedupe with a maple tree.
I will send a cleaned-up version with the locking documented more
clearly. Sorry for the churn here.
Thanks,
Arvind
>
> Matt
>
>> + list_for_each_entry(existing, &vm->svm.madvise_notifiers.list, list) {
>> + if (existing->vma_start == start && existing->vma_end == end) {
>> + mutex_unlock(&vm->svm.madvise_notifiers.lock);
>> + mmu_interval_notifier_remove(&notifier->mmu_notifier);
>> + xe_vm_put(notifier->vm);
>> + kfree(notifier);
>> + return 0;
>> + }
>> + }
>> + list_add(&notifier->list, &vm->svm.madvise_notifiers.list);
>> + mutex_unlock(&vm->svm.madvise_notifiers.lock);
>> +
>> + return 0;
>> +}
>> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.h b/drivers/gpu/drm/xe/xe_vm_madvise.h
>> index b0e1fc445f23..ba9cd7912113 100644
>> --- a/drivers/gpu/drm/xe/xe_vm_madvise.h
>> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.h
>> @@ -6,10 +6,18 @@
>> #ifndef _XE_VM_MADVISE_H_
>> #define _XE_VM_MADVISE_H_
>>
>> +#include <linux/types.h>
>> +
>> struct drm_device;
>> struct drm_file;
>> +struct xe_vm;
>> +struct xe_vma;
>>
>> int xe_vm_madvise_ioctl(struct drm_device *dev, void *data,
>> struct drm_file *file);
>>
>> +int xe_vm_madvise_init(struct xe_vm *vm);
>> +void xe_vm_madvise_fini(struct xe_vm *vm);
>> +int xe_vm_madvise_register_notifier_range(struct xe_vm *vm, u64 start, u64 end);
>> +
>> #endif
>> diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
>> index 29ff63503d4c..eb978995000c 100644
>> --- a/drivers/gpu/drm/xe/xe_vm_types.h
>> +++ b/drivers/gpu/drm/xe/xe_vm_types.h
>> @@ -12,6 +12,7 @@
>>
>> #include <linux/dma-resv.h>
>> #include <linux/kref.h>
>> +#include <linux/mempool.h>
>> #include <linux/mmu_notifier.h>
>> #include <linux/scatterlist.h>
>>
>> @@ -29,6 +30,26 @@ struct xe_user_fence;
>> struct xe_vm;
>> struct xe_vm_pgtable_update_op;
>>
>> +/**
>> + * struct xe_madvise_notifier - CPU madvise notifier for memory attribute reset
>> + *
>> + * Tracks CPU munmap operations on SVM CPU address mirror VMAs.
>> + * When userspace unmaps CPU memory, this notifier processes attribute reset
>> + * via work queue to avoid circular locking (can't take vm->lock in callback).
>> + */
>> +struct xe_madvise_notifier {
>> + /** @mmu_notifier: MMU interval notifier */
>> + struct mmu_interval_notifier mmu_notifier;
>> + /** @vm: VM this notifier belongs to (holds reference via xe_vm_get) */
>> + struct xe_vm *vm;
>> + /** @vma_start: Start address of VMA being tracked */
>> + u64 vma_start;
>> + /** @vma_end: End address of VMA being tracked */
>> + u64 vma_end;
>> + /** @list: Link in vm->svm.madvise_notifiers.list */
>> + struct list_head list;
>> +};
>> +
>> #if IS_ENABLED(CONFIG_DRM_XE_DEBUG)
>> #define TEST_VM_OPS_ERROR
>> #define FORCE_OP_ERROR BIT(31)
>> @@ -212,6 +233,26 @@ struct xe_vm {
>> struct xe_pagemap *pagemaps[XE_MAX_TILES_PER_DEVICE];
>> /** @svm.peer: Used for pagemap connectivity computations. */
>> struct drm_pagemap_peer peer;
>> +
>> + /**
>> + * @svm.madvise_notifiers: Active CPU madvise notifiers
>> + */
>> + struct {
>> + /** @svm.madvise_notifiers.list: List of active notifiers */
>> + struct list_head list;
>> + /** @svm.madvise_notifiers.lock: Protects notifiers list */
>> + struct mutex lock;
>> + } madvise_notifiers;
>> +
>> + /** @svm.madvise_work: Workqueue for async munmap processing */
>> + struct {
>> + /** @svm.madvise_work.wq: Workqueue */
>> + struct workqueue_struct *wq;
>> + /** @svm.madvise_work.pool: Mempool for work items */
>> + mempool_t *pool;
>> + /** @svm.madvise_work.closing: Teardown flag */
>> + atomic_t closing;
>> + } madvise_work;
>> } svm;
>>
>> struct xe_device *xe;
>> --
>> 2.43.0
>>
* Re: [RFC 4/7] drm/xe/vm: Add madvise autoreset interval notifier worker infrastructure
2026-03-09 7:07 ` Yadav, Arvind
@ 2026-03-09 9:32 ` Thomas Hellström
2026-03-11 6:34 ` Yadav, Arvind
0 siblings, 1 reply; 19+ messages in thread
From: Thomas Hellström @ 2026-03-09 9:32 UTC (permalink / raw)
To: Yadav, Arvind, Matthew Brost; +Cc: intel-xe, himal.prasad.ghimiray
On Mon, 2026-03-09 at 12:37 +0530, Yadav, Arvind wrote:
>
> On 26-02-2026 05:04, Matthew Brost wrote:
> > On Thu, Feb 19, 2026 at 02:43:09PM +0530, Arvind Yadav wrote:
> > > MADVISE_AUTORESET needs to reset VMA attributes when userspace
> > > unmaps
> > > CPU-only ranges, but the MMU invalidate callback cannot take vm-
> > > >lock
> > > due to lock ordering (mmap_lock is already held).
> > >
> > > Add mmu_interval_notifier that queues work items for
> > > MMU_NOTIFY_UNMAP
> > > events. The worker runs under vm->lock and resets attributes for
> > > VMAs
> > > still marked XE_VMA_CPU_AUTORESET_ACTIVE (i.e., not yet GPU-
> > > touched).
> > >
> > > Work items are allocated from a mempool to handle atomic context
> > > in the
> > > callback. The notifier is deactivated when GPU touches the VMA.
> > >
> > > Cc: Matthew Brost<matthew.brost@intel.com>
> > > Cc: Thomas Hellström<thomas.hellstrom@linux.intel.com>
> > > Cc: Himal Prasad Ghimiray<himal.prasad.ghimiray@intel.com>
> > > Signed-off-by: Arvind Yadav<arvind.yadav@intel.com>
> > > ---
> > > drivers/gpu/drm/xe/xe_vm_madvise.c | 394
> > > +++++++++++++++++++++++++++++
> > > drivers/gpu/drm/xe/xe_vm_madvise.h | 8 +
> > > drivers/gpu/drm/xe/xe_vm_types.h | 41 +++
> > > 3 files changed, 443 insertions(+)
> > >
> > > diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c
> > > b/drivers/gpu/drm/xe/xe_vm_madvise.c
> > > index 52147f5eaaa0..4c0ffb100bcc 100644
> > > --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
> > > +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
> > > @@ -6,9 +6,12 @@
> > > #include "xe_vm_madvise.h"
> > >
> > > #include <linux/nospec.h>
> > > +#include <linux/mempool.h>
> > > +#include <linux/workqueue.h>
> > > #include <drm/xe_drm.h>
> > >
> > > #include "xe_bo.h"
> > > +#include "xe_macros.h"
> > > #include "xe_pat.h"
> > > #include "xe_pt.h"
> > > #include "xe_svm.h"
> > > @@ -500,3 +503,394 @@ int xe_vm_madvise_ioctl(struct drm_device
> > > *dev, void *data, struct drm_file *fil
> > > xe_vm_put(vm);
> > > return err;
> > > }
> > > +
> > > +/**
> > > + * struct xe_madvise_work_item - Work item for unmap processing
> > > + * @work: work_struct
> > > + * @vm: VM reference
> > > + * @pool: Mempool for recycling
> > > + * @start: Start address
> > > + * @end: End address
> > > + */
> > > +struct xe_madvise_work_item {
> > > + struct work_struct work;
> > > + struct xe_vm *vm;
> > > + mempool_t *pool;
> > Why mempool? Seems like we could just do kmalloc with correct gfp
> > flags.
>
>
> I tried kmalloc first, but ran into two issues:
> GFP_KERNEL — fails because MMU notifier callbacks must not block, and
> GFP_KERNEL can sleep waiting for memory reclaim.
> GFP_ATOMIC — triggers a circular lockdep warning: the MMU notifier
> holds
> mmu_notifier_invalidate_range_start, and GFP_ATOMIC internally tries
> to
> acquire fs_reclaim, which already depends on the MMU notifier lock.
>
> Agreed. mempool looks unnecessary here. I re-tested this with
> kmalloc(..., GFP_NOWAIT) and that avoids both blocking and the
> reclaim-related lockdep issue I saw with the earlier approach. I will
> switch to that and drop the pool in the next version.
Note that GFP_NOWAIT can only be used as a potential optimization in
case memory happens to be available. GFP_NOWAIT is very likely to fail
in a reclaim situation and should not be used unless there is a backup
path. We shouldn't really try to work around lockdep problems with GFP
flags.
/Thomas
> > > +
> > > +/**
> > > + * xe_vm_madvise_fini - Cleanup all madvise notifiers
> > > + * @vm: VM
> > > + *
> > > + * Tears down notifiers and drains workqueue. Safe if init
> > > partially failed.
> > > + * Order: closing flag → remove notifiers (SRCU sync) → drain wq
> > > → destroy.
> > > + */
> > > +void xe_vm_madvise_fini(struct xe_vm *vm)
> > > +{
> > > + struct xe_madvise_notifier *notifier, *next;
> > > + struct workqueue_struct *wq;
> > > + mempool_t *pool;
> > > + LIST_HEAD(tmp);
> > > +
> > > + atomic_set(&vm->svm.madvise_work.closing, 1);
> > > +
> > > + /*
> > > + * Detach notifiers under lock, then remove outside lock
> > > (SRCU sync can be slow).
> > > + * Splice avoids holding mutex across
> > > mmu_interval_notifier_remove() SRCU sync.
> > > + * Removing notifiers first (before drain) prevents new
> > > invalidate callbacks.
> > > + */
> > > + mutex_lock(&vm->svm.madvise_notifiers.lock);
> > > + list_splice_init(&vm->svm.madvise_notifiers.list, &tmp);
> > > + mutex_unlock(&vm->svm.madvise_notifiers.lock);
> > > +
> > > + /* Now remove notifiers without holding lock -
> > > mmu_interval_notifier_remove() SRCU-syncs */
> > > + list_for_each_entry_safe(notifier, next, &tmp, list) {
> > > + list_del(¬ifier->list);
> > > + mmu_interval_notifier_remove(¬ifier-
> > > >mmu_notifier);
> > > + xe_vm_put(notifier->vm);
> > > + kfree(notifier);
> > > + }
> > > +
> > > + /* Drain and destroy workqueue */
> > > + wq = xchg(&vm->svm.madvise_work.wq, NULL);
> > > + if (wq) {
> > > + drain_workqueue(wq);
> > Work items in wq call xe_madvise_work_func, which takes vm->lock in
> > write mode. If we try to drain here after the work item executing
> > xe_madvise_work_func has started or is queued, I think we could
> > deadlock. Lockdep should complain about this if you run a test that
> > triggers xe_madvise_work_func at least once — or at least it
> > should. If
> > it doesn’t, then workqueues likely have an issue in their lockdep
> > implementation as 'drain_workqueue' should touch its lockdep map
> > which
> > has tainted vm->lock (i.e., is outside of it).
> >
> > So perhaps call this function without vm->lock and take as need in
> > the
> > this function, then drop it drain the work queue, etc...
>
>
> Good catch. Draining the workqueue while holding vm->lock can
> deadlock against a worker that takes vm->lock. I fixed that by
> dropping vm->lock before calling xe_vm_madvise_fini(). In the
> reworked teardown path, drain_workqueue() runs with neither vm->lock
> nor the teardown semaphore held.
>
>
> >
> > > + destroy_workqueue(wq);
> > > + }
> > > +
> > > + pool = xchg(&vm->svm.madvise_work.pool, NULL);
> > > + if (pool)
> > > + mempool_destroy(pool);
> > > +}
> > > +
> > > +/**
> > > + * xe_vm_madvise_register_notifier_range - Register MMU notifier
> > > for address range
> > > + * @vm: VM
> > > + * @start: Start address (page-aligned)
> > > + * @end: End address (page-aligned)
> > > + *
> > > + * Registers interval notifier for munmap tracking. Uses
> > > addresses (not VMA pointers)
> > > + * to avoid UAF after dropping vm->lock. Deduplicates by range.
> > > + *
> > > + * Return: 0 on success, negative error code on failure
> > > + */
> > > +int xe_vm_madvise_register_notifier_range(struct xe_vm *vm, u64
> > > start, u64 end)
> > > +{
> > > + struct xe_madvise_notifier *notifier, *existing;
> > > + int err;
> > > +
> > I see this isn’t called under the vm->lock write lock. Is there a
> > reason
> > not to? I think taking it under the write lock would help with the
> > teardown sequence, since you wouldn’t be able to get here if
> > xe_vm_is_closed_or_banned were stable—and we wouldn’t enter this
> > function if that helper returned true.
>
>
> I can make the closed/banned check stable at the call site under
> vm->lock, but I don't think I can hold it across
> mmu_interval_notifier_insert() itself, since that may take mmap_lock
> internally. I'll restructure this so the state check happens under
> vm->lock, while the actual insert remains outside that lock.
>
> >
> > > + if (!IS_ALIGNED(start, PAGE_SIZE) || !IS_ALIGNED(end,
> > > PAGE_SIZE))
> > > + return -EINVAL;
> > > +
> > > + if (WARN_ON_ONCE(end <= start))
> > > + return -EINVAL;
> > > +
> > > + if (atomic_read(&vm->svm.madvise_work.closing))
> > > + return -ENOENT;
> > > +
> > > + if (!READ_ONCE(vm->svm.madvise_work.wq) ||
> > > + !READ_ONCE(vm->svm.madvise_work.pool))
> > > + return -ENOMEM;
> > > +
> > > + /* Check mm early to avoid allocation if it's missing */
> > > + if (!vm->svm.gpusvm.mm)
> > > + return -EINVAL;
> > > +
> > > + /* Dedupe: check if notifier exists for this range */
> > > + mutex_lock(&vm->svm.madvise_notifiers.lock);
> > If we had the vm->lock in write mode we could likely just drop
> > svm.madvise_notifiers.lock for now, but once we move to fine
> > grained
> > locking in page faults [1] we'd in fact need a dedicated lock. So
> > let's
> > keep this.
> >
> > [1]
> > https://patchwork.freedesktop.org/patch/707238/?series=162167&rev=2
>
>
> Agreed. We should keep a dedicated lock here.
>
> I do not think vm->lock can cover mmu_interval_notifier_insert()
> itself, since that path may take mmap_lock internally and would risk
> inverting the existing mmap_lock -> vm->lock ordering.
>
> So I will keep svm.madvise_notifiers.lock in place. That also lines
> up better with the planned fine-grained page-fault locking work.
>
> >
> > > + list_for_each_entry(existing, &vm-
> > > >svm.madvise_notifiers.list, list) {
> > > + if (existing->vma_start == start && existing-
> > > >vma_end == end) {
> > This is O(N) which typically isn't ideal. Better structure here?
> > mtree?
> > Does an mtree have its own locking so svm.madvise_notifiers.lock
> > could
> > just be dropped? I'd look into this.
>
>
> Agreed. I switched this over to a maple tree, so the exact-range
> lookup is no longer O(N). That also lets me drop the list walk in
> the duplicate check.
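>
> For reference, the exact-range dedupe then reduces to a single maple
> tree insert (a sketch; the tree field name is provisional):

```c
/*
 * mtree_insert_range() takes the tree's internal spinlock and fails
 * with -EEXIST if any part of [start, end - 1] is already occupied,
 * so the insert doubles as the duplicate check and the separate
 * O(N) list walk goes away.
 */
err = mtree_insert_range(&vm->svm.madvise_notifiers.tree,
			 start, end - 1, notifier, GFP_KERNEL);
if (err == -EEXIST)
	return 0;	/* range already tracked */
if (err)
	return err;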
>
> >
> > > + mutex_unlock(&vm-
> > > >svm.madvise_notifiers.lock);
> > > + return 0;
> > > + }
> > > + }
> > > + mutex_unlock(&vm->svm.madvise_notifiers.lock);
> > > +
> > > + notifier = kzalloc(sizeof(*notifier), GFP_KERNEL);
> > > + if (!notifier)
> > > + return -ENOMEM;
> > > +
> > > + notifier->vm = xe_vm_get(vm);
> > > + notifier->vma_start = start;
> > > + notifier->vma_end = end;
> > > + INIT_LIST_HEAD(¬ifier->list);
> > > +
> > > + err = mmu_interval_notifier_insert(¬ifier-
> > > >mmu_notifier,
> > > + vm->svm.gpusvm.mm,
> > > + start,
> > > + end - start,
> > > +
> > > &xe_madvise_notifier_ops);
> > > + if (err) {
> > > + xe_vm_put(notifier->vm);
> > > + kfree(notifier);
> > > + return err;
> > > + }
> > > +
> > > + /* Re-check closing to avoid teardown race */
> > > + if (unlikely(atomic_read(&vm-
> > > >svm.madvise_work.closing))) {
> > > + mmu_interval_notifier_remove(¬ifier-
> > > >mmu_notifier);
> > > + xe_vm_put(notifier->vm);
> > > + kfree(notifier);
> > > + return -ENOENT;
> > > + }
> > > +
> > > + /* Add to list - check again for concurrent registration
> > > race */
> > > + mutex_lock(&vm->svm.madvise_notifiers.lock);
> > If we had the vm->lock in write mode, we couldn't get concurrent
> > registrations.
> >
> > I likely have more comments, but I have enough concerns with the
> > locking
> > and structure in this patch that I’m going to pause reviewing the
> > series
> > until most of my comments are addressed. It’s hard to focus on
> > anything
> > else until we get these issues worked out.
>
>
> I think the main issue is exactly the locking story around notifier
> insert/remove. We cannot hold vm->lock across
> mmu_interval_notifier_insert() because that may take mmap_lock
> internally and invert the existing ordering.
>
> I have reworked this to simplify the teardown/registration side: drop
> the atomic/READ_ONCE/xchg handling, use a single teardown rwsem, and
> replace the list-based dedupe with a maple tree.
> I will send a cleaned-up version with the locking documented more
> clearly. Sorry for the churn here.
>
>
> Thanks,
> Arvind
>
> >
> > Matt
> >
> > > + list_for_each_entry(existing, &vm-
> > > >svm.madvise_notifiers.list, list) {
> > > + if (existing->vma_start == start && existing-
> > > >vma_end == end) {
> > > + mutex_unlock(&vm-
> > > >svm.madvise_notifiers.lock);
> > > + mmu_interval_notifier_remove(¬ifier-
> > > >mmu_notifier);
> > > + xe_vm_put(notifier->vm);
> > > + kfree(notifier);
> > > + return 0;
> > > + }
> > > + }
> > > + list_add(¬ifier->list, &vm-
> > > >svm.madvise_notifiers.list);
> > > + mutex_unlock(&vm->svm.madvise_notifiers.lock);
> > > +
> > > + return 0;
> > > +}
> > > diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.h
> > > b/drivers/gpu/drm/xe/xe_vm_madvise.h
> > > index b0e1fc445f23..ba9cd7912113 100644
> > > --- a/drivers/gpu/drm/xe/xe_vm_madvise.h
> > > +++ b/drivers/gpu/drm/xe/xe_vm_madvise.h
> > > @@ -6,10 +6,18 @@
> > > #ifndef _XE_VM_MADVISE_H_
> > > #define _XE_VM_MADVISE_H_
> > >
> > > +#include <linux/types.h>
> > > +
> > > struct drm_device;
> > > struct drm_file;
> > > +struct xe_vm;
> > > +struct xe_vma;
> > >
> > > int xe_vm_madvise_ioctl(struct drm_device *dev, void *data,
> > > struct drm_file *file);
> > >
> > > +int xe_vm_madvise_init(struct xe_vm *vm);
> > > +void xe_vm_madvise_fini(struct xe_vm *vm);
> > > +int xe_vm_madvise_register_notifier_range(struct xe_vm *vm, u64
> > > start, u64 end);
> > > +
> > > #endif
> > > diff --git a/drivers/gpu/drm/xe/xe_vm_types.h
> > > b/drivers/gpu/drm/xe/xe_vm_types.h
> > > index 29ff63503d4c..eb978995000c 100644
> > > --- a/drivers/gpu/drm/xe/xe_vm_types.h
> > > +++ b/drivers/gpu/drm/xe/xe_vm_types.h
> > > @@ -12,6 +12,7 @@
> > >
> > > #include <linux/dma-resv.h>
> > > #include <linux/kref.h>
> > > +#include <linux/mempool.h>
> > > #include <linux/mmu_notifier.h>
> > > #include <linux/scatterlist.h>
> > >
> > > @@ -29,6 +30,26 @@ struct xe_user_fence;
> > > struct xe_vm;
> > > struct xe_vm_pgtable_update_op;
> > >
> > > +/**
> > > + * struct xe_madvise_notifier - CPU madvise notifier for memory
> > > attribute reset
> > > + *
> > > + * Tracks CPU munmap operations on SVM CPU address mirror VMAs.
> > > + * When userspace unmaps CPU memory, this notifier processes
> > > attribute reset
> > > + * via work queue to avoid circular locking (can't take vm->lock
> > > in callback).
> > > + */
> > > +struct xe_madvise_notifier {
> > > + /** @mmu_notifier: MMU interval notifier */
> > > + struct mmu_interval_notifier mmu_notifier;
> > > + /** @vm: VM this notifier belongs to (holds reference
> > > via xe_vm_get) */
> > > + struct xe_vm *vm;
> > > + /** @vma_start: Start address of VMA being tracked */
> > > + u64 vma_start;
> > > + /** @vma_end: End address of VMA being tracked */
> > > + u64 vma_end;
> > > + /** @list: Link in vm->svm.madvise_notifiers.list */
> > > + struct list_head list;
> > > +};
> > > +
> > > #if IS_ENABLED(CONFIG_DRM_XE_DEBUG)
> > > #define TEST_VM_OPS_ERROR
> > > #define FORCE_OP_ERROR BIT(31)
> > > @@ -212,6 +233,26 @@ struct xe_vm {
> > > struct xe_pagemap
> > > *pagemaps[XE_MAX_TILES_PER_DEVICE];
> > > /** @svm.peer: Used for pagemap connectivity
> > > computations. */
> > > struct drm_pagemap_peer peer;
> > > +
> > > + /**
> > > + * @svm.madvise_notifiers: Active CPU madvise
> > > notifiers
> > > + */
> > > + struct {
> > > + /** @svm.madvise_notifiers.list: List of
> > > active notifiers */
> > > + struct list_head list;
> > > + /** @svm.madvise_notifiers.lock:
> > > Protects notifiers list */
> > > + struct mutex lock;
> > > + } madvise_notifiers;
> > > +
> > > + /** @svm.madvise_work: Workqueue for async
> > > munmap processing */
> > > + struct {
> > > + /** @svm.madvise_work.wq: Workqueue */
> > > + struct workqueue_struct *wq;
> > > + /** @svm.madvise_work.pool: Mempool for
> > > work items */
> > > + mempool_t *pool;
> > > + /** @svm.madvise_work.closing: Teardown
> > > flag */
> > > + atomic_t closing;
> > > + } madvise_work;
> > > } svm;
> > >
> > > struct xe_device *xe;
> > > --
> > > 2.43.0
^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC 4/7] drm/xe/vm: Add madvise autoreset interval notifier worker infrastructure
2026-03-09 9:32 ` Thomas Hellström
@ 2026-03-11 6:34 ` Yadav, Arvind
0 siblings, 0 replies; 19+ messages in thread
From: Yadav, Arvind @ 2026-03-11 6:34 UTC (permalink / raw)
To: Thomas Hellström, Matthew Brost; +Cc: intel-xe, himal.prasad.ghimiray
On 09-03-2026 15:02, Thomas Hellström wrote:
> On Mon, 2026-03-09 at 12:37 +0530, Yadav, Arvind wrote:
>> On 26-02-2026 05:04, Matthew Brost wrote:
>>> On Thu, Feb 19, 2026 at 02:43:09PM +0530, Arvind Yadav wrote:
>>>> MADVISE_AUTORESET needs to reset VMA attributes when userspace
>>>> unmaps
>>>> CPU-only ranges, but the MMU invalidate callback cannot take vm-
>>>>> lock
>>>> due to lock ordering (mmap_lock is already held).
>>>>
>>>> Add mmu_interval_notifier that queues work items for
>>>> MMU_NOTIFY_UNMAP
>>>> events. The worker runs under vm->lock and resets attributes for
>>>> VMAs
>>>> still marked XE_VMA_CPU_AUTORESET_ACTIVE (i.e., not yet GPU-
>>>> touched).
>>>>
>>>> Work items are allocated from a mempool to handle atomic context
>>>> in the
>>>> callback. The notifier is deactivated when GPU touches the VMA.
>>>>
>>>> Cc: Matthew Brost<matthew.brost@intel.com>
>>>> Cc: Thomas Hellström<thomas.hellstrom@linux.intel.com>
>>>> Cc: Himal Prasad Ghimiray<himal.prasad.ghimiray@intel.com>
>>>> Signed-off-by: Arvind Yadav<arvind.yadav@intel.com>
>>>> ---
>>>> drivers/gpu/drm/xe/xe_vm_madvise.c | 394
>>>> +++++++++++++++++++++++++++++
>>>> drivers/gpu/drm/xe/xe_vm_madvise.h | 8 +
>>>> drivers/gpu/drm/xe/xe_vm_types.h | 41 +++
>>>> 3 files changed, 443 insertions(+)
>>>>
>>>> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c
>>>> b/drivers/gpu/drm/xe/xe_vm_madvise.c
>>>> index 52147f5eaaa0..4c0ffb100bcc 100644
>>>> --- a/drivers/gpu/drm/xe/xe_vm_madvise.c
>>>> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
>>>> @@ -6,9 +6,12 @@
>>>> #include "xe_vm_madvise.h"
>>>>
>>>> #include <linux/nospec.h>
>>>> +#include <linux/mempool.h>
>>>> +#include <linux/workqueue.h>
>>>> #include <drm/xe_drm.h>
>>>>
>>>> #include "xe_bo.h"
>>>> +#include "xe_macros.h"
>>>> #include "xe_pat.h"
>>>> #include "xe_pt.h"
>>>> #include "xe_svm.h"
>>>> @@ -500,3 +503,394 @@ int xe_vm_madvise_ioctl(struct drm_device
>>>> *dev, void *data, struct drm_file *fil
>>>> xe_vm_put(vm);
>>>> return err;
>>>> }
>>>> +
>>>> +/**
>>>> + * struct xe_madvise_work_item - Work item for unmap processing
>>>> + * @work: work_struct
>>>> + * @vm: VM reference
>>>> + * @pool: Mempool for recycling
>>>> + * @start: Start address
>>>> + * @end: End address
>>>> + */
>>>> +struct xe_madvise_work_item {
>>>> + struct work_struct work;
>>>> + struct xe_vm *vm;
>>>> + mempool_t *pool;
>>> Why mempool? Seems like we could just do kmalloc with correct gfp
>>> flags.
>>
>> I tried kmalloc first, but ran into two issues:
>>
>> - GFP_KERNEL fails because MMU notifier callbacks must not block,
>>   and GFP_KERNEL can sleep waiting for memory reclaim.
>> - GFP_ATOMIC triggers a circular lockdep warning: the MMU notifier
>>   holds mmu_notifier_invalidate_range_start, and GFP_ATOMIC
>>   internally tries to acquire fs_reclaim, which already depends on
>>   the MMU notifier lock.
>>
>> Agreed, the mempool looks unnecessary here. I re-tested this with
>> kmalloc(..., GFP_NOWAIT) and that avoids both blocking and the
>> reclaim-related lockdep issue I saw with the earlier approach. I
>> will switch to that and drop the pool in the next version.
> Note that GFP_NOWAIT can only be used as a potential optimization in
> case memory happens to be available. GFP_NOWAIT is very likely to fail
> in a reclaim situation and should not be used unless there is a backup
> path. We shouldn't really try to work around lockdep problems with GFP
> flags.
Agreed. I will redesign to avoid allocation in the MMU notifier context
entirely rather than trying to work around it with GFP flags or mempools.
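
One possible shape (hypothetical names, not a committed design): the VM
owns a single persistent work item plus a coalesced pending interval
protected by a spinlock, so the invalidate callback only records the
range and kicks the work, with no allocation at all:

```c
struct xe_madvise_work {
	struct work_struct work;
	spinlock_t lock;	/* protects the fields below */
	u64 start, end;		/* coalesced pending interval */
	bool pending;
};

/* In the invalidate callback: no allocation, just record and kick. */
spin_lock(&mw->lock);
if (mw->pending) {
	mw->start = min(mw->start, start);
	mw->end = max(mw->end, end);
} else {
	mw->start = start;
	mw->end = end;
	mw->pending = true;
}
spin_unlock(&mw->lock);
queue_work(wq, &mw->work);
```

Coalescing over-approximates the unmapped range, which seems acceptable
here since the worker is best-effort and only resets attributes.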
Thanks,
Arvind
> /Thomas
>
>
>
>>
>>>> + u64 start;
>>>> + u64 end;
>>>> +};
>>>> +
>>>> +static void xe_vma_set_default_attributes(struct xe_vma *vma)
>>>> +{
>>>> + vma->attr.preferred_loc.devmem_fd =
>>>> DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE;
>>>> + vma->attr.preferred_loc.migration_policy =
>>>> DRM_XE_MIGRATE_ALL_PAGES;
>>>> + vma->attr.pat_index = vma->attr.default_pat_index;
>>>> + vma->attr.atomic_access = DRM_XE_ATOMIC_UNDEFINED;
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_vm_madvise_process_unmap - Process munmap for all VMAs in
>>>> range
>>>> + * @vm: VM
>>>> + * @start: Start of unmap range
>>>> + * @end: End of unmap range
>>>> + *
>>>> + * Processes all VMAs overlapping the unmap range. An unmap can
>>>> span multiple
>>>> + * VMAs, so we need to loop and process each segment.
>>>> + *
>>>> + * Return: 0 on success, negative error otherwise
>>>> + */
>>>> +static int xe_vm_madvise_process_unmap(struct xe_vm *vm, u64
>>>> start, u64 end)
>>>> +{
>>>> + u64 addr = start;
>>>> + int err;
>>>> +
>>>> + lockdep_assert_held_write(&vm->lock);
>>>> +
>>>> + if (xe_vm_is_closed_or_banned(vm))
>>>> + return 0;
>>>> +
>>>> + while (addr < end) {
>>>> + struct xe_vma *vma;
>>>> + u64 seg_start, seg_end;
>>>> + bool has_default_attr;
>>>> +
>>>> + vma = xe_vm_find_overlapping_vma(vm, addr, end);
>>>> + if (!vma)
>>>> + break;
>>>> +
>>>> + /* Skip GPU-touched VMAs - SVM handles them */
>>>> + if (!xe_vma_has_cpu_autoreset_active(vma)) {
>>>> + addr = xe_vma_end(vma);
>>>> + continue;
>>>> + }
>>>> +
>>>> + has_default_attr =
>>>> xe_vma_has_default_mem_attrs(vma);
>>>> + seg_start = max(addr, xe_vma_start(vma));
>>>> + seg_end = min(end, xe_vma_end(vma));
>>>> +
>>>> + /* Expand for merging if VMA already has default
>>>> attrs */
>>>> + if (has_default_attr &&
>>>> + xe_vma_start(vma) >= start &&
>>>> + xe_vma_end(vma) <= end) {
>>>> + seg_start = xe_vma_start(vma);
>>>> + seg_end = xe_vma_end(vma);
>>>> + xe_vm_find_cpu_addr_mirror_vma_range(vm,
>>>> &seg_start, &seg_end);
>>>> + } else if (xe_vma_start(vma) == seg_start &&
>>>> xe_vma_end(vma) == seg_end) {
>>>> + xe_vma_set_default_attributes(vma);
>>>> + addr = seg_end;
>>>> + continue;
>>>> + }
>>>> +
>>>> + if (xe_vma_start(vma) == seg_start &&
>>>> + xe_vma_end(vma) == seg_end &&
>>>> + has_default_attr) {
>>>> + addr = seg_end;
>>>> + continue;
>>>> + }
>>>> +
>>>> + err = xe_vm_alloc_cpu_addr_mirror_vma(vm,
>>>> seg_start, seg_end - seg_start);
>>>> + if (err) {
>>>> + if (err == -ENOENT) {
>>>> + addr = seg_end;
>>>> + continue;
>>>> + }
>>>> + return err;
>>>> + }
>>>> +
>>>> + addr = seg_end;
>>>> + }
>>>> +
>>>> + return 0;
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_madvise_work_func - Worker to process unmap
>>>> + * @w: work_struct
>>>> + *
>>>> + * Processes a single unmap by taking vm->lock and calling the
>>>> helper.
>>>> + * Each unmap has its own work item, so no interval loss.
>>>> + */
>>>> +static void xe_madvise_work_func(struct work_struct *w)
>>>> +{
>>>> + struct xe_madvise_work_item *item = container_of(w,
>>>> struct xe_madvise_work_item, work);
>>>> + struct xe_vm *vm = item->vm;
>>>> + int err;
>>>> +
>>>> + down_write(&vm->lock);
>>>> + err = xe_vm_madvise_process_unmap(vm, item->start, item-
>>>>> end);
>>>> + if (err)
>>>> + drm_warn(&vm->xe->drm,
>>>> + "madvise autoreset failed [%#llx-
>>>> %#llx]: %d\n",
>>>> + item->start, item->end, err);
>>>> + /*
>>>> + * Best-effort: Log failure and continue.
>>>> + * Core correctness from CPU_AUTORESET_ACTIVE flag.
>>>> + */
>>>> + up_write(&vm->lock);
>>>> + xe_vm_put(vm);
>>>> + mempool_free(item, item->pool);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_madvise_notifier_callback - MMU notifier callback for CPU
>>>> munmap
>>>> + * @mni: mmu_interval_notifier
>>>> + * @range: mmu_notifier_range
>>>> + * @cur_seq: current sequence number
>>>> + *
>>>> + * Queues work to reset VMA attributes. Cannot take vm->lock
>>>> (circular locking),
>>>> + * so uses workqueue. GFP_ATOMIC allocation may fail; drops
>>>> event if so.
>>>> + *
>>>> + * Return: true (never blocks)
>>>> + */
>>>> +static bool xe_madvise_notifier_callback(struct
>>>> mmu_interval_notifier *mni,
>>>> + const struct
>>>> mmu_notifier_range *range,
>>>> + unsigned long cur_seq)
>>>> +{
>>>> + struct xe_madvise_notifier *notifier =
>>>> + container_of(mni, struct xe_madvise_notifier,
>>>> mmu_notifier);
>>>> + struct xe_vm *vm = notifier->vm;
>>>> + struct xe_madvise_work_item *item;
>>>> + struct workqueue_struct *wq;
>>>> + mempool_t *pool;
>>>> + u64 start, end;
>>>> +
>>>> + if (range->event != MMU_NOTIFY_UNMAP)
>>>> + return true;
>>>> +
>>>> + /*
>>>> + * Best-effort: skip in non-blockable contexts to avoid
>>>> building up work.
>>>> + * Correctness does not rely on this notifier -
>>>> CPU_AUTORESET_ACTIVE flag
>>>> + * prevents GPU PTE zaps on CPU-only VMAs in the zap
>>>> path.
>>>> + */
>>>> + if (!mmu_notifier_range_blockable(range))
>>>> + return true;
>>>> +
>>>> + /* Consume seq (interval-notifier convention) */
>>>> + mmu_interval_set_seq(mni, cur_seq);
>>>> +
>>>> + /* Best-effort: core correctness from
>>>> CPU_AUTORESET_ACTIVE check in zap path */
>>>> +
>>>> + start = max_t(u64, range->start, notifier->vma_start);
>>>> + end = min_t(u64, range->end, notifier->vma_end);
>>>> +
>>>> + if (start >= end)
>>>> + return true;
>>>> +
>>>> + pool = READ_ONCE(vm->svm.madvise_work.pool);
>>>> + wq = READ_ONCE(vm->svm.madvise_work.wq);
>>>> + if (!pool || !wq || atomic_read(&vm-
>>>>> svm.madvise_work.closing))
>>> Can you explain the use of READ_ONCE, xchg, and atomics? At first
>>> glance
>>> it seems unnecessary or overly complicated. Let’s start with the
>>> problem
>>> this is trying to solve and see if we can find a simpler approach.
>>>
>>> My initial thought is a VM-wide rwsem, marked as reclaim-safe. The
>>> notifiers would take it in read mode to check whether the VM is
>>> tearing
>>> down, and the fini path would take it in write mode to initiate
>>> teardown...
>>
>> Agreed. This got more complicated than it needed to be. I reworked
>> it to use a VM-wide rw_semaphore for teardown serialization, so the
>> atomic_t, READ_ONCE(), and xchg() all go away.
>>
>>>> + return true;
>>>> +
>>>> + /* GFP_ATOMIC to avoid fs_reclaim lockdep in notifier
>>>> context */
>>>> + item = mempool_alloc(pool, GFP_ATOMIC);
>>> Again, probably just use kmalloc. Also s/GFP_ATOMIC/GFP_NOWAIT. We
>>> really shouldn’t be using GFP_ATOMIC in Xe per the DRM docs unless
>>> a
>>> failed memory allocation would take down the device. We likely
>>> abuse
>>> GFP_ATOMIC in several places that we should clean up, but in this
>>> case
>>> it’s pretty clear GFP_NOWAIT is what we want, as failure isn’t
>>> fatal—just sub-optimal.
>>
>> Agreed. This should be GFP_NOWAIT, not GFP_ATOMIC. Allocation
>> failure here is non-fatal, so GFP_NOWAIT is the right fit. I will
>> switch to kmalloc(..., GFP_NOWAIT) and drop the mempool.
>>
>>>> + if (!item)
>>>> + return true;
>>>> +
>>>> + memset(item, 0, sizeof(*item));
>>>> + INIT_WORK(&item->work, xe_madvise_work_func);
>>>> + item->vm = xe_vm_get(vm);
>>>> + item->pool = pool;
>>>> + item->start = start;
>>>> + item->end = end;
>>>> +
>>>> + if (unlikely(atomic_read(&vm-
>>>>> svm.madvise_work.closing))) {
>>> Same as above the atomic usage...
>>
>> Noted, will remove.
>>
>>>> + xe_vm_put(item->vm);
>>>> + mempool_free(item, pool);
>>>> + return true;
>>>> + }
>>>> +
>>>> + queue_work(wq, &item->work);
>>>> +
>>>> + return true;
>>>> +}
>>>> +
>>>> +static const struct mmu_interval_notifier_ops
>>>> xe_madvise_notifier_ops = {
>>>> + .invalidate = xe_madvise_notifier_callback,
>>>> +};
>>>> +
>>>> +/**
>>>> + * xe_vm_madvise_init - Initialize madvise notifier
>>>> infrastructure
>>>> + * @vm: VM
>>>> + *
>>>> + * Sets up workqueue and mempool for async munmap processing.
>>>> + *
>>>> + * Return: 0 on success, -ENOMEM on failure
>>>> + */
>>>> +int xe_vm_madvise_init(struct xe_vm *vm)
>>>> +{
>>>> + struct workqueue_struct *wq;
>>>> + mempool_t *pool;
>>>> +
>>>> + /* Always initialize list and mutex - fini may be called
>>>> on partial init */
>>>> + INIT_LIST_HEAD(&vm->svm.madvise_notifiers.list);
>>>> + mutex_init(&vm->svm.madvise_notifiers.lock);
>>>> +
>>>> + wq = READ_ONCE(vm->svm.madvise_work.wq);
>>>> + pool = READ_ONCE(vm->svm.madvise_work.pool);
>>>> +
>>>> + /* Guard against double initialization and detect
>>>> partial init */
>>>> + if (wq || pool) {
>>>> + XE_WARN_ON(!wq || !pool);
>>>> + return 0;
>>>> + }
>>>> +
>>>> + WRITE_ONCE(vm->svm.madvise_work.wq, NULL);
>>>> + WRITE_ONCE(vm->svm.madvise_work.pool, NULL);
>>>> + atomic_set(&vm->svm.madvise_work.closing, 1);
>>>> +
>>>> + /*
>>>> + * WQ_UNBOUND: best-effort optimization, not critical
>>>> path.
>>>> + * No WQ_MEM_RECLAIM: worker allocates memory (VMA ops
>>>> with GFP_KERNEL).
>>>> + * Not on reclaim path - merely resets attributes after
>>>> munmap.
>>>> + */
>>>> + vm->svm.madvise_work.wq = alloc_workqueue("xe_madvise",
>>>> WQ_UNBOUND, 0);
>>>> + if (!vm->svm.madvise_work.wq)
>>>> + return -ENOMEM;
>>>> +
>>>> + /* Mempool for GFP_ATOMIC allocs in notifier callback */
>>>> + vm->svm.madvise_work.pool =
>>>> + mempool_create_kmalloc_pool(64,
>>>> + sizeof(struct
>>>> xe_madvise_work_item));
>>>> + if (!vm->svm.madvise_work.pool) {
>>>> + destroy_workqueue(vm->svm.madvise_work.wq);
>>>> + WRITE_ONCE(vm->svm.madvise_work.wq, NULL);
>>>> + return -ENOMEM;
>>>> + }
>>>> +
>>>> + atomic_set(&vm->svm.madvise_work.closing, 0);
>>>> +
>>>> + return 0;
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_vm_madvise_fini - Cleanup all madvise notifiers
>>>> + * @vm: VM
>>>> + *
>>>> + * Tears down notifiers and drains workqueue. Safe if init
>>>> partially failed.
>>>> + * Order: closing flag → remove notifiers (SRCU sync) → drain wq
>>>> → destroy.
>>>> + */
>>>> +void xe_vm_madvise_fini(struct xe_vm *vm)
>>>> +{
>>>> + struct xe_madvise_notifier *notifier, *next;
>>>> + struct workqueue_struct *wq;
>>>> + mempool_t *pool;
>>>> + LIST_HEAD(tmp);
>>>> +
>>>> + atomic_set(&vm->svm.madvise_work.closing, 1);
>>>> +
>>>> + /*
>>>> + * Detach notifiers under lock, then remove outside lock
>>>> (SRCU sync can be slow).
>>>> + * Splice avoids holding mutex across
>>>> mmu_interval_notifier_remove() SRCU sync.
>>>> + * Removing notifiers first (before drain) prevents new
>>>> invalidate callbacks.
>>>> + */
>>>> + mutex_lock(&vm->svm.madvise_notifiers.lock);
>>>> + list_splice_init(&vm->svm.madvise_notifiers.list, &tmp);
>>>> + mutex_unlock(&vm->svm.madvise_notifiers.lock);
>>>> +
>>>> + /* Now remove notifiers without holding lock -
>>>> mmu_interval_notifier_remove() SRCU-syncs */
>>>> + list_for_each_entry_safe(notifier, next, &tmp, list) {
>>>> + list_del(¬ifier->list);
>>>> + mmu_interval_notifier_remove(¬ifier-
>>>>> mmu_notifier);
>>>> + xe_vm_put(notifier->vm);
>>>> + kfree(notifier);
>>>> + }
>>>> +
>>>> + /* Drain and destroy workqueue */
>>>> + wq = xchg(&vm->svm.madvise_work.wq, NULL);
>>>> + if (wq) {
>>>> + drain_workqueue(wq);
>>> Work items in wq call xe_madvise_work_func, which takes vm->lock in
>>> write mode. If we try to drain here after the work item executing
>>> xe_madvise_work_func has started or is queued, I think we could
>>> deadlock. Lockdep should complain about this if you run a test that
>>> triggers xe_madvise_work_func at least once — or at least it
>>> should. If
>>> it doesn’t, then workqueues likely have an issue in their lockdep
>>> implementation as 'drain_workqueue' should touch its lockdep map
>>> which
>>> has tainted vm->lock (i.e., is outside of it).
>>>
>>> So perhaps call this function without vm->lock and take as need in
>>> the
>>> this function, then drop it drain the work queue, etc...
>>
>> Good catch. Draining the workqueue while holding vm->lock can
>> deadlock against a worker that takes vm->lock. I fixed that by
>> dropping vm->lock before calling xe_vm_madvise_fini(). In the
>> reworked teardown path, drain_workqueue() runs with neither
>> vm->lock nor the teardown semaphore held.
>>
>>
>>>> + destroy_workqueue(wq);
>>>> + }
>>>> +
>>>> + pool = xchg(&vm->svm.madvise_work.pool, NULL);
>>>> + if (pool)
>>>> + mempool_destroy(pool);
>>>> +}
>>>> +
>>>> +/**
>>>> + * xe_vm_madvise_register_notifier_range - Register MMU notifier
>>>> for address range
>>>> + * @vm: VM
>>>> + * @start: Start address (page-aligned)
>>>> + * @end: End address (page-aligned)
>>>> + *
>>>> + * Registers interval notifier for munmap tracking. Uses
>>>> addresses (not VMA pointers)
>>>> + * to avoid UAF after dropping vm->lock. Deduplicates by range.
>>>> + *
>>>> + * Return: 0 on success, negative error code on failure
>>>> + */
>>>> +int xe_vm_madvise_register_notifier_range(struct xe_vm *vm, u64
>>>> start, u64 end)
>>>> +{
>>>> + struct xe_madvise_notifier *notifier, *existing;
>>>> + int err;
>>>> +
>>> I see this isn’t called under the vm->lock write lock. Is there a
>>> reason
>>> not to? I think taking it under the write lock would help with the
>>> teardown sequence, since you wouldn’t be able to get here if
>>> xe_vm_is_closed_or_banned were stable—and we wouldn’t enter this
>>> function if that helper returned true.
>>
>> I can make the closed/banned check stable at the call site under
>> vm->lock, but I don't think I can hold it across
>> mmu_interval_notifier_insert() itself, since that may take
>> mmap_lock internally. I'll restructure this so the state check
>> happens under vm->lock, while the actual insert remains outside
>> that lock.
>>>> + if (!IS_ALIGNED(start, PAGE_SIZE) || !IS_ALIGNED(end,
>>>> PAGE_SIZE))
>>>> + return -EINVAL;
>>>> +
>>>> + if (WARN_ON_ONCE(end <= start))
>>>> + return -EINVAL;
>>>> +
>>>> + if (atomic_read(&vm->svm.madvise_work.closing))
>>>> + return -ENOENT;
>>>> +
>>>> + if (!READ_ONCE(vm->svm.madvise_work.wq) ||
>>>> + !READ_ONCE(vm->svm.madvise_work.pool))
>>>> + return -ENOMEM;
>>>> +
>>>> + /* Check mm early to avoid allocation if it's missing */
>>>> + if (!vm->svm.gpusvm.mm)
>>>> + return -EINVAL;
>>>> +
>>>> + /* Dedupe: check if notifier exists for this range */
>>>> + mutex_lock(&vm->svm.madvise_notifiers.lock);
>>> If we had the vm->lock in write mode we could likely just drop
>>> svm.madvise_notifiers.lock for now, but once we move to fine
>>> grained
>>> locking in page faults [1] we'd in fact need a dedicated lock. So
>>> let's
>>> keep this.
>>>
>>> [1]
>>> https://patchwork.freedesktop.org/patch/707238/?series=162167&rev=2
>>
>> Agreed. We should keep a dedicated lock here.
>>
>> I do not think vm->lock can cover mmu_interval_notifier_insert()
>> itself, since that path may take mmap_lock internally and would risk
>> inverting the existing mmap_lock -> vm->lock ordering.
>>
>> So I will keep svm.madvise_notifiers.lock in place. That also lines
>> up better with the planned fine-grained page-fault locking work.
>>
>>>> + list_for_each_entry(existing, &vm->svm.madvise_notifiers.list, list) {
>>>> + if (existing->vma_start == start && existing->vma_end == end) {
>>> This is O(N) which typically isn't ideal. Better structure here?
>>> mtree?
>>> Does an mtree have its own locking so svm.madvise_notifiers.lock
>>> could
>>> just be dropped? I'd look into this.
>>
>> Agreed. I switched this over to a maple tree, so the exact-range
>> lookup
>> is no longer O(N). That also lets me drop the list walk in the
>> duplicate
>> check.
>>
>>>> + mutex_unlock(&vm->svm.madvise_notifiers.lock);
>>>> + return 0;
>>>> + }
>>>> + }
>>>> + mutex_unlock(&vm->svm.madvise_notifiers.lock);
>>>> +
>>>> + notifier = kzalloc(sizeof(*notifier), GFP_KERNEL);
>>>> + if (!notifier)
>>>> + return -ENOMEM;
>>>> +
>>>> + notifier->vm = xe_vm_get(vm);
>>>> + notifier->vma_start = start;
>>>> + notifier->vma_end = end;
>>>> + INIT_LIST_HEAD(&notifier->list);
>>>> +
>>>> + err = mmu_interval_notifier_insert(&notifier->mmu_notifier,
>>>> + vm->svm.gpusvm.mm,
>>>> + start,
>>>> + end - start,
>>>> + &xe_madvise_notifier_ops);
>>>> + if (err) {
>>>> + xe_vm_put(notifier->vm);
>>>> + kfree(notifier);
>>>> + return err;
>>>> + }
>>>> +
>>>> + /* Re-check closing to avoid teardown race */
>>>> + if (unlikely(atomic_read(&vm->svm.madvise_work.closing))) {
>>>> + mmu_interval_notifier_remove(&notifier->mmu_notifier);
>>>> + xe_vm_put(notifier->vm);
>>>> + kfree(notifier);
>>>> + return -ENOENT;
>>>> + }
>>>> +
>>>> + /* Add to list - check again for concurrent registration race */
>>>> + mutex_lock(&vm->svm.madvise_notifiers.lock);
>>> If we had the vm->lock in write mode, we couldn't get concurrent
>>> registrations.
>>>
>>> I likely have more comments, but I have enough concerns with the
>>> locking
>>> and structure in this patch that I’m going to pause reviewing the
>>> series
>>> until most of my comments are addressed. It’s hard to focus on
>>> anything
>>> else until we get these issues worked out.
>>
>> I think the main issue is exactly the locking story around notifier
>> insert/remove. We cannot hold vm->lock across
>> mmu_interval_notifier_insert() because that may take mmap_lock
>> internally and invert the existing ordering.
>>
>> I have reworked this to simplify the teardown/registration side: drop
>> the atomic/READ_ONCE/xchg handling, use a single teardown |rwsem|,
>> and
>> replace the list-based dedupe with a maple tree.
>> I will send a cleaned-up version with the locking documented more
>> clearly. Sorry for the churn here.
>>
>>
>> Thanks,
>> Arvind
>>
>>> Matt
>>>
>>>> + list_for_each_entry(existing, &vm->svm.madvise_notifiers.list, list) {
>>>> + if (existing->vma_start == start && existing->vma_end == end) {
>>>> + mutex_unlock(&vm->svm.madvise_notifiers.lock);
>>>> + mmu_interval_notifier_remove(&notifier->mmu_notifier);
>>>> + xe_vm_put(notifier->vm);
>>>> + kfree(notifier);
>>>> + return 0;
>>>> + }
>>>> + }
>>>> + list_add(&notifier->list, &vm->svm.madvise_notifiers.list);
>>>> + mutex_unlock(&vm->svm.madvise_notifiers.lock);
>>>> +
>>>> + return 0;
>>>> +}
>>>> diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.h
>>>> b/drivers/gpu/drm/xe/xe_vm_madvise.h
>>>> index b0e1fc445f23..ba9cd7912113 100644
>>>> --- a/drivers/gpu/drm/xe/xe_vm_madvise.h
>>>> +++ b/drivers/gpu/drm/xe/xe_vm_madvise.h
>>>> @@ -6,10 +6,18 @@
>>>> #ifndef _XE_VM_MADVISE_H_
>>>> #define _XE_VM_MADVISE_H_
>>>>
>>>> +#include <linux/types.h>
>>>> +
>>>> struct drm_device;
>>>> struct drm_file;
>>>> +struct xe_vm;
>>>> +struct xe_vma;
>>>>
>>>> int xe_vm_madvise_ioctl(struct drm_device *dev, void *data,
>>>> struct drm_file *file);
>>>>
>>>> +int xe_vm_madvise_init(struct xe_vm *vm);
>>>> +void xe_vm_madvise_fini(struct xe_vm *vm);
>>>> +int xe_vm_madvise_register_notifier_range(struct xe_vm *vm, u64 start, u64 end);
>>>> +
>>>> #endif
>>>> diff --git a/drivers/gpu/drm/xe/xe_vm_types.h
>>>> b/drivers/gpu/drm/xe/xe_vm_types.h
>>>> index 29ff63503d4c..eb978995000c 100644
>>>> --- a/drivers/gpu/drm/xe/xe_vm_types.h
>>>> +++ b/drivers/gpu/drm/xe/xe_vm_types.h
>>>> @@ -12,6 +12,7 @@
>>>>
>>>> #include <linux/dma-resv.h>
>>>> #include <linux/kref.h>
>>>> +#include <linux/mempool.h>
>>>> #include <linux/mmu_notifier.h>
>>>> #include <linux/scatterlist.h>
>>>>
>>>> @@ -29,6 +30,26 @@ struct xe_user_fence;
>>>> struct xe_vm;
>>>> struct xe_vm_pgtable_update_op;
>>>>
>>>> +/**
>>>> + * struct xe_madvise_notifier - CPU madvise notifier for memory attribute reset
>>>> + *
>>>> + * Tracks CPU munmap operations on SVM CPU address mirror VMAs.
>>>> + * When userspace unmaps CPU memory, this notifier processes attribute reset
>>>> + * via work queue to avoid circular locking (can't take vm->lock in callback).
>>>> + */
>>>> +struct xe_madvise_notifier {
>>>> + /** @mmu_notifier: MMU interval notifier */
>>>> + struct mmu_interval_notifier mmu_notifier;
>>>> + /** @vm: VM this notifier belongs to (holds reference via xe_vm_get) */
>>>> + struct xe_vm *vm;
>>>> + /** @vma_start: Start address of VMA being tracked */
>>>> + u64 vma_start;
>>>> + /** @vma_end: End address of VMA being tracked */
>>>> + u64 vma_end;
>>>> + /** @list: Link in vm->svm.madvise_notifiers.list */
>>>> + struct list_head list;
>>>> +};
>>>> +
>>>> #if IS_ENABLED(CONFIG_DRM_XE_DEBUG)
>>>> #define TEST_VM_OPS_ERROR
>>>> #define FORCE_OP_ERROR BIT(31)
>>>> @@ -212,6 +233,26 @@ struct xe_vm {
>>>> struct xe_pagemap *pagemaps[XE_MAX_TILES_PER_DEVICE];
>>>> /** @svm.peer: Used for pagemap connectivity computations. */
>>>> struct drm_pagemap_peer peer;
>>>> +
>>>> + /**
>>>> + * @svm.madvise_notifiers: Active CPU madvise notifiers
>>>> + */
>>>> + struct {
>>>> + /** @svm.madvise_notifiers.list: List of active notifiers */
>>>> + struct list_head list;
>>>> + /** @svm.madvise_notifiers.lock: Protects notifiers list */
>>>> + struct mutex lock;
>>>> + } madvise_notifiers;
>>>> +
>>>> + /** @svm.madvise_work.wq: Workqueue for async munmap processing */
>>>> + struct {
>>>> + /** @svm.madvise_work.wq: Workqueue */
>>>> + struct workqueue_struct *wq;
>>>> + /** @svm.madvise_work.pool: Mempool for work items */
>>>> + mempool_t *pool;
>>>> + /** @svm.madvise_work.closing: Teardown flag */
>>>> + atomic_t closing;
>>>> + } madvise_work;
>>>> } svm;
>>>>
>>>> struct xe_device *xe;
>>>> --
>>>> 2.43.0
^ permalink raw reply [flat|nested] 19+ messages in thread
* [RFC 5/7] drm/xe/vm: Deactivate madvise notifier on GPU touch
2026-02-19 9:13 [RFC 0/7] drm/xe/svm: Add MMU notifier-based madvise autoreset on munmap Arvind Yadav
` (3 preceding siblings ...)
2026-02-19 9:13 ` [RFC 4/7] drm/xe/vm: Add madvise autoreset interval notifier worker infrastructure Arvind Yadav
@ 2026-02-19 9:13 ` Arvind Yadav
2026-02-19 9:13 ` [RFC 6/7] drm/xe/vm: Wire MADVISE_AUTORESET notifiers into VM lifecycle Arvind Yadav
` (5 subsequent siblings)
10 siblings, 0 replies; 19+ messages in thread
From: Arvind Yadav @ 2026-02-19 9:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom
The MADVISE_AUTORESET notifier is only needed while the VMA is CPU-only.
After the first GPU touch, the existing SVM notifier handles munmap.
Add an 'active' flag to xe_madvise_notifier, cleared on first GPU touch.
The callback checks this flag and returns early when inactive.
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_svm.c | 5 +++-
drivers/gpu/drm/xe/xe_vm.h | 6 ++--
drivers/gpu/drm/xe/xe_vm_madvise.c | 46 ++++++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_vm_madvise.h | 2 ++
drivers/gpu/drm/xe/xe_vm_types.h | 2 ++
5 files changed, 58 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index b9dbbb245779..3f09f5f6481f 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -21,6 +21,7 @@
#include "xe_tile.h"
#include "xe_ttm_vram_mgr.h"
#include "xe_vm.h"
+#include "xe_vm_madvise.h"
#include "xe_vm_types.h"
#include "xe_vram_types.h"
@@ -1367,8 +1368,10 @@ int xe_svm_handle_pagefault(struct xe_vm *vm, struct xe_vma *vma,
lockdep_assert_held_write(&vm->lock);
/* Transition CPU-only -> GPU-touched before installing PTEs. */
- if (xe_vma_has_cpu_autoreset_active(vma))
+ if (xe_vma_has_cpu_autoreset_active(vma)) {
xe_vma_gpu_touch(vma);
+ xe_vm_madvise_gpu_touch(vm, vma);
+ }
retry:
need_vram = xe_vma_need_vram_for_atomic(vm->xe, vma, atomic);
diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
index 3dc549550c91..f353ab928e4c 100644
--- a/drivers/gpu/drm/xe/xe_vm.h
+++ b/drivers/gpu/drm/xe/xe_vm.h
@@ -426,12 +426,14 @@ void xe_vma_mem_attr_copy(struct xe_vma_mem_attr *to, struct xe_vma_mem_attr *fr
/**
* xe_vma_gpu_touch() - Mark VMA as GPU-touched
- * @vma: VMA to mark
+ * @vma: VMA to transition
*
- * Clear XE_VMA_CPU_AUTORESET_ACTIVE. Must be done before first GPU PTE install.
+ * Clears CPU_AUTORESET_ACTIVE flag. Call xe_vm_madvise_gpu_touch() separately
+ * to deactivate the madvise notifier.
*/
static inline void xe_vma_gpu_touch(struct xe_vma *vma)
{
vma->gpuva.flags &= ~XE_VMA_CPU_AUTORESET_ACTIVE;
}
+
#endif
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index 4c0ffb100bcc..98663707d039 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -657,6 +657,9 @@ static bool xe_madvise_notifier_callback(struct mmu_interval_notifier *mni,
if (range->event != MMU_NOTIFY_UNMAP)
return true;
+ if (!atomic_read(&notifier->active))
+ return true;
+
/*
* Best-effort: skip in non-blockable contexts to avoid building up work.
* Correctness does not rely on this notifier - CPU_AUTORESET_ACTIVE flag
@@ -857,6 +860,7 @@ int xe_vm_madvise_register_notifier_range(struct xe_vm *vm, u64 start, u64 end)
notifier->vm = xe_vm_get(vm);
notifier->vma_start = start;
notifier->vma_end = end;
+ atomic_set(&notifier->active, 1);
INIT_LIST_HEAD(&notifier->list);
err = mmu_interval_notifier_insert(&notifier->mmu_notifier,
@@ -894,3 +898,45 @@ int xe_vm_madvise_register_notifier_range(struct xe_vm *vm, u64 start, u64 end)
return 0;
}
+
+/**
+ * xe_vm_deactivate_madvise_notifier_for_range - Deactivate notifier for a range
+ * @vm: VM
+ * @start: Start address (page-aligned)
+ * @end: End address (page-aligned)
+ *
+ * Called when GPU touches a VMA - disables munmap processing for this range.
+ * Notifier remains registered but callback becomes a no-op until VM teardown.
+ */
+void xe_vm_deactivate_madvise_notifier_for_range(struct xe_vm *vm, u64 start, u64 end)
+{
+ struct xe_madvise_notifier *notifier;
+
+ /* Skip if madvise infrastructure not initialized */
+ if (!READ_ONCE(vm->svm.madvise_work.wq))
+ return;
+
+ mutex_lock(&vm->svm.madvise_notifiers.lock);
+ /* Deactivate overlapping notifiers (VMA splits may create multiple) */
+ list_for_each_entry(notifier, &vm->svm.madvise_notifiers.list, list) {
+ if (notifier->vma_start < end && notifier->vma_end > start)
+ atomic_set(&notifier->active, 0);
+ }
+ mutex_unlock(&vm->svm.madvise_notifiers.lock);
+}
+
+/**
+ * xe_vm_madvise_gpu_touch() - Deactivate madvise notifier on GPU touch
+ * @vm: VM
+ * @vma: VMA that was GPU-touched
+ *
+ * Deactivates the madvise notifier for this VMA's range after GPU touch.
+ * Call after xe_vma_gpu_touch() clears the CPU_AUTORESET_ACTIVE flag.
+ */
+void xe_vm_madvise_gpu_touch(struct xe_vm *vm, struct xe_vma *vma)
+{
+ if (vma->gpuva.flags & XE_VMA_MADV_AUTORESET)
+ xe_vm_deactivate_madvise_notifier_for_range(vm,
+ xe_vma_start(vma),
+ xe_vma_end(vma));
+}
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.h b/drivers/gpu/drm/xe/xe_vm_madvise.h
index ba9cd7912113..91417062a33e 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.h
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.h
@@ -19,5 +19,7 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data,
int xe_vm_madvise_init(struct xe_vm *vm);
void xe_vm_madvise_fini(struct xe_vm *vm);
int xe_vm_madvise_register_notifier_range(struct xe_vm *vm, u64 start, u64 end);
+void xe_vm_deactivate_madvise_notifier_for_range(struct xe_vm *vm, u64 start, u64 end);
+void xe_vm_madvise_gpu_touch(struct xe_vm *vm, struct xe_vma *vma);
#endif
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index eb978995000c..9cdae3492472 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -48,6 +48,8 @@ struct xe_madvise_notifier {
u64 vma_end;
/** @list: Link in vm->svm.madvise_notifiers.list */
struct list_head list;
+ /** @active: Cleared when GPU touches VMA to avoid callback overhead */
+ atomic_t active;
};
#if IS_ENABLED(CONFIG_DRM_XE_DEBUG)
--
2.43.0
^ permalink raw reply related [flat|nested] 19+ messages in thread

* [RFC 6/7] drm/xe/vm: Wire MADVISE_AUTORESET notifiers into VM lifecycle
2026-02-19 9:13 [RFC 0/7] drm/xe/svm: Add MMU notifier-based madvise autoreset on munmap Arvind Yadav
` (4 preceding siblings ...)
2026-02-19 9:13 ` [RFC 5/7] drm/xe/vm: Deactivate madvise notifier on GPU touch Arvind Yadav
@ 2026-02-19 9:13 ` Arvind Yadav
2026-02-19 9:13 ` [RFC 7/7] drm/xe/svm: Correct memory attribute reset for partial unmap Arvind Yadav
` (4 subsequent siblings)
10 siblings, 0 replies; 19+ messages in thread
From: Arvind Yadav @ 2026-02-19 9:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom
Initialise the MADVISE_AUTORESET interval notifier infrastructure for
fault-mode VMs and tear it down during VM close.
The notifier callback cannot take vm->lock, so the interval notifier work
is processed from a workqueue. VM close drops vm->lock around teardown
since the worker takes vm->lock.
For the madvise ioctl, collect the cpu_addr_mirror VMA ranges under
vm->lock and register the interval notifiers after dropping vm->lock to
avoid lock ordering issues with mmap_lock.
Also skip SVM PTE zapping for cpu_addr_mirror VMAs that are still marked
CPU_AUTORESET_ACTIVE since they do not have GPU mappings yet.
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_svm.c | 9 +++
drivers/gpu/drm/xe/xe_vm.c | 22 ++++++
drivers/gpu/drm/xe/xe_vm_madvise.c | 113 ++++++++++++++++++++++++++++-
3 files changed, 140 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 3f09f5f6481f..8335fdc976b5 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -879,9 +879,18 @@ int xe_svm_init(struct xe_vm *vm)
xe_modparam.svm_notifier_size * SZ_1M,
&gpusvm_ops, fault_chunk_sizes,
ARRAY_SIZE(fault_chunk_sizes));
+ if (err) {
+ xe_svm_put_pagemaps(vm);
+ drm_pagemap_release_owner(&vm->svm.peer);
+ return err;
+ }
+
drm_gpusvm_driver_set_lock(&vm->svm.gpusvm, &vm->lock);
+ /* Initialize madvise notifier infrastructure after gpusvm */
+ err = xe_vm_madvise_init(vm);
if (err) {
+ drm_gpusvm_fini(&vm->svm.gpusvm);
xe_svm_put_pagemaps(vm);
drm_pagemap_release_owner(&vm->svm.peer);
return err;
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 152ee355e5c3..00799e56d089 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -39,6 +39,7 @@
#include "xe_tile.h"
#include "xe_tlb_inval.h"
#include "xe_trace_bo.h"
+#include "xe_vm_madvise.h"
#include "xe_wa.h"
static struct drm_gem_object *xe_vm_obj(struct xe_vm *vm)
@@ -1835,6 +1836,27 @@ void xe_vm_close_and_put(struct xe_vm *vm)
xe_vma_destroy_unlocked(vma);
}
+ /*
+ * xe_vm_madvise_fini() drains the madvise workqueue, and workers take vm->lock.
+ * Drop vm->lock around madvise teardown to avoid deadlock.
+ *
+ * Safe since the VM is already closed, and madvise teardown prevents new work
+ * from being queued.
+ */
+ xe_assert(vm->xe, xe_vm_is_closed_or_banned(vm));
+ up_write(&vm->lock);
+
+ /* Teardown madvise MMU notifiers + drain workers */
+ if (vm->flags & XE_VM_FLAG_FAULT_MODE)
+ xe_vm_madvise_fini(vm);
+
+ /*
+ * Retake vm->lock for SVM cleanup. drm_gpusvm_fini() needs to remove
+ * any remaining GPU SVM ranges, and drm_gpusvm_range_remove() requires
+ * the driver lock (vm->lock) to be held.
+ */
+ down_write(&vm->lock);
+
xe_svm_fini(vm);
up_write(&vm->lock);
diff --git a/drivers/gpu/drm/xe/xe_vm_madvise.c b/drivers/gpu/drm/xe/xe_vm_madvise.c
index 98663707d039..32aecad31a9c 100644
--- a/drivers/gpu/drm/xe/xe_vm_madvise.c
+++ b/drivers/gpu/drm/xe/xe_vm_madvise.c
@@ -23,6 +23,12 @@ struct xe_vmas_in_madvise_range {
int num_vmas;
bool has_bo_vmas;
bool has_svm_userptr_vmas;
+ bool has_cpu_addr_mirror_vmas;
+};
+
+struct xe_madvise_notifier_range {
+ u64 start;
+ u64 end;
};
/**
@@ -61,7 +67,10 @@ static int get_vmas(struct xe_vm *vm, struct xe_vmas_in_madvise_range *madvise_r
if (xe_vma_bo(vma))
madvise_range->has_bo_vmas = true;
- else if (xe_vma_is_cpu_addr_mirror(vma) || xe_vma_is_userptr(vma))
+ else if (xe_vma_is_cpu_addr_mirror(vma)) {
+ madvise_range->has_svm_userptr_vmas = true;
+ madvise_range->has_cpu_addr_mirror_vmas = true;
+ } else if (xe_vma_is_userptr(vma))
madvise_range->has_svm_userptr_vmas = true;
if (madvise_range->num_vmas == max_vmas) {
@@ -213,9 +222,19 @@ static u8 xe_zap_ptes_in_madvise_range(struct xe_vm *vm, u64 start, u64 end)
continue;
if (xe_vma_is_cpu_addr_mirror(vma)) {
- tile_mask |= xe_svm_ranges_zap_ptes_in_range(vm,
- xe_vma_start(vma),
- xe_vma_end(vma));
+ /*
+ * CPU-only VMAs (CPU_AUTORESET_ACTIVE set) have no GPU mappings yet.
+ * Flag MUST be cleared via xe_vma_gpu_touch() before installing GPU PTEs.
+ * Today, CPU_ADDR_MIRROR GPU PTEs are installed via the SVM fault path.
+ * If additional paths are added (prefetch, migration, explicit bind),
+ * they must clear CPU_AUTORESET_ACTIVE before PTE install.
+ *
+ * Once flag is cleared (GPU faulted), SVM handles munmap via its notifier.
+ */
+ if (!xe_vma_has_cpu_autoreset_active(vma))
+ tile_mask |= xe_svm_ranges_zap_ptes_in_range(vm,
+ xe_vma_start(vma),
+ xe_vma_end(vma));
} else {
for_each_tile(tile, vm->xe, id) {
if (xe_pt_zap_ptes(tile, vma)) {
@@ -416,6 +435,8 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
struct xe_madvise_details details;
struct xe_vm *vm;
struct drm_exec exec;
+ struct xe_madvise_notifier_range *notifier_ranges = NULL;
+ int num_notifier_ranges = 0;
int err, attr_type;
vm = xe_vm_lookup(xef, args->vm_id);
@@ -490,6 +511,89 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
if (madvise_range.has_svm_userptr_vmas)
xe_svm_notifier_unlock(vm);
+ if (err)
+ goto err_fini;
+
+ /*
+ * Collect ranges (not VMA pointers) that need madvise notifiers.
+ * Must be done while still holding vm->lock to safely inspect VMAs.
+ * After releasing vm->lock, we'll register notifiers using only
+ * the collected {start,end} ranges, avoiding UAF issues.
+ */
+ if (madvise_range.has_cpu_addr_mirror_vmas) {
+ /* Allocate array for ranges - use kvcalloc for large counts */
+ notifier_ranges = kvcalloc(madvise_range.num_vmas,
+ sizeof(*notifier_ranges),
+ GFP_KERNEL);
+ if (!notifier_ranges) {
+ err = -ENOMEM;
+ goto err_fini;
+ }
+
+ /* Collect ranges for VMAs needing notifiers */
+ for (int i = 0; i < madvise_range.num_vmas; i++) {
+ struct xe_vma *vma = madvise_range.vmas[i];
+
+ if (!xe_vma_is_cpu_addr_mirror(vma))
+ continue;
+
+ /*
+ * Only collect ranges for VMAs with MADV_AUTORESET
+ * that are still CPU-only.
+ */
+ if (!(vma->gpuva.flags & XE_VMA_MADV_AUTORESET))
+ continue;
+
+ if (!(vma->gpuva.flags & XE_VMA_CPU_AUTORESET_ACTIVE))
+ continue;
+
+ /* Skip duplicates (same range already collected) */
+ if (num_notifier_ranges > 0 &&
+ notifier_ranges[num_notifier_ranges - 1].start == xe_vma_start(vma) &&
+ notifier_ranges[num_notifier_ranges - 1].end == xe_vma_end(vma))
+ continue;
+
+ /* Save range - don't hold VMA pointer */
+ notifier_ranges[num_notifier_ranges].start = xe_vma_start(vma);
+ notifier_ranges[num_notifier_ranges].end = xe_vma_end(vma);
+ num_notifier_ranges++;
+ }
+ }
+
+ /* Normal cleanup path - all resources released properly */
+ if (madvise_range.has_bo_vmas)
+ drm_exec_fini(&exec);
+ kfree(madvise_range.vmas);
+ xe_madvise_details_fini(&details);
+ up_write(&vm->lock);
+
+ /*
+ * Register madvise notifiers using collected ranges.
+ * Must be done after dropping vm->lock to avoid lock ordering issues.
+ *
+ * Race window: munmap between lock drop and registration is acceptable.
+ * Auto-reset is best-effort; core correctness comes from CPU_AUTORESET_ACTIVE
+ * preventing GPU PTE zaps on CPU-only VMAs.
+ */
+ for (int i = 0; i < num_notifier_ranges; i++) {
+ int reg_err;
+
+ reg_err = xe_vm_madvise_register_notifier_range(vm,
+ notifier_ranges[i].start,
+ notifier_ranges[i].end);
+ if (reg_err) {
+ /* Expected failures: -ENOMEM, -ENOENT (munmap race), -EINVAL */
+ if (reg_err != -ENOMEM && reg_err != -ENOENT && reg_err != -EINVAL)
+ drm_warn(&vm->xe->drm,
+ "madvise notifier reg failed [%#llx-%#llx]: %d\n",
+ notifier_ranges[i].start, notifier_ranges[i].end, reg_err);
+ }
+ }
+
+ kvfree(notifier_ranges);
+ xe_vm_put(vm);
+ return 0;
+
err_fini:
if (madvise_range.has_bo_vmas)
drm_exec_fini(&exec);
@@ -499,6 +603,7 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
xe_madvise_details_fini(&details);
unlock_vm:
up_write(&vm->lock);
+ kvfree(notifier_ranges);
put_vm:
xe_vm_put(vm);
return err;
--
2.43.0
^ permalink raw reply related [flat|nested] 19+ messages in thread* [RFC 7/7] drm/xe/svm: Correct memory attribute reset for partial unmap
2026-02-19 9:13 [RFC 0/7] drm/xe/svm: Add MMU notifier-based madvise autoreset on munmap Arvind Yadav
` (5 preceding siblings ...)
2026-02-19 9:13 ` [RFC 6/7] drm/xe/vm: Wire MADVISE_AUTORESET notifiers into VM lifecycle Arvind Yadav
@ 2026-02-19 9:13 ` Arvind Yadav
2026-02-19 9:40 ` ✗ CI.checkpatch: warning for drm/xe/svm: Add MMU notifier-based madvise autoreset on munmap Patchwork
` (3 subsequent siblings)
10 siblings, 0 replies; 19+ messages in thread
From: Arvind Yadav @ 2026-02-19 9:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.brost, himal.prasad.ghimiray, thomas.hellstrom
From: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
When performing a partial unmap of an SVM range, the memory attributes
were being reset for the entire range instead of just the portion
being unmapped. This could lead to unintended side effects and incorrect behaviour.
Fix this by restricting the attribute reset to only the affected subrange
that is being unmapped.
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
---
drivers/gpu/drm/xe/xe_svm.c | 56 +++++++++++++++++++++++++++----------
drivers/gpu/drm/xe/xe_svm.h | 10 +++++++
2 files changed, 52 insertions(+), 14 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
index 8335fdc976b5..3c833e6d6b2c 100644
--- a/drivers/gpu/drm/xe/xe_svm.c
+++ b/drivers/gpu/drm/xe/xe_svm.c
@@ -57,6 +57,8 @@ void *xe_svm_private_page_owner(struct xe_vm *vm, bool force_smem)
return force_smem ? NULL : vm->svm.peer.owner;
}
+#define XE_SVM_ATTR_RETRY_MAX 3
+
static bool xe_svm_range_in_vram(struct xe_svm_range *range)
{
/*
@@ -126,15 +128,23 @@ static void xe_svm_range_free(struct drm_gpusvm_range *range)
kfree(range);
}
+static void xe_svm_range_set_unmapped(struct xe_svm_range *range,
+ const struct mmu_notifier_range *mmu_range)
+{
+ drm_gpusvm_range_set_unmapped(&range->base, mmu_range);
+ if (range->base.pages.flags.partial_unmap) {
+ range->partial_unmap.start = max(xe_svm_range_start(range), mmu_range->start);
+ range->partial_unmap.end = min(xe_svm_range_end(range), mmu_range->end);
+ }
+}
+
static void
xe_svm_garbage_collector_add_range(struct xe_vm *vm, struct xe_svm_range *range,
const struct mmu_notifier_range *mmu_range)
{
struct xe_device *xe = vm->xe;
- range_debug(range, "GARBAGE COLLECTOR ADD");
-
- drm_gpusvm_range_set_unmapped(&range->base, mmu_range);
+ xe_svm_range_set_unmapped(range, mmu_range);
spin_lock(&vm->svm.garbage_collector.lock);
if (list_empty(&range->garbage_collector_link))
@@ -375,9 +385,10 @@ static int xe_svm_range_set_default_attr(struct xe_vm *vm, u64 start, u64 end)
static int xe_svm_garbage_collector(struct xe_vm *vm)
{
struct xe_svm_range *range;
- u64 range_start;
- u64 range_end;
+ u64 unmap_start;
+ u64 unmap_end;
int err, ret = 0;
+ int retry_count;
lockdep_assert_held_write(&vm->lock);
@@ -392,8 +403,13 @@ static int xe_svm_garbage_collector(struct xe_vm *vm)
if (!range)
break;
- range_start = xe_svm_range_start(range);
- range_end = xe_svm_range_end(range);
+ if (range->base.pages.flags.partial_unmap) {
+ unmap_start = range->partial_unmap.start;
+ unmap_end = range->partial_unmap.end;
+ } else {
+ unmap_start = xe_svm_range_start(range);
+ unmap_end = xe_svm_range_end(range);
+ }
list_del(&range->garbage_collector_link);
spin_unlock(&vm->svm.garbage_collector.lock);
@@ -407,13 +423,25 @@ static int xe_svm_garbage_collector(struct xe_vm *vm)
return err;
}
- err = xe_svm_range_set_default_attr(vm, range_start, range_end);
- if (err) {
- if (err == -EAGAIN)
- ret = -EAGAIN;
- else
- return err;
- }
+ /*
+ * Retry set_default_attr on -EAGAIN (VMA was recreated).
+ * Limit retries to prevent infinite loop.
+ */
+ retry_count = 0;
+
+ do {
+ err = xe_svm_range_set_default_attr(vm, unmap_start, unmap_end);
+ if (err == -EAGAIN && ++retry_count > XE_SVM_ATTR_RETRY_MAX) {
+ drm_err(&vm->xe->drm,
+ "SET_ATTR retry limit exceeded for [0x%llx-0x%llx]\n",
+ unmap_start, unmap_end);
+ xe_vm_kill(vm, true);
+ return -EIO;
+ }
+ } while (err == -EAGAIN);
+
+ if (err)
+ return err;
}
spin_unlock(&vm->svm.garbage_collector.lock);
diff --git a/drivers/gpu/drm/xe/xe_svm.h b/drivers/gpu/drm/xe/xe_svm.h
index b7b8eeacf196..4651e044cf53 100644
--- a/drivers/gpu/drm/xe/xe_svm.h
+++ b/drivers/gpu/drm/xe/xe_svm.h
@@ -46,6 +46,16 @@ struct xe_svm_range {
* range. Protected by GPU SVM notifier lock.
*/
u8 tile_invalidated;
+ /**
+ * @partial_unmap: Structure to hold partial unmap range info.
+ * Valid only if partial unmap is in effect.
+ */
+ struct {
+ /** @start: Start address of the partial unmap range */
+ u64 start;
+ /** @end: End address of the partial unmap range */
+ u64 end;
+ } partial_unmap;
};
/**
--
2.43.0
^ permalink raw reply related [flat|nested] 19+ messages in thread

* ✗ CI.checkpatch: warning for drm/xe/svm: Add MMU notifier-based madvise autoreset on munmap
2026-02-19 9:13 [RFC 0/7] drm/xe/svm: Add MMU notifier-based madvise autoreset on munmap Arvind Yadav
` (6 preceding siblings ...)
2026-02-19 9:13 ` [RFC 7/7] drm/xe/svm: Correct memory attribute reset for partial unmap Arvind Yadav
@ 2026-02-19 9:40 ` Patchwork
2026-02-19 9:42 ` ✓ CI.KUnit: success " Patchwork
` (2 subsequent siblings)
10 siblings, 0 replies; 19+ messages in thread
From: Patchwork @ 2026-02-19 9:40 UTC (permalink / raw)
To: Arvind Yadav; +Cc: intel-xe
== Series Details ==
Series: drm/xe/svm: Add MMU notifier-based madvise autoreset on munmap
URL : https://patchwork.freedesktop.org/series/161815/
State : warning
== Summary ==
+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
1f57ba1afceae32108bd24770069f764d940a0e4
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit 393b6abb9054a363e87a0069a949522b9609db75
Author: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Date: Thu Feb 19 14:43:12 2026 +0530
drm/xe/svm: Correct memory attribute reset for partial unmap
When performing a partial unmap of an SVM range, the memory attributes
were being reset for the entire range instead of just the portion
being unmapped. This could lead to unintended side effects and behaviour.
Fix this by restricting the attribute reset to only the affected subrange
that is being unmapped.
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav <arvind.yadav@intel.com>
+ /mt/dim checkpatch c81e41f7aca96f583296a2a875f0179484b7a81f drm-intel
9fb1b93d7755 drm/xe/vm: Add CPU_AUTORESET_ACTIVE VMA flag
86281920bd68 drm/xe/vm: Preserve CPU_AUTORESET_ACTIVE across GPUVA operations
3940f3524f81 drm/xe/svm: Clear CPU_AUTORESET_ACTIVE on first GPU fault
556333ace830 drm/xe/vm: Add madvise autoreset interval notifier worker infrastructure
-:294: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#294: FILE: drivers/gpu/drm/xe/xe_vm_madvise.c:753:
+ mempool_create_kmalloc_pool(64,
+ sizeof(struct xe_madvise_work_item));
-:348: WARNING:NEEDLESS_IF: mempool_destroy(NULL) is safe and this check is probably not required
#348: FILE: drivers/gpu/drm/xe/xe_vm_madvise.c:807:
+ if (pool)
+ mempool_destroy(pool);
total: 0 errors, 1 warnings, 1 checks, 483 lines checked
e5596e3e2f2b drm/xe/vm: Deactivate madvise notifier on GPU touch
ea7b3c47d3a8 drm/xe/vm: Wire MADVISE_AUTORESET notifiers into VM lifecycle
393b6abb9054 drm/xe/svm: Correct memory attribute reset for partial unmap
^ permalink raw reply [flat|nested] 19+ messages in thread* ✓ CI.KUnit: success for drm/xe/svm: Add MMU notifier-based madvise autoreset on munmap
2026-02-19 9:13 [RFC 0/7] drm/xe/svm: Add MMU notifier-based madvise autoreset on munmap Arvind Yadav
` (7 preceding siblings ...)
2026-02-19 9:40 ` ✗ CI.checkpatch: warning for drm/xe/svm: Add MMU notifier-based madvise autoreset on munmap Patchwork
@ 2026-02-19 9:42 ` Patchwork
2026-02-19 10:40 ` ✓ Xe.CI.BAT: " Patchwork
2026-02-19 13:04 ` ✗ Xe.CI.FULL: failure " Patchwork
10 siblings, 0 replies; 19+ messages in thread
From: Patchwork @ 2026-02-19 9:42 UTC (permalink / raw)
To: Arvind Yadav; +Cc: intel-xe
== Series Details ==
Series: drm/xe/svm: Add MMU notifier-based madvise autoreset on munmap
URL : https://patchwork.freedesktop.org/series/161815/
State : success
== Summary ==
+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[09:40:55] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[09:40:59] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[09:41:30] Starting KUnit Kernel (1/1)...
[09:41:30] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[09:41:31] ================== guc_buf (11 subtests) ===================
[09:41:31] [PASSED] test_smallest
[09:41:31] [PASSED] test_largest
[09:41:31] [PASSED] test_granular
[09:41:31] [PASSED] test_unique
[09:41:31] [PASSED] test_overlap
[09:41:31] [PASSED] test_reusable
[09:41:31] [PASSED] test_too_big
[09:41:31] [PASSED] test_flush
[09:41:31] [PASSED] test_lookup
[09:41:31] [PASSED] test_data
[09:41:31] [PASSED] test_class
[09:41:31] ===================== [PASSED] guc_buf =====================
[09:41:31] =================== guc_dbm (7 subtests) ===================
[09:41:31] [PASSED] test_empty
[09:41:31] [PASSED] test_default
[09:41:31] ======================== test_size ========================
[09:41:31] [PASSED] 4
[09:41:31] [PASSED] 8
[09:41:31] [PASSED] 32
[09:41:31] [PASSED] 256
[09:41:31] ==================== [PASSED] test_size ====================
[09:41:31] ======================= test_reuse ========================
[09:41:31] [PASSED] 4
[09:41:31] [PASSED] 8
[09:41:31] [PASSED] 32
[09:41:31] [PASSED] 256
[09:41:31] =================== [PASSED] test_reuse ====================
[09:41:31] =================== test_range_overlap ====================
[09:41:31] [PASSED] 4
[09:41:31] [PASSED] 8
[09:41:31] [PASSED] 32
[09:41:31] [PASSED] 256
[09:41:31] =============== [PASSED] test_range_overlap ================
[09:41:31] =================== test_range_compact ====================
[09:41:31] [PASSED] 4
[09:41:31] [PASSED] 8
[09:41:31] [PASSED] 32
[09:41:31] [PASSED] 256
[09:41:31] =============== [PASSED] test_range_compact ================
[09:41:31] ==================== test_range_spare =====================
[09:41:31] [PASSED] 4
[09:41:31] [PASSED] 8
[09:41:31] [PASSED] 32
[09:41:31] [PASSED] 256
[09:41:31] ================ [PASSED] test_range_spare =================
[09:41:31] ===================== [PASSED] guc_dbm =====================
[09:41:31] =================== guc_idm (6 subtests) ===================
[09:41:31] [PASSED] bad_init
[09:41:31] [PASSED] no_init
[09:41:31] [PASSED] init_fini
[09:41:31] [PASSED] check_used
[09:41:31] [PASSED] check_quota
[09:41:31] [PASSED] check_all
[09:41:31] ===================== [PASSED] guc_idm =====================
[09:41:31] ================== no_relay (3 subtests) ===================
[09:41:31] [PASSED] xe_drops_guc2pf_if_not_ready
[09:41:31] [PASSED] xe_drops_guc2vf_if_not_ready
[09:41:31] [PASSED] xe_rejects_send_if_not_ready
[09:41:31] ==================== [PASSED] no_relay =====================
[09:41:31] ================== pf_relay (14 subtests) ==================
[09:41:31] [PASSED] pf_rejects_guc2pf_too_short
[09:41:31] [PASSED] pf_rejects_guc2pf_too_long
[09:41:31] [PASSED] pf_rejects_guc2pf_no_payload
[09:41:31] [PASSED] pf_fails_no_payload
[09:41:31] [PASSED] pf_fails_bad_origin
[09:41:31] [PASSED] pf_fails_bad_type
[09:41:31] [PASSED] pf_txn_reports_error
[09:41:31] [PASSED] pf_txn_sends_pf2guc
[09:41:31] [PASSED] pf_sends_pf2guc
[09:41:31] [SKIPPED] pf_loopback_nop
[09:41:31] [SKIPPED] pf_loopback_echo
[09:41:31] [SKIPPED] pf_loopback_fail
[09:41:31] [SKIPPED] pf_loopback_busy
[09:41:31] [SKIPPED] pf_loopback_retry
[09:41:31] ==================== [PASSED] pf_relay =====================
[09:41:31] ================== vf_relay (3 subtests) ===================
[09:41:31] [PASSED] vf_rejects_guc2vf_too_short
[09:41:31] [PASSED] vf_rejects_guc2vf_too_long
[09:41:31] [PASSED] vf_rejects_guc2vf_no_payload
[09:41:31] ==================== [PASSED] vf_relay =====================
[09:41:31] ================ pf_gt_config (6 subtests) =================
[09:41:31] [PASSED] fair_contexts_1vf
[09:41:31] [PASSED] fair_doorbells_1vf
[09:41:31] [PASSED] fair_ggtt_1vf
[09:41:31] ====================== fair_contexts ======================
[09:41:31] [PASSED] 1 VF
[09:41:31] [PASSED] 2 VFs
[09:41:31] [PASSED] 3 VFs
[09:41:31] [PASSED] 4 VFs
[09:41:31] [PASSED] 5 VFs
[09:41:31] [PASSED] 6 VFs
[09:41:31] [PASSED] 7 VFs
[09:41:31] [PASSED] 8 VFs
[09:41:31] [PASSED] 9 VFs
[09:41:31] [PASSED] 10 VFs
[09:41:31] [PASSED] 11 VFs
[09:41:31] [PASSED] 12 VFs
[09:41:31] [PASSED] 13 VFs
[09:41:31] [PASSED] 14 VFs
[09:41:31] [PASSED] 15 VFs
[09:41:31] [PASSED] 16 VFs
[09:41:31] [PASSED] 17 VFs
[09:41:31] [PASSED] 18 VFs
[09:41:31] [PASSED] 19 VFs
[09:41:31] [PASSED] 20 VFs
[09:41:31] [PASSED] 21 VFs
[09:41:31] [PASSED] 22 VFs
[09:41:31] [PASSED] 23 VFs
[09:41:31] [PASSED] 24 VFs
[09:41:31] [PASSED] 25 VFs
[09:41:31] [PASSED] 26 VFs
[09:41:31] [PASSED] 27 VFs
[09:41:31] [PASSED] 28 VFs
[09:41:31] [PASSED] 29 VFs
[09:41:31] [PASSED] 30 VFs
[09:41:31] [PASSED] 31 VFs
[09:41:31] [PASSED] 32 VFs
[09:41:31] [PASSED] 33 VFs
[09:41:31] [PASSED] 34 VFs
[09:41:31] [PASSED] 35 VFs
[09:41:31] [PASSED] 36 VFs
[09:41:31] [PASSED] 37 VFs
[09:41:31] [PASSED] 38 VFs
[09:41:31] [PASSED] 39 VFs
[09:41:31] [PASSED] 40 VFs
[09:41:31] [PASSED] 41 VFs
[09:41:31] [PASSED] 42 VFs
[09:41:31] [PASSED] 43 VFs
[09:41:31] [PASSED] 44 VFs
[09:41:31] [PASSED] 45 VFs
[09:41:31] [PASSED] 46 VFs
[09:41:31] [PASSED] 47 VFs
[09:41:31] [PASSED] 48 VFs
[09:41:31] [PASSED] 49 VFs
[09:41:31] [PASSED] 50 VFs
[09:41:31] [PASSED] 51 VFs
[09:41:31] [PASSED] 52 VFs
[09:41:31] [PASSED] 53 VFs
[09:41:31] [PASSED] 54 VFs
[09:41:31] [PASSED] 55 VFs
[09:41:31] [PASSED] 56 VFs
[09:41:31] [PASSED] 57 VFs
[09:41:31] [PASSED] 58 VFs
[09:41:31] [PASSED] 59 VFs
[09:41:31] [PASSED] 60 VFs
[09:41:31] [PASSED] 61 VFs
[09:41:31] [PASSED] 62 VFs
[09:41:31] [PASSED] 63 VFs
[09:41:31] ================== [PASSED] fair_contexts ==================
[09:41:31] ===================== fair_doorbells ======================
[09:41:31] [PASSED] 1 VF
[09:41:31] [PASSED] 2 VFs
[09:41:31] [PASSED] 3 VFs
[09:41:31] [PASSED] 4 VFs
[09:41:31] [PASSED] 5 VFs
[09:41:31] [PASSED] 6 VFs
[09:41:31] [PASSED] 7 VFs
[09:41:31] [PASSED] 8 VFs
[09:41:31] [PASSED] 9 VFs
[09:41:31] [PASSED] 10 VFs
[09:41:31] [PASSED] 11 VFs
[09:41:31] [PASSED] 12 VFs
[09:41:31] [PASSED] 13 VFs
[09:41:31] [PASSED] 14 VFs
[09:41:31] [PASSED] 15 VFs
[09:41:31] [PASSED] 16 VFs
[09:41:31] [PASSED] 17 VFs
[09:41:31] [PASSED] 18 VFs
[09:41:31] [PASSED] 19 VFs
[09:41:31] [PASSED] 20 VFs
[09:41:31] [PASSED] 21 VFs
[09:41:31] [PASSED] 22 VFs
[09:41:31] [PASSED] 23 VFs
[09:41:31] [PASSED] 24 VFs
[09:41:31] [PASSED] 25 VFs
[09:41:31] [PASSED] 26 VFs
[09:41:31] [PASSED] 27 VFs
[09:41:31] [PASSED] 28 VFs
[09:41:31] [PASSED] 29 VFs
[09:41:31] [PASSED] 30 VFs
[09:41:31] [PASSED] 31 VFs
[09:41:31] [PASSED] 32 VFs
[09:41:31] [PASSED] 33 VFs
[09:41:31] [PASSED] 34 VFs
[09:41:31] [PASSED] 35 VFs
[09:41:31] [PASSED] 36 VFs
[09:41:31] [PASSED] 37 VFs
[09:41:31] [PASSED] 38 VFs
[09:41:31] [PASSED] 39 VFs
[09:41:31] [PASSED] 40 VFs
[09:41:31] [PASSED] 41 VFs
[09:41:31] [PASSED] 42 VFs
[09:41:31] [PASSED] 43 VFs
[09:41:31] [PASSED] 44 VFs
[09:41:31] [PASSED] 45 VFs
[09:41:31] [PASSED] 46 VFs
[09:41:31] [PASSED] 47 VFs
[09:41:31] [PASSED] 48 VFs
[09:41:31] [PASSED] 49 VFs
[09:41:31] [PASSED] 50 VFs
[09:41:31] [PASSED] 51 VFs
[09:41:31] [PASSED] 52 VFs
[09:41:31] [PASSED] 53 VFs
[09:41:31] [PASSED] 54 VFs
[09:41:31] [PASSED] 55 VFs
[09:41:31] [PASSED] 56 VFs
[09:41:31] [PASSED] 57 VFs
[09:41:31] [PASSED] 58 VFs
[09:41:31] [PASSED] 59 VFs
[09:41:31] [PASSED] 60 VFs
[09:41:31] [PASSED] 61 VFs
[09:41:31] [PASSED] 62 VFs
[09:41:31] [PASSED] 63 VFs
[09:41:31] ================= [PASSED] fair_doorbells ==================
[09:41:31] ======================== fair_ggtt ========================
[09:41:31] [PASSED] 1 VF
[09:41:31] [PASSED] 2 VFs
[09:41:31] [PASSED] 3 VFs
[09:41:31] [PASSED] 4 VFs
[09:41:31] [PASSED] 5 VFs
[09:41:31] [PASSED] 6 VFs
[09:41:31] [PASSED] 7 VFs
[09:41:31] [PASSED] 8 VFs
[09:41:31] [PASSED] 9 VFs
[09:41:31] [PASSED] 10 VFs
[09:41:31] [PASSED] 11 VFs
[09:41:31] [PASSED] 12 VFs
[09:41:31] [PASSED] 13 VFs
[09:41:31] [PASSED] 14 VFs
[09:41:31] [PASSED] 15 VFs
[09:41:31] [PASSED] 16 VFs
[09:41:31] [PASSED] 17 VFs
[09:41:31] [PASSED] 18 VFs
[09:41:31] [PASSED] 19 VFs
[09:41:31] [PASSED] 20 VFs
[09:41:31] [PASSED] 21 VFs
[09:41:31] [PASSED] 22 VFs
[09:41:31] [PASSED] 23 VFs
[09:41:31] [PASSED] 24 VFs
[09:41:31] [PASSED] 25 VFs
[09:41:31] [PASSED] 26 VFs
[09:41:31] [PASSED] 27 VFs
[09:41:31] [PASSED] 28 VFs
[09:41:31] [PASSED] 29 VFs
[09:41:31] [PASSED] 30 VFs
[09:41:31] [PASSED] 31 VFs
[09:41:31] [PASSED] 32 VFs
[09:41:31] [PASSED] 33 VFs
[09:41:31] [PASSED] 34 VFs
[09:41:31] [PASSED] 35 VFs
[09:41:31] [PASSED] 36 VFs
[09:41:31] [PASSED] 37 VFs
[09:41:31] [PASSED] 38 VFs
[09:41:31] [PASSED] 39 VFs
[09:41:31] [PASSED] 40 VFs
[09:41:31] [PASSED] 41 VFs
[09:41:31] [PASSED] 42 VFs
[09:41:31] [PASSED] 43 VFs
[09:41:31] [PASSED] 44 VFs
[09:41:31] [PASSED] 45 VFs
[09:41:31] [PASSED] 46 VFs
[09:41:31] [PASSED] 47 VFs
[09:41:31] [PASSED] 48 VFs
[09:41:31] [PASSED] 49 VFs
[09:41:31] [PASSED] 50 VFs
[09:41:31] [PASSED] 51 VFs
[09:41:31] [PASSED] 52 VFs
[09:41:31] [PASSED] 53 VFs
[09:41:31] [PASSED] 54 VFs
[09:41:31] [PASSED] 55 VFs
[09:41:31] [PASSED] 56 VFs
[09:41:31] [PASSED] 57 VFs
[09:41:31] [PASSED] 58 VFs
[09:41:31] [PASSED] 59 VFs
[09:41:31] [PASSED] 60 VFs
[09:41:31] [PASSED] 61 VFs
[09:41:31] [PASSED] 62 VFs
[09:41:31] [PASSED] 63 VFs
[09:41:31] ==================== [PASSED] fair_ggtt ====================
[09:41:31] ================== [PASSED] pf_gt_config ===================
[09:41:31] ===================== lmtt (1 subtest) =====================
[09:41:31] ======================== test_ops =========================
[09:41:31] [PASSED] 2-level
[09:41:31] [PASSED] multi-level
[09:41:31] ==================== [PASSED] test_ops =====================
[09:41:31] ====================== [PASSED] lmtt =======================
[09:41:31] ================= pf_service (11 subtests) =================
[09:41:31] [PASSED] pf_negotiate_any
[09:41:31] [PASSED] pf_negotiate_base_match
[09:41:31] [PASSED] pf_negotiate_base_newer
[09:41:31] [PASSED] pf_negotiate_base_next
[09:41:31] [SKIPPED] pf_negotiate_base_older
[09:41:31] [PASSED] pf_negotiate_base_prev
[09:41:31] [PASSED] pf_negotiate_latest_match
[09:41:31] [PASSED] pf_negotiate_latest_newer
[09:41:31] [PASSED] pf_negotiate_latest_next
[09:41:31] [SKIPPED] pf_negotiate_latest_older
[09:41:31] [SKIPPED] pf_negotiate_latest_prev
[09:41:31] =================== [PASSED] pf_service ====================
[09:41:31] ================= xe_guc_g2g (2 subtests) ==================
[09:41:31] ============== xe_live_guc_g2g_kunit_default ==============
[09:41:31] ========= [SKIPPED] xe_live_guc_g2g_kunit_default ==========
[09:41:31] ============== xe_live_guc_g2g_kunit_allmem ===============
[09:41:31] ========== [SKIPPED] xe_live_guc_g2g_kunit_allmem ==========
[09:41:31] =================== [SKIPPED] xe_guc_g2g ===================
[09:41:31] =================== xe_mocs (2 subtests) ===================
[09:41:31] ================ xe_live_mocs_kernel_kunit ================
[09:41:31] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[09:41:31] ================ xe_live_mocs_reset_kunit =================
[09:41:31] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[09:41:31] ==================== [SKIPPED] xe_mocs =====================
[09:41:31] ================= xe_migrate (2 subtests) ==================
[09:41:31] ================= xe_migrate_sanity_kunit =================
[09:41:31] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[09:41:31] ================== xe_validate_ccs_kunit ==================
[09:41:31] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[09:41:31] =================== [SKIPPED] xe_migrate ===================
[09:41:31] ================== xe_dma_buf (1 subtest) ==================
[09:41:31] ==================== xe_dma_buf_kunit =====================
[09:41:31] ================ [SKIPPED] xe_dma_buf_kunit ================
[09:41:31] =================== [SKIPPED] xe_dma_buf ===================
[09:41:31] ================= xe_bo_shrink (1 subtest) =================
[09:41:31] =================== xe_bo_shrink_kunit ====================
[09:41:31] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[09:41:31] ================== [SKIPPED] xe_bo_shrink ==================
[09:41:31] ==================== xe_bo (2 subtests) ====================
[09:41:31] ================== xe_ccs_migrate_kunit ===================
[09:41:31] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[09:41:31] ==================== xe_bo_evict_kunit ====================
[09:41:31] =============== [SKIPPED] xe_bo_evict_kunit ================
[09:41:31] ===================== [SKIPPED] xe_bo ======================
[09:41:31] ==================== args (13 subtests) ====================
[09:41:31] [PASSED] count_args_test
[09:41:31] [PASSED] call_args_example
[09:41:31] [PASSED] call_args_test
[09:41:31] [PASSED] drop_first_arg_example
[09:41:31] [PASSED] drop_first_arg_test
[09:41:31] [PASSED] first_arg_example
[09:41:31] [PASSED] first_arg_test
[09:41:31] [PASSED] last_arg_example
[09:41:31] [PASSED] last_arg_test
[09:41:31] [PASSED] pick_arg_example
[09:41:31] [PASSED] if_args_example
[09:41:31] [PASSED] if_args_test
[09:41:31] [PASSED] sep_comma_example
[09:41:31] ====================== [PASSED] args =======================
[09:41:31] =================== xe_pci (3 subtests) ====================
[09:41:31] ==================== check_graphics_ip ====================
[09:41:31] [PASSED] 12.00 Xe_LP
[09:41:31] [PASSED] 12.10 Xe_LP+
[09:41:31] [PASSED] 12.55 Xe_HPG
[09:41:31] [PASSED] 12.60 Xe_HPC
[09:41:31] [PASSED] 12.70 Xe_LPG
[09:41:31] [PASSED] 12.71 Xe_LPG
[09:41:31] [PASSED] 12.74 Xe_LPG+
[09:41:31] [PASSED] 20.01 Xe2_HPG
[09:41:31] [PASSED] 20.02 Xe2_HPG
[09:41:31] [PASSED] 20.04 Xe2_LPG
[09:41:31] [PASSED] 30.00 Xe3_LPG
[09:41:31] [PASSED] 30.01 Xe3_LPG
[09:41:31] [PASSED] 30.03 Xe3_LPG
[09:41:31] [PASSED] 30.04 Xe3_LPG
[09:41:31] [PASSED] 30.05 Xe3_LPG
[09:41:31] [PASSED] 35.10 Xe3p_LPG
[09:41:31] [PASSED] 35.11 Xe3p_XPC
[09:41:31] ================ [PASSED] check_graphics_ip ================
[09:41:31] ===================== check_media_ip ======================
[09:41:31] [PASSED] 12.00 Xe_M
[09:41:31] [PASSED] 12.55 Xe_HPM
[09:41:31] [PASSED] 13.00 Xe_LPM+
[09:41:31] [PASSED] 13.01 Xe2_HPM
[09:41:31] [PASSED] 20.00 Xe2_LPM
[09:41:31] [PASSED] 30.00 Xe3_LPM
[09:41:31] [PASSED] 30.02 Xe3_LPM
[09:41:31] [PASSED] 35.00 Xe3p_LPM
[09:41:31] [PASSED] 35.03 Xe3p_HPM
[09:41:31] ================= [PASSED] check_media_ip ==================
[09:41:31] =================== check_platform_desc ===================
[09:41:31] [PASSED] 0x9A60 (TIGERLAKE)
[09:41:31] [PASSED] 0x9A68 (TIGERLAKE)
[09:41:31] [PASSED] 0x9A70 (TIGERLAKE)
[09:41:31] [PASSED] 0x9A40 (TIGERLAKE)
[09:41:31] [PASSED] 0x9A49 (TIGERLAKE)
[09:41:31] [PASSED] 0x9A59 (TIGERLAKE)
[09:41:31] [PASSED] 0x9A78 (TIGERLAKE)
[09:41:31] [PASSED] 0x9AC0 (TIGERLAKE)
[09:41:31] [PASSED] 0x9AC9 (TIGERLAKE)
[09:41:31] [PASSED] 0x9AD9 (TIGERLAKE)
[09:41:31] [PASSED] 0x9AF8 (TIGERLAKE)
[09:41:31] [PASSED] 0x4C80 (ROCKETLAKE)
[09:41:31] [PASSED] 0x4C8A (ROCKETLAKE)
[09:41:31] [PASSED] 0x4C8B (ROCKETLAKE)
[09:41:31] [PASSED] 0x4C8C (ROCKETLAKE)
[09:41:31] [PASSED] 0x4C90 (ROCKETLAKE)
[09:41:31] [PASSED] 0x4C9A (ROCKETLAKE)
[09:41:31] [PASSED] 0x4680 (ALDERLAKE_S)
[09:41:31] [PASSED] 0x4682 (ALDERLAKE_S)
[09:41:31] [PASSED] 0x4688 (ALDERLAKE_S)
[09:41:31] [PASSED] 0x468A (ALDERLAKE_S)
[09:41:31] [PASSED] 0x468B (ALDERLAKE_S)
[09:41:31] [PASSED] 0x4690 (ALDERLAKE_S)
[09:41:31] [PASSED] 0x4692 (ALDERLAKE_S)
[09:41:31] [PASSED] 0x4693 (ALDERLAKE_S)
[09:41:31] [PASSED] 0x46A0 (ALDERLAKE_P)
[09:41:31] [PASSED] 0x46A1 (ALDERLAKE_P)
[09:41:31] [PASSED] 0x46A2 (ALDERLAKE_P)
[09:41:31] [PASSED] 0x46A3 (ALDERLAKE_P)
[09:41:31] [PASSED] 0x46A6 (ALDERLAKE_P)
[09:41:31] [PASSED] 0x46A8 (ALDERLAKE_P)
[09:41:31] [PASSED] 0x46AA (ALDERLAKE_P)
[09:41:31] [PASSED] 0x462A (ALDERLAKE_P)
[09:41:31] [PASSED] 0x4626 (ALDERLAKE_P)
[09:41:31] [PASSED] 0x4628 (ALDERLAKE_P)
[09:41:31] [PASSED] 0x46B0 (ALDERLAKE_P)
[09:41:31] [PASSED] 0x46B1 (ALDERLAKE_P)
[09:41:31] [PASSED] 0x46B2 (ALDERLAKE_P)
[09:41:31] [PASSED] 0x46B3 (ALDERLAKE_P)
[09:41:31] [PASSED] 0x46C0 (ALDERLAKE_P)
[09:41:31] [PASSED] 0x46C1 (ALDERLAKE_P)
[09:41:31] [PASSED] 0x46C2 (ALDERLAKE_P)
[09:41:31] [PASSED] 0x46C3 (ALDERLAKE_P)
[09:41:31] [PASSED] 0x46D0 (ALDERLAKE_N)
[09:41:31] [PASSED] 0x46D1 (ALDERLAKE_N)
[09:41:31] [PASSED] 0x46D2 (ALDERLAKE_N)
[09:41:31] [PASSED] 0x46D3 (ALDERLAKE_N)
[09:41:31] [PASSED] 0x46D4 (ALDERLAKE_N)
[09:41:31] [PASSED] 0xA721 (ALDERLAKE_P)
[09:41:31] [PASSED] 0xA7A1 (ALDERLAKE_P)
[09:41:31] [PASSED] 0xA7A9 (ALDERLAKE_P)
[09:41:31] [PASSED] 0xA7AC (ALDERLAKE_P)
[09:41:31] [PASSED] 0xA7AD (ALDERLAKE_P)
[09:41:31] [PASSED] 0xA720 (ALDERLAKE_P)
[09:41:31] [PASSED] 0xA7A0 (ALDERLAKE_P)
[09:41:31] [PASSED] 0xA7A8 (ALDERLAKE_P)
[09:41:31] [PASSED] 0xA7AA (ALDERLAKE_P)
[09:41:31] [PASSED] 0xA7AB (ALDERLAKE_P)
[09:41:31] [PASSED] 0xA780 (ALDERLAKE_S)
[09:41:31] [PASSED] 0xA781 (ALDERLAKE_S)
[09:41:31] [PASSED] 0xA782 (ALDERLAKE_S)
[09:41:31] [PASSED] 0xA783 (ALDERLAKE_S)
[09:41:31] [PASSED] 0xA788 (ALDERLAKE_S)
[09:41:31] [PASSED] 0xA789 (ALDERLAKE_S)
[09:41:31] [PASSED] 0xA78A (ALDERLAKE_S)
[09:41:31] [PASSED] 0xA78B (ALDERLAKE_S)
[09:41:31] [PASSED] 0x4905 (DG1)
[09:41:31] [PASSED] 0x4906 (DG1)
[09:41:31] [PASSED] 0x4907 (DG1)
[09:41:31] [PASSED] 0x4908 (DG1)
[09:41:31] [PASSED] 0x4909 (DG1)
[09:41:31] [PASSED] 0x56C0 (DG2)
[09:41:31] [PASSED] 0x56C2 (DG2)
[09:41:31] [PASSED] 0x56C1 (DG2)
[09:41:31] [PASSED] 0x7D51 (METEORLAKE)
[09:41:31] [PASSED] 0x7DD1 (METEORLAKE)
[09:41:31] [PASSED] 0x7D41 (METEORLAKE)
[09:41:31] [PASSED] 0x7D67 (METEORLAKE)
[09:41:31] [PASSED] 0xB640 (METEORLAKE)
[09:41:31] [PASSED] 0x56A0 (DG2)
[09:41:31] [PASSED] 0x56A1 (DG2)
[09:41:31] [PASSED] 0x56A2 (DG2)
[09:41:31] [PASSED] 0x56BE (DG2)
[09:41:31] [PASSED] 0x56BF (DG2)
[09:41:31] [PASSED] 0x5690 (DG2)
[09:41:31] [PASSED] 0x5691 (DG2)
[09:41:31] [PASSED] 0x5692 (DG2)
[09:41:31] [PASSED] 0x56A5 (DG2)
[09:41:31] [PASSED] 0x56A6 (DG2)
[09:41:31] [PASSED] 0x56B0 (DG2)
[09:41:31] [PASSED] 0x56B1 (DG2)
[09:41:31] [PASSED] 0x56BA (DG2)
[09:41:31] [PASSED] 0x56BB (DG2)
[09:41:31] [PASSED] 0x56BC (DG2)
[09:41:31] [PASSED] 0x56BD (DG2)
[09:41:31] [PASSED] 0x5693 (DG2)
[09:41:31] [PASSED] 0x5694 (DG2)
[09:41:31] [PASSED] 0x5695 (DG2)
[09:41:31] [PASSED] 0x56A3 (DG2)
[09:41:31] [PASSED] 0x56A4 (DG2)
[09:41:31] [PASSED] 0x56B2 (DG2)
[09:41:31] [PASSED] 0x56B3 (DG2)
[09:41:31] [PASSED] 0x5696 (DG2)
[09:41:31] [PASSED] 0x5697 (DG2)
[09:41:31] [PASSED] 0xB69 (PVC)
[09:41:31] [PASSED] 0xB6E (PVC)
[09:41:31] [PASSED] 0xBD4 (PVC)
[09:41:31] [PASSED] 0xBD5 (PVC)
[09:41:31] [PASSED] 0xBD6 (PVC)
[09:41:31] [PASSED] 0xBD7 (PVC)
[09:41:31] [PASSED] 0xBD8 (PVC)
[09:41:31] [PASSED] 0xBD9 (PVC)
[09:41:31] [PASSED] 0xBDA (PVC)
[09:41:31] [PASSED] 0xBDB (PVC)
[09:41:31] [PASSED] 0xBE0 (PVC)
[09:41:31] [PASSED] 0xBE1 (PVC)
[09:41:31] [PASSED] 0xBE5 (PVC)
[09:41:31] [PASSED] 0x7D40 (METEORLAKE)
[09:41:31] [PASSED] 0x7D45 (METEORLAKE)
[09:41:31] [PASSED] 0x7D55 (METEORLAKE)
[09:41:31] [PASSED] 0x7D60 (METEORLAKE)
[09:41:31] [PASSED] 0x7DD5 (METEORLAKE)
[09:41:31] [PASSED] 0x6420 (LUNARLAKE)
[09:41:31] [PASSED] 0x64A0 (LUNARLAKE)
[09:41:31] [PASSED] 0x64B0 (LUNARLAKE)
[09:41:31] [PASSED] 0xE202 (BATTLEMAGE)
[09:41:31] [PASSED] 0xE209 (BATTLEMAGE)
[09:41:31] [PASSED] 0xE20B (BATTLEMAGE)
[09:41:31] [PASSED] 0xE20C (BATTLEMAGE)
[09:41:31] [PASSED] 0xE20D (BATTLEMAGE)
[09:41:31] [PASSED] 0xE210 (BATTLEMAGE)
[09:41:31] [PASSED] 0xE211 (BATTLEMAGE)
[09:41:31] [PASSED] 0xE212 (BATTLEMAGE)
[09:41:31] [PASSED] 0xE216 (BATTLEMAGE)
[09:41:31] [PASSED] 0xE220 (BATTLEMAGE)
[09:41:31] [PASSED] 0xE221 (BATTLEMAGE)
[09:41:31] [PASSED] 0xE222 (BATTLEMAGE)
[09:41:31] [PASSED] 0xE223 (BATTLEMAGE)
[09:41:31] [PASSED] 0xB080 (PANTHERLAKE)
[09:41:31] [PASSED] 0xB081 (PANTHERLAKE)
[09:41:31] [PASSED] 0xB082 (PANTHERLAKE)
[09:41:31] [PASSED] 0xB083 (PANTHERLAKE)
[09:41:31] [PASSED] 0xB084 (PANTHERLAKE)
[09:41:31] [PASSED] 0xB085 (PANTHERLAKE)
[09:41:31] [PASSED] 0xB086 (PANTHERLAKE)
[09:41:31] [PASSED] 0xB087 (PANTHERLAKE)
[09:41:31] [PASSED] 0xB08F (PANTHERLAKE)
[09:41:31] [PASSED] 0xB090 (PANTHERLAKE)
[09:41:31] [PASSED] 0xB0A0 (PANTHERLAKE)
[09:41:31] [PASSED] 0xB0B0 (PANTHERLAKE)
[09:41:31] [PASSED] 0xFD80 (PANTHERLAKE)
[09:41:31] [PASSED] 0xFD81 (PANTHERLAKE)
[09:41:31] [PASSED] 0xD740 (NOVALAKE_S)
[09:41:31] [PASSED] 0xD741 (NOVALAKE_S)
[09:41:31] [PASSED] 0xD742 (NOVALAKE_S)
[09:41:31] [PASSED] 0xD743 (NOVALAKE_S)
[09:41:31] [PASSED] 0xD744 (NOVALAKE_S)
[09:41:31] [PASSED] 0xD745 (NOVALAKE_S)
[09:41:31] [PASSED] 0x674C (CRESCENTISLAND)
[09:41:31] [PASSED] 0xD750 (NOVALAKE_P)
[09:41:31] [PASSED] 0xD751 (NOVALAKE_P)
[09:41:31] [PASSED] 0xD752 (NOVALAKE_P)
[09:41:31] [PASSED] 0xD753 (NOVALAKE_P)
[09:41:31] [PASSED] 0xD754 (NOVALAKE_P)
[09:41:31] [PASSED] 0xD755 (NOVALAKE_P)
[09:41:31] [PASSED] 0xD756 (NOVALAKE_P)
[09:41:31] [PASSED] 0xD757 (NOVALAKE_P)
[09:41:31] [PASSED] 0xD75F (NOVALAKE_P)
[09:41:31] =============== [PASSED] check_platform_desc ===============
[09:41:31] ===================== [PASSED] xe_pci ======================
[09:41:31] =================== xe_rtp (2 subtests) ====================
[09:41:31] =============== xe_rtp_process_to_sr_tests ================
[09:41:31] [PASSED] coalesce-same-reg
[09:41:31] [PASSED] no-match-no-add
[09:41:31] [PASSED] match-or
[09:41:31] [PASSED] match-or-xfail
[09:41:31] [PASSED] no-match-no-add-multiple-rules
[09:41:31] [PASSED] two-regs-two-entries
[09:41:31] [PASSED] clr-one-set-other
[09:41:31] [PASSED] set-field
[09:41:31] [PASSED] conflict-duplicate
[09:41:31] [PASSED] conflict-not-disjoint
[09:41:31] [PASSED] conflict-reg-type
[09:41:31] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[09:41:31] ================== xe_rtp_process_tests ===================
[09:41:31] [PASSED] active1
[09:41:31] [PASSED] active2
[09:41:31] [PASSED] active-inactive
[09:41:31] [PASSED] inactive-active
[09:41:31] [PASSED] inactive-1st_or_active-inactive
[09:41:31] [PASSED] inactive-2nd_or_active-inactive
[09:41:31] [PASSED] inactive-last_or_active-inactive
[09:41:31] [PASSED] inactive-no_or_active-inactive
[09:41:31] ============== [PASSED] xe_rtp_process_tests ===============
[09:41:31] ===================== [PASSED] xe_rtp ======================
[09:41:31] ==================== xe_wa (1 subtest) =====================
[09:41:31] ======================== xe_wa_gt =========================
[09:41:31] [PASSED] TIGERLAKE B0
[09:41:31] [PASSED] DG1 A0
[09:41:31] [PASSED] DG1 B0
[09:41:31] [PASSED] ALDERLAKE_S A0
[09:41:31] [PASSED] ALDERLAKE_S B0
[09:41:31] [PASSED] ALDERLAKE_S C0
[09:41:31] [PASSED] ALDERLAKE_S D0
[09:41:31] [PASSED] ALDERLAKE_P A0
[09:41:31] [PASSED] ALDERLAKE_P B0
[09:41:31] [PASSED] ALDERLAKE_P C0
[09:41:31] [PASSED] ALDERLAKE_S RPLS D0
[09:41:31] [PASSED] ALDERLAKE_P RPLU E0
[09:41:31] [PASSED] DG2 G10 C0
[09:41:31] [PASSED] DG2 G11 B1
[09:41:31] [PASSED] DG2 G12 A1
[09:41:31] [PASSED] METEORLAKE 12.70(Xe_LPG) A0 13.00(Xe_LPM+) A0
[09:41:31] [PASSED] METEORLAKE 12.71(Xe_LPG) A0 13.00(Xe_LPM+) A0
[09:41:31] [PASSED] METEORLAKE 12.74(Xe_LPG+) A0 13.00(Xe_LPM+) A0
[09:41:31] [PASSED] LUNARLAKE 20.04(Xe2_LPG) A0 20.00(Xe2_LPM) A0
[09:41:31] [PASSED] LUNARLAKE 20.04(Xe2_LPG) B0 20.00(Xe2_LPM) A0
[09:41:31] [PASSED] BATTLEMAGE 20.01(Xe2_HPG) A0 13.01(Xe2_HPM) A1
[09:41:31] [PASSED] PANTHERLAKE 30.00(Xe3_LPG) A0 30.00(Xe3_LPM) A0
[09:41:31] ==================== [PASSED] xe_wa_gt =====================
[09:41:31] ====================== [PASSED] xe_wa ======================
[09:41:31] ============================================================
[09:41:31] Testing complete. Ran 522 tests: passed: 504, skipped: 18
[09:41:31] Elapsed time: 36.408s total, 4.233s configuring, 31.658s building, 0.480s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[09:41:31] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[09:41:33] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[09:41:58] Starting KUnit Kernel (1/1)...
[09:41:58] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[09:41:58] ============ drm_test_pick_cmdline (2 subtests) ============
[09:41:58] [PASSED] drm_test_pick_cmdline_res_1920_1080_60
[09:41:58] =============== drm_test_pick_cmdline_named ===============
[09:41:58] [PASSED] NTSC
[09:41:58] [PASSED] NTSC-J
[09:41:58] [PASSED] PAL
[09:41:58] [PASSED] PAL-M
[09:41:58] =========== [PASSED] drm_test_pick_cmdline_named ===========
[09:41:58] ============== [PASSED] drm_test_pick_cmdline ==============
[09:41:58] == drm_test_atomic_get_connector_for_encoder (1 subtest) ===
[09:41:58] [PASSED] drm_test_drm_atomic_get_connector_for_encoder
[09:41:58] ==== [PASSED] drm_test_atomic_get_connector_for_encoder ====
[09:41:58] =========== drm_validate_clone_mode (2 subtests) ===========
[09:41:58] ============== drm_test_check_in_clone_mode ===============
[09:41:58] [PASSED] in_clone_mode
[09:41:58] [PASSED] not_in_clone_mode
[09:41:58] ========== [PASSED] drm_test_check_in_clone_mode ===========
[09:41:58] =============== drm_test_check_valid_clones ===============
[09:41:58] [PASSED] not_in_clone_mode
[09:41:58] [PASSED] valid_clone
[09:41:58] [PASSED] invalid_clone
[09:41:58] =========== [PASSED] drm_test_check_valid_clones ===========
[09:41:58] ============= [PASSED] drm_validate_clone_mode =============
[09:41:58] ============= drm_validate_modeset (1 subtest) =============
[09:41:58] [PASSED] drm_test_check_connector_changed_modeset
[09:41:58] ============== [PASSED] drm_validate_modeset ===============
[09:41:58] ====== drm_test_bridge_get_current_state (2 subtests) ======
[09:41:58] [PASSED] drm_test_drm_bridge_get_current_state_atomic
[09:41:58] [PASSED] drm_test_drm_bridge_get_current_state_legacy
[09:41:58] ======== [PASSED] drm_test_bridge_get_current_state ========
[09:41:58] ====== drm_test_bridge_helper_reset_crtc (3 subtests) ======
[09:41:58] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic
[09:41:58] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic_disabled
[09:41:58] [PASSED] drm_test_drm_bridge_helper_reset_crtc_legacy
[09:41:58] ======== [PASSED] drm_test_bridge_helper_reset_crtc ========
[09:41:58] ============== drm_bridge_alloc (2 subtests) ===============
[09:41:58] [PASSED] drm_test_drm_bridge_alloc_basic
[09:41:58] [PASSED] drm_test_drm_bridge_alloc_get_put
[09:41:58] ================ [PASSED] drm_bridge_alloc =================
[09:41:58] ============= drm_cmdline_parser (40 subtests) =============
[09:41:58] [PASSED] drm_test_cmdline_force_d_only
[09:41:58] [PASSED] drm_test_cmdline_force_D_only_dvi
[09:41:58] [PASSED] drm_test_cmdline_force_D_only_hdmi
[09:41:58] [PASSED] drm_test_cmdline_force_D_only_not_digital
[09:41:58] [PASSED] drm_test_cmdline_force_e_only
[09:41:58] [PASSED] drm_test_cmdline_res
[09:41:58] [PASSED] drm_test_cmdline_res_vesa
[09:41:58] [PASSED] drm_test_cmdline_res_vesa_rblank
[09:41:58] [PASSED] drm_test_cmdline_res_rblank
[09:41:58] [PASSED] drm_test_cmdline_res_bpp
[09:41:58] [PASSED] drm_test_cmdline_res_refresh
[09:41:58] [PASSED] drm_test_cmdline_res_bpp_refresh
[09:41:58] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[09:41:58] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[09:41:58] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[09:41:58] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[09:41:58] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[09:41:58] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[09:41:58] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[09:41:58] [PASSED] drm_test_cmdline_res_margins_force_on
[09:41:58] [PASSED] drm_test_cmdline_res_vesa_margins
[09:41:58] [PASSED] drm_test_cmdline_name
[09:41:58] [PASSED] drm_test_cmdline_name_bpp
[09:41:58] [PASSED] drm_test_cmdline_name_option
[09:41:58] [PASSED] drm_test_cmdline_name_bpp_option
[09:41:58] [PASSED] drm_test_cmdline_rotate_0
[09:41:58] [PASSED] drm_test_cmdline_rotate_90
[09:41:58] [PASSED] drm_test_cmdline_rotate_180
[09:41:58] [PASSED] drm_test_cmdline_rotate_270
[09:41:58] [PASSED] drm_test_cmdline_hmirror
[09:41:58] [PASSED] drm_test_cmdline_vmirror
[09:41:58] [PASSED] drm_test_cmdline_margin_options
[09:41:58] [PASSED] drm_test_cmdline_multiple_options
[09:41:58] [PASSED] drm_test_cmdline_bpp_extra_and_option
[09:41:58] [PASSED] drm_test_cmdline_extra_and_option
[09:41:58] [PASSED] drm_test_cmdline_freestanding_options
[09:41:58] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[09:41:58] [PASSED] drm_test_cmdline_panel_orientation
[09:41:58] ================ drm_test_cmdline_invalid =================
[09:41:58] [PASSED] margin_only
[09:41:58] [PASSED] interlace_only
[09:41:58] [PASSED] res_missing_x
[09:41:58] [PASSED] res_missing_y
[09:41:58] [PASSED] res_bad_y
[09:41:58] [PASSED] res_missing_y_bpp
[09:41:58] [PASSED] res_bad_bpp
[09:41:58] [PASSED] res_bad_refresh
[09:41:58] [PASSED] res_bpp_refresh_force_on_off
[09:41:58] [PASSED] res_invalid_mode
[09:41:58] [PASSED] res_bpp_wrong_place_mode
[09:41:58] [PASSED] name_bpp_refresh
[09:41:58] [PASSED] name_refresh
[09:41:58] [PASSED] name_refresh_wrong_mode
[09:41:58] [PASSED] name_refresh_invalid_mode
[09:41:58] [PASSED] rotate_multiple
[09:41:58] [PASSED] rotate_invalid_val
[09:41:58] [PASSED] rotate_truncated
[09:41:58] [PASSED] invalid_option
[09:41:58] [PASSED] invalid_tv_option
[09:41:58] [PASSED] truncated_tv_option
[09:41:58] ============ [PASSED] drm_test_cmdline_invalid =============
[09:41:58] =============== drm_test_cmdline_tv_options ===============
[09:41:58] [PASSED] NTSC
[09:41:58] [PASSED] NTSC_443
[09:41:58] [PASSED] NTSC_J
[09:41:58] [PASSED] PAL
[09:41:58] [PASSED] PAL_M
[09:41:58] [PASSED] PAL_N
[09:41:58] [PASSED] SECAM
[09:41:58] [PASSED] MONO_525
[09:41:58] [PASSED] MONO_625
[09:41:58] =========== [PASSED] drm_test_cmdline_tv_options ===========
[09:41:58] =============== [PASSED] drm_cmdline_parser ================
[09:41:58] ========== drmm_connector_hdmi_init (20 subtests) ==========
[09:41:58] [PASSED] drm_test_connector_hdmi_init_valid
[09:41:58] [PASSED] drm_test_connector_hdmi_init_bpc_8
[09:41:58] [PASSED] drm_test_connector_hdmi_init_bpc_10
[09:41:58] [PASSED] drm_test_connector_hdmi_init_bpc_12
[09:41:58] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[09:41:58] [PASSED] drm_test_connector_hdmi_init_bpc_null
[09:41:58] [PASSED] drm_test_connector_hdmi_init_formats_empty
[09:41:58] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[09:41:58] === drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[09:41:58] [PASSED] supported_formats=0x9 yuv420_allowed=1
[09:41:58] [PASSED] supported_formats=0x9 yuv420_allowed=0
[09:41:58] [PASSED] supported_formats=0x3 yuv420_allowed=1
[09:41:58] [PASSED] supported_formats=0x3 yuv420_allowed=0
[09:41:58] === [PASSED] drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[09:41:58] [PASSED] drm_test_connector_hdmi_init_null_ddc
[09:41:58] [PASSED] drm_test_connector_hdmi_init_null_product
[09:41:58] [PASSED] drm_test_connector_hdmi_init_null_vendor
[09:41:58] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[09:41:58] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[09:41:58] [PASSED] drm_test_connector_hdmi_init_product_valid
[09:41:58] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[09:41:58] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[09:41:58] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[09:41:58] ========= drm_test_connector_hdmi_init_type_valid =========
[09:41:58] [PASSED] HDMI-A
[09:41:58] [PASSED] HDMI-B
[09:41:58] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[09:41:58] ======== drm_test_connector_hdmi_init_type_invalid ========
[09:41:58] [PASSED] Unknown
[09:41:58] [PASSED] VGA
[09:41:58] [PASSED] DVI-I
[09:41:58] [PASSED] DVI-D
[09:41:58] [PASSED] DVI-A
[09:41:58] [PASSED] Composite
[09:41:58] [PASSED] SVIDEO
[09:41:58] [PASSED] LVDS
[09:41:58] [PASSED] Component
[09:41:58] [PASSED] DIN
[09:41:58] [PASSED] DP
[09:41:58] [PASSED] TV
[09:41:58] [PASSED] eDP
[09:41:58] [PASSED] Virtual
[09:41:58] [PASSED] DSI
[09:41:58] [PASSED] DPI
[09:41:58] [PASSED] Writeback
[09:41:58] [PASSED] SPI
[09:41:58] [PASSED] USB
[09:41:58] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[09:41:58] ============ [PASSED] drmm_connector_hdmi_init =============
[09:41:58] ============= drmm_connector_init (3 subtests) =============
[09:41:58] [PASSED] drm_test_drmm_connector_init
[09:41:58] [PASSED] drm_test_drmm_connector_init_null_ddc
[09:41:58] ========= drm_test_drmm_connector_init_type_valid =========
[09:41:58] [PASSED] Unknown
[09:41:58] [PASSED] VGA
[09:41:58] [PASSED] DVI-I
[09:41:58] [PASSED] DVI-D
[09:41:58] [PASSED] DVI-A
[09:41:58] [PASSED] Composite
[09:41:58] [PASSED] SVIDEO
[09:41:58] [PASSED] LVDS
[09:41:58] [PASSED] Component
[09:41:58] [PASSED] DIN
[09:41:58] [PASSED] DP
[09:41:58] [PASSED] HDMI-A
[09:41:58] [PASSED] HDMI-B
[09:41:58] [PASSED] TV
[09:41:58] [PASSED] eDP
[09:41:58] [PASSED] Virtual
[09:41:58] [PASSED] DSI
[09:41:58] [PASSED] DPI
[09:41:58] [PASSED] Writeback
[09:41:58] [PASSED] SPI
[09:41:58] [PASSED] USB
[09:41:58] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[09:41:58] =============== [PASSED] drmm_connector_init ===============
[09:41:58] ========= drm_connector_dynamic_init (6 subtests) ==========
[09:41:58] [PASSED] drm_test_drm_connector_dynamic_init
[09:41:58] [PASSED] drm_test_drm_connector_dynamic_init_null_ddc
[09:41:58] [PASSED] drm_test_drm_connector_dynamic_init_not_added
[09:41:58] [PASSED] drm_test_drm_connector_dynamic_init_properties
[09:41:58] ===== drm_test_drm_connector_dynamic_init_type_valid ======
[09:41:58] [PASSED] Unknown
[09:41:58] [PASSED] VGA
[09:41:58] [PASSED] DVI-I
[09:41:58] [PASSED] DVI-D
[09:41:58] [PASSED] DVI-A
[09:41:58] [PASSED] Composite
[09:41:58] [PASSED] SVIDEO
[09:41:58] [PASSED] LVDS
[09:41:58] [PASSED] Component
[09:41:58] [PASSED] DIN
[09:41:58] [PASSED] DP
[09:41:58] [PASSED] HDMI-A
[09:41:58] [PASSED] HDMI-B
[09:41:58] [PASSED] TV
[09:41:58] [PASSED] eDP
[09:41:58] [PASSED] Virtual
[09:41:58] [PASSED] DSI
[09:41:58] [PASSED] DPI
[09:41:58] [PASSED] Writeback
[09:41:58] [PASSED] SPI
[09:41:58] [PASSED] USB
[09:41:58] = [PASSED] drm_test_drm_connector_dynamic_init_type_valid ==
[09:41:58] ======== drm_test_drm_connector_dynamic_init_name =========
[09:41:58] [PASSED] Unknown
[09:41:58] [PASSED] VGA
[09:41:58] [PASSED] DVI-I
[09:41:58] [PASSED] DVI-D
[09:41:58] [PASSED] DVI-A
[09:41:58] [PASSED] Composite
[09:41:58] [PASSED] SVIDEO
[09:41:58] [PASSED] LVDS
[09:41:58] [PASSED] Component
[09:41:58] [PASSED] DIN
[09:41:58] [PASSED] DP
[09:41:58] [PASSED] HDMI-A
[09:41:58] [PASSED] HDMI-B
[09:41:58] [PASSED] TV
[09:41:58] [PASSED] eDP
[09:41:58] [PASSED] Virtual
[09:41:58] [PASSED] DSI
[09:41:58] [PASSED] DPI
[09:41:58] [PASSED] Writeback
[09:41:58] [PASSED] SPI
[09:41:58] [PASSED] USB
[09:41:58] ==== [PASSED] drm_test_drm_connector_dynamic_init_name =====
[09:41:58] =========== [PASSED] drm_connector_dynamic_init ============
[09:41:58] ==== drm_connector_dynamic_register_early (4 subtests) =====
[09:41:58] [PASSED] drm_test_drm_connector_dynamic_register_early_on_list
[09:41:58] [PASSED] drm_test_drm_connector_dynamic_register_early_defer
[09:41:58] [PASSED] drm_test_drm_connector_dynamic_register_early_no_init
[09:41:58] [PASSED] drm_test_drm_connector_dynamic_register_early_no_mode_object
[09:41:58] ====== [PASSED] drm_connector_dynamic_register_early =======
[09:41:58] ======= drm_connector_dynamic_register (7 subtests) ========
[09:41:58] [PASSED] drm_test_drm_connector_dynamic_register_on_list
[09:41:58] [PASSED] drm_test_drm_connector_dynamic_register_no_defer
[09:41:58] [PASSED] drm_test_drm_connector_dynamic_register_no_init
[09:41:58] [PASSED] drm_test_drm_connector_dynamic_register_mode_object
[09:41:58] [PASSED] drm_test_drm_connector_dynamic_register_sysfs
[09:41:58] [PASSED] drm_test_drm_connector_dynamic_register_sysfs_name
[09:41:58] [PASSED] drm_test_drm_connector_dynamic_register_debugfs
[09:41:58] ========= [PASSED] drm_connector_dynamic_register ==========
[09:41:58] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[09:41:58] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[09:41:58] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[09:41:58] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[09:41:58] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[09:41:58] ========== drm_test_get_tv_mode_from_name_valid ===========
[09:41:58] [PASSED] NTSC
[09:41:58] [PASSED] NTSC-443
[09:41:58] [PASSED] NTSC-J
[09:41:58] [PASSED] PAL
[09:41:58] [PASSED] PAL-M
[09:41:58] [PASSED] PAL-N
[09:41:58] [PASSED] SECAM
[09:41:58] [PASSED] Mono
[09:41:58] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[09:41:58] [PASSED] drm_test_get_tv_mode_from_name_truncated
[09:41:58] ============ [PASSED] drm_get_tv_mode_from_name ============
[09:41:58] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[09:41:58] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[09:41:58] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[09:41:58] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[09:41:58] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[09:41:58] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[09:41:58] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[09:41:58] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid =
[09:41:58] [PASSED] VIC 96
[09:41:58] [PASSED] VIC 97
[09:41:58] [PASSED] VIC 101
[09:41:58] [PASSED] VIC 102
[09:41:58] [PASSED] VIC 106
[09:41:58] [PASSED] VIC 107
[09:41:58] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[09:41:58] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[09:41:58] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[09:41:58] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[09:41:58] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[09:41:58] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[09:41:58] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[09:41:58] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[09:41:58] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name ====
[09:41:58] [PASSED] Automatic
[09:41:58] [PASSED] Full
[09:41:58] [PASSED] Limited 16:235
[09:41:58] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[09:41:58] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[09:41:58] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[09:41:58] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[09:41:58] === drm_test_drm_hdmi_connector_get_output_format_name ====
[09:41:58] [PASSED] RGB
[09:41:58] [PASSED] YUV 4:2:0
[09:41:58] [PASSED] YUV 4:2:2
[09:41:58] [PASSED] YUV 4:4:4
[09:41:58] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[09:41:58] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[09:41:58] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[09:41:58] ============= drm_damage_helper (21 subtests) ==============
[09:41:58] [PASSED] drm_test_damage_iter_no_damage
[09:41:58] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[09:41:58] [PASSED] drm_test_damage_iter_no_damage_src_moved
[09:41:58] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[09:41:58] [PASSED] drm_test_damage_iter_no_damage_not_visible
[09:41:58] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[09:41:58] [PASSED] drm_test_damage_iter_no_damage_no_fb
[09:41:58] [PASSED] drm_test_damage_iter_simple_damage
[09:41:58] [PASSED] drm_test_damage_iter_single_damage
[09:41:58] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[09:41:58] [PASSED] drm_test_damage_iter_single_damage_outside_src
[09:41:58] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[09:41:58] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[09:41:58] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[09:41:58] [PASSED] drm_test_damage_iter_single_damage_src_moved
[09:41:58] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[09:41:58] [PASSED] drm_test_damage_iter_damage
[09:41:58] [PASSED] drm_test_damage_iter_damage_one_intersect
[09:41:58] [PASSED] drm_test_damage_iter_damage_one_outside
[09:41:58] [PASSED] drm_test_damage_iter_damage_src_moved
[09:41:58] [PASSED] drm_test_damage_iter_damage_not_visible
[09:41:58] ================ [PASSED] drm_damage_helper ================
[09:41:58] ============== drm_dp_mst_helper (3 subtests) ==============
[09:41:58] ============== drm_test_dp_mst_calc_pbn_mode ==============
[09:41:58] [PASSED] Clock 154000 BPP 30 DSC disabled
[09:41:58] [PASSED] Clock 234000 BPP 30 DSC disabled
[09:41:58] [PASSED] Clock 297000 BPP 24 DSC disabled
[09:41:58] [PASSED] Clock 332880 BPP 24 DSC enabled
[09:41:58] [PASSED] Clock 324540 BPP 24 DSC enabled
[09:41:58] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[09:41:58] ============== drm_test_dp_mst_calc_pbn_div ===============
[09:41:58] [PASSED] Link rate 2000000 lane count 4
[09:41:58] [PASSED] Link rate 2000000 lane count 2
[09:41:58] [PASSED] Link rate 2000000 lane count 1
[09:41:58] [PASSED] Link rate 1350000 lane count 4
[09:41:58] [PASSED] Link rate 1350000 lane count 2
[09:41:58] [PASSED] Link rate 1350000 lane count 1
[09:41:58] [PASSED] Link rate 1000000 lane count 4
[09:41:58] [PASSED] Link rate 1000000 lane count 2
[09:41:58] [PASSED] Link rate 1000000 lane count 1
[09:41:58] [PASSED] Link rate 810000 lane count 4
[09:41:58] [PASSED] Link rate 810000 lane count 2
[09:41:58] [PASSED] Link rate 810000 lane count 1
[09:41:58] [PASSED] Link rate 540000 lane count 4
[09:41:58] [PASSED] Link rate 540000 lane count 2
[09:41:58] [PASSED] Link rate 540000 lane count 1
[09:41:58] [PASSED] Link rate 270000 lane count 4
[09:41:58] [PASSED] Link rate 270000 lane count 2
[09:41:58] [PASSED] Link rate 270000 lane count 1
[09:41:58] [PASSED] Link rate 162000 lane count 4
[09:41:58] [PASSED] Link rate 162000 lane count 2
[09:41:58] [PASSED] Link rate 162000 lane count 1
[09:41:58] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[09:41:58] ========= drm_test_dp_mst_sideband_msg_req_decode =========
[09:41:58] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[09:41:58] [PASSED] DP_POWER_UP_PHY with port number
[09:41:58] [PASSED] DP_POWER_DOWN_PHY with port number
[09:41:58] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[09:41:58] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[09:41:58] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[09:41:58] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[09:41:58] [PASSED] DP_QUERY_PAYLOAD with port number
[09:41:58] [PASSED] DP_QUERY_PAYLOAD with VCPI
[09:41:58] [PASSED] DP_REMOTE_DPCD_READ with port number
[09:41:58] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[09:41:58] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[09:41:58] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[09:41:58] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[09:41:58] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[09:41:58] [PASSED] DP_REMOTE_I2C_READ with port number
[09:41:58] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[09:41:58] [PASSED] DP_REMOTE_I2C_READ with transactions array
[09:41:58] [PASSED] DP_REMOTE_I2C_WRITE with port number
[09:41:58] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[09:41:58] [PASSED] DP_REMOTE_I2C_WRITE with data array
[09:41:58] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[09:41:58] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[09:41:58] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[09:41:58] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[09:41:58] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[09:41:58] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[09:41:58] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[09:41:58] ================ [PASSED] drm_dp_mst_helper ================
[09:41:58] ================== drm_exec (7 subtests) ===================
[09:41:58] [PASSED] sanitycheck
[09:41:58] [PASSED] test_lock
[09:41:58] [PASSED] test_lock_unlock
[09:41:58] [PASSED] test_duplicates
[09:41:58] [PASSED] test_prepare
[09:41:58] [PASSED] test_prepare_array
[09:41:58] [PASSED] test_multiple_loops
[09:41:58] ==================== [PASSED] drm_exec =====================
[09:41:58] =========== drm_format_helper_test (17 subtests) ===========
[09:41:58] ============== drm_test_fb_xrgb8888_to_gray8 ==============
[09:41:58] [PASSED] single_pixel_source_buffer
[09:41:58] [PASSED] single_pixel_clip_rectangle
[09:41:58] [PASSED] well_known_colors
[09:41:58] [PASSED] destination_pitch
[09:41:58] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[09:41:58] ============= drm_test_fb_xrgb8888_to_rgb332 ==============
[09:41:58] [PASSED] single_pixel_source_buffer
[09:41:58] [PASSED] single_pixel_clip_rectangle
[09:41:58] [PASSED] well_known_colors
[09:41:58] [PASSED] destination_pitch
[09:41:58] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[09:41:58] ============= drm_test_fb_xrgb8888_to_rgb565 ==============
[09:41:58] [PASSED] single_pixel_source_buffer
[09:41:58] [PASSED] single_pixel_clip_rectangle
[09:41:58] [PASSED] well_known_colors
[09:41:58] [PASSED] destination_pitch
[09:41:58] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[09:41:58] ============ drm_test_fb_xrgb8888_to_xrgb1555 =============
[09:41:58] [PASSED] single_pixel_source_buffer
[09:41:58] [PASSED] single_pixel_clip_rectangle
[09:41:58] [PASSED] well_known_colors
[09:41:58] [PASSED] destination_pitch
[09:41:58] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[09:41:58] ============ drm_test_fb_xrgb8888_to_argb1555 =============
[09:41:58] [PASSED] single_pixel_source_buffer
[09:41:58] [PASSED] single_pixel_clip_rectangle
[09:41:58] [PASSED] well_known_colors
[09:41:58] [PASSED] destination_pitch
[09:41:58] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[09:41:58] ============ drm_test_fb_xrgb8888_to_rgba5551 =============
[09:41:58] [PASSED] single_pixel_source_buffer
[09:41:58] [PASSED] single_pixel_clip_rectangle
[09:41:58] [PASSED] well_known_colors
[09:41:58] [PASSED] destination_pitch
[09:41:58] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[09:41:58] ============= drm_test_fb_xrgb8888_to_rgb888 ==============
[09:41:58] [PASSED] single_pixel_source_buffer
[09:41:58] [PASSED] single_pixel_clip_rectangle
[09:41:58] [PASSED] well_known_colors
[09:41:58] [PASSED] destination_pitch
[09:41:58] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[09:41:58] ============= drm_test_fb_xrgb8888_to_bgr888 ==============
[09:41:58] [PASSED] single_pixel_source_buffer
[09:41:58] [PASSED] single_pixel_clip_rectangle
[09:41:58] [PASSED] well_known_colors
[09:41:58] [PASSED] destination_pitch
[09:41:58] ========= [PASSED] drm_test_fb_xrgb8888_to_bgr888 ==========
[09:41:58] ============ drm_test_fb_xrgb8888_to_argb8888 =============
[09:41:58] [PASSED] single_pixel_source_buffer
[09:41:58] [PASSED] single_pixel_clip_rectangle
[09:41:58] [PASSED] well_known_colors
[09:41:58] [PASSED] destination_pitch
[09:41:58] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[09:41:58] =========== drm_test_fb_xrgb8888_to_xrgb2101010 ===========
[09:41:58] [PASSED] single_pixel_source_buffer
[09:41:58] [PASSED] single_pixel_clip_rectangle
[09:41:58] [PASSED] well_known_colors
[09:41:58] [PASSED] destination_pitch
[09:41:58] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[09:41:58] =========== drm_test_fb_xrgb8888_to_argb2101010 ===========
[09:41:58] [PASSED] single_pixel_source_buffer
[09:41:58] [PASSED] single_pixel_clip_rectangle
[09:41:58] [PASSED] well_known_colors
[09:41:58] [PASSED] destination_pitch
[09:41:58] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[09:41:58] ============== drm_test_fb_xrgb8888_to_mono ===============
[09:41:58] [PASSED] single_pixel_source_buffer
[09:41:58] [PASSED] single_pixel_clip_rectangle
[09:41:58] [PASSED] well_known_colors
[09:41:58] [PASSED] destination_pitch
[09:41:58] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[09:41:58] ==================== drm_test_fb_swab =====================
[09:41:58] [PASSED] single_pixel_source_buffer
[09:41:58] [PASSED] single_pixel_clip_rectangle
[09:41:58] [PASSED] well_known_colors
[09:41:58] [PASSED] destination_pitch
[09:41:58] ================ [PASSED] drm_test_fb_swab =================
[09:41:58] ============ drm_test_fb_xrgb8888_to_xbgr8888 =============
[09:41:58] [PASSED] single_pixel_source_buffer
[09:41:58] [PASSED] single_pixel_clip_rectangle
[09:41:58] [PASSED] well_known_colors
[09:41:58] [PASSED] destination_pitch
[09:41:58] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[09:41:58] ============ drm_test_fb_xrgb8888_to_abgr8888 =============
[09:41:58] [PASSED] single_pixel_source_buffer
[09:41:58] [PASSED] single_pixel_clip_rectangle
[09:41:58] [PASSED] well_known_colors
[09:41:58] [PASSED] destination_pitch
[09:41:58] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[09:41:58] ================= drm_test_fb_clip_offset =================
[09:41:58] [PASSED] pass through
[09:41:58] [PASSED] horizontal offset
[09:41:58] [PASSED] vertical offset
[09:41:58] [PASSED] horizontal and vertical offset
[09:41:58] [PASSED] horizontal offset (custom pitch)
[09:41:58] [PASSED] vertical offset (custom pitch)
[09:41:58] [PASSED] horizontal and vertical offset (custom pitch)
[09:41:58] ============= [PASSED] drm_test_fb_clip_offset =============
[09:41:58] =================== drm_test_fb_memcpy ====================
[09:41:58] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[09:41:58] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[09:41:58] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[09:41:58] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[09:41:58] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[09:41:58] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[09:41:58] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[09:41:58] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[09:41:58] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[09:41:58] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[09:41:58] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[09:41:58] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[09:41:58] =============== [PASSED] drm_test_fb_memcpy ================
[09:41:58] ============= [PASSED] drm_format_helper_test ==============
[09:41:58] ================= drm_format (18 subtests) =================
[09:41:58] [PASSED] drm_test_format_block_width_invalid
[09:41:58] [PASSED] drm_test_format_block_width_one_plane
[09:41:58] [PASSED] drm_test_format_block_width_two_plane
[09:41:58] [PASSED] drm_test_format_block_width_three_plane
[09:41:58] [PASSED] drm_test_format_block_width_tiled
[09:41:58] [PASSED] drm_test_format_block_height_invalid
[09:41:58] [PASSED] drm_test_format_block_height_one_plane
[09:41:58] [PASSED] drm_test_format_block_height_two_plane
[09:41:58] [PASSED] drm_test_format_block_height_three_plane
[09:41:58] [PASSED] drm_test_format_block_height_tiled
[09:41:58] [PASSED] drm_test_format_min_pitch_invalid
[09:41:58] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[09:41:58] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[09:41:58] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[09:41:58] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[09:41:58] [PASSED] drm_test_format_min_pitch_two_plane
[09:41:58] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[09:41:58] [PASSED] drm_test_format_min_pitch_tiled
[09:41:58] =================== [PASSED] drm_format ====================
[09:41:58] ============== drm_framebuffer (10 subtests) ===============
[09:41:58] ========== drm_test_framebuffer_check_src_coords ==========
[09:41:58] [PASSED] Success: source fits into fb
[09:41:58] [PASSED] Fail: overflowing fb with x-axis coordinate
[09:41:58] [PASSED] Fail: overflowing fb with y-axis coordinate
[09:41:58] [PASSED] Fail: overflowing fb with source width
[09:41:58] [PASSED] Fail: overflowing fb with source height
[09:41:58] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[09:41:58] [PASSED] drm_test_framebuffer_cleanup
[09:41:58] =============== drm_test_framebuffer_create ===============
[09:41:58] [PASSED] ABGR8888 normal sizes
[09:41:58] [PASSED] ABGR8888 max sizes
[09:41:58] [PASSED] ABGR8888 pitch greater than min required
[09:41:58] [PASSED] ABGR8888 pitch less than min required
[09:41:58] [PASSED] ABGR8888 Invalid width
[09:41:58] [PASSED] ABGR8888 Invalid buffer handle
[09:41:58] [PASSED] No pixel format
[09:41:58] [PASSED] ABGR8888 Width 0
[09:41:58] [PASSED] ABGR8888 Height 0
[09:41:58] [PASSED] ABGR8888 Out of bound height * pitch combination
[09:41:58] [PASSED] ABGR8888 Large buffer offset
[09:41:58] [PASSED] ABGR8888 Buffer offset for inexistent plane
[09:41:58] [PASSED] ABGR8888 Invalid flag
[09:41:58] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[09:41:58] [PASSED] ABGR8888 Valid buffer modifier
[09:41:58] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[09:41:58] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[09:41:58] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[09:41:58] [PASSED] NV12 Normal sizes
[09:41:58] [PASSED] NV12 Max sizes
[09:41:58] [PASSED] NV12 Invalid pitch
[09:41:58] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[09:41:58] [PASSED] NV12 different modifier per-plane
[09:41:58] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[09:41:58] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[09:41:58] [PASSED] NV12 Modifier for inexistent plane
[09:41:58] [PASSED] NV12 Handle for inexistent plane
[09:41:58] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[09:41:58] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[09:41:58] [PASSED] YVU420 Normal sizes
[09:41:58] [PASSED] YVU420 Max sizes
[09:41:58] [PASSED] YVU420 Invalid pitch
[09:41:58] [PASSED] YVU420 Different pitches
[09:41:58] [PASSED] YVU420 Different buffer offsets/pitches
[09:41:58] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[09:41:58] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[09:41:58] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[09:41:58] [PASSED] YVU420 Valid modifier
[09:41:58] [PASSED] YVU420 Different modifiers per plane
[09:41:58] [PASSED] YVU420 Modifier for inexistent plane
[09:41:58] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[09:41:58] [PASSED] X0L2 Normal sizes
[09:41:58] [PASSED] X0L2 Max sizes
[09:41:58] [PASSED] X0L2 Invalid pitch
[09:41:58] [PASSED] X0L2 Pitch greater than minimum required
[09:41:58] [PASSED] X0L2 Handle for inexistent plane
[09:41:58] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[09:41:58] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[09:41:58] [PASSED] X0L2 Valid modifier
[09:41:58] [PASSED] X0L2 Modifier for inexistent plane
[09:41:58] =========== [PASSED] drm_test_framebuffer_create ===========
[09:41:58] [PASSED] drm_test_framebuffer_free
[09:41:58] [PASSED] drm_test_framebuffer_init
[09:41:58] [PASSED] drm_test_framebuffer_init_bad_format
[09:41:58] [PASSED] drm_test_framebuffer_init_dev_mismatch
[09:41:58] [PASSED] drm_test_framebuffer_lookup
[09:41:58] [PASSED] drm_test_framebuffer_lookup_inexistent
[09:41:58] [PASSED] drm_test_framebuffer_modifiers_not_supported
[09:41:58] ================= [PASSED] drm_framebuffer =================
[09:41:58] ================ drm_gem_shmem (8 subtests) ================
[09:41:58] [PASSED] drm_gem_shmem_test_obj_create
[09:41:58] [PASSED] drm_gem_shmem_test_obj_create_private
[09:41:58] [PASSED] drm_gem_shmem_test_pin_pages
[09:41:58] [PASSED] drm_gem_shmem_test_vmap
[09:41:58] [PASSED] drm_gem_shmem_test_get_sg_table
[09:41:58] [PASSED] drm_gem_shmem_test_get_pages_sgt
[09:41:58] [PASSED] drm_gem_shmem_test_madvise
[09:41:58] [PASSED] drm_gem_shmem_test_purge
[09:41:58] ================== [PASSED] drm_gem_shmem ==================
[09:41:58] === drm_atomic_helper_connector_hdmi_check (27 subtests) ===
[09:41:58] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[09:41:58] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[09:41:58] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[09:41:58] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[09:41:58] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[09:41:58] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[09:41:58] ====== drm_test_check_broadcast_rgb_cea_mode_yuv420 =======
[09:41:58] [PASSED] Automatic
[09:41:58] [PASSED] Full
[09:41:58] [PASSED] Limited 16:235
[09:41:58] == [PASSED] drm_test_check_broadcast_rgb_cea_mode_yuv420 ===
[09:41:58] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[09:41:58] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[09:41:58] [PASSED] drm_test_check_disable_connector
[09:41:58] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[09:41:58] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_rgb
[09:41:58] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_yuv420
[09:41:58] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv422
[09:41:58] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv420
[09:41:58] [PASSED] drm_test_check_driver_unsupported_fallback_yuv420
[09:41:58] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[09:41:58] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[09:41:58] [PASSED] drm_test_check_output_bpc_dvi
[09:41:58] [PASSED] drm_test_check_output_bpc_format_vic_1
[09:41:58] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[09:41:58] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[09:41:58] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[09:41:58] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[09:41:58] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[09:41:58] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[09:41:58] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[09:41:58] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[09:41:58] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[09:41:58] [PASSED] drm_test_check_broadcast_rgb_value
[09:41:58] [PASSED] drm_test_check_bpc_8_value
[09:41:58] [PASSED] drm_test_check_bpc_10_value
[09:41:58] [PASSED] drm_test_check_bpc_12_value
[09:41:58] [PASSED] drm_test_check_format_value
[09:41:58] [PASSED] drm_test_check_tmds_char_value
[09:41:58] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[09:41:58] = drm_atomic_helper_connector_hdmi_mode_valid (4 subtests) =
[09:41:58] [PASSED] drm_test_check_mode_valid
[09:41:58] [PASSED] drm_test_check_mode_valid_reject
[09:41:58] [PASSED] drm_test_check_mode_valid_reject_rate
[09:41:58] [PASSED] drm_test_check_mode_valid_reject_max_clock
[09:41:58] === [PASSED] drm_atomic_helper_connector_hdmi_mode_valid ===
[09:41:58] = drm_atomic_helper_connector_hdmi_infoframes (5 subtests) =
[09:41:58] [PASSED] drm_test_check_infoframes
[09:41:58] [PASSED] drm_test_check_reject_avi_infoframe
[09:41:58] [PASSED] drm_test_check_reject_hdr_infoframe_bpc_8
[09:41:58] [PASSED] drm_test_check_reject_hdr_infoframe_bpc_10
[09:41:58] [PASSED] drm_test_check_reject_audio_infoframe
[09:41:58] === [PASSED] drm_atomic_helper_connector_hdmi_infoframes ===
[09:41:58] ================= drm_managed (2 subtests) =================
[09:41:58] [PASSED] drm_test_managed_release_action
[09:41:58] [PASSED] drm_test_managed_run_action
[09:41:58] =================== [PASSED] drm_managed ===================
[09:41:58] =================== drm_mm (6 subtests) ====================
[09:41:58] [PASSED] drm_test_mm_init
[09:41:58] [PASSED] drm_test_mm_debug
[09:41:58] [PASSED] drm_test_mm_align32
[09:41:58] [PASSED] drm_test_mm_align64
[09:41:58] [PASSED] drm_test_mm_lowest
[09:41:58] [PASSED] drm_test_mm_highest
[09:41:58] ===================== [PASSED] drm_mm ======================
[09:41:58] ============= drm_modes_analog_tv (5 subtests) =============
[09:41:58] [PASSED] drm_test_modes_analog_tv_mono_576i
[09:41:58] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[09:41:58] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[09:41:58] [PASSED] drm_test_modes_analog_tv_pal_576i
[09:41:58] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[09:41:58] =============== [PASSED] drm_modes_analog_tv ===============
[09:41:58] ============== drm_plane_helper (2 subtests) ===============
[09:41:58] =============== drm_test_check_plane_state ================
[09:41:58] [PASSED] clipping_simple
[09:41:58] [PASSED] clipping_rotate_reflect
[09:41:58] [PASSED] positioning_simple
[09:41:58] [PASSED] upscaling
[09:41:58] [PASSED] downscaling
[09:41:58] [PASSED] rounding1
[09:41:58] [PASSED] rounding2
[09:41:58] [PASSED] rounding3
[09:41:58] [PASSED] rounding4
[09:41:58] =========== [PASSED] drm_test_check_plane_state ============
[09:41:58] =========== drm_test_check_invalid_plane_state ============
[09:41:58] [PASSED] positioning_invalid
[09:41:58] [PASSED] upscaling_invalid
[09:41:58] [PASSED] downscaling_invalid
[09:41:58] ======= [PASSED] drm_test_check_invalid_plane_state ========
[09:41:58] ================ [PASSED] drm_plane_helper =================
[09:41:58] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[09:41:58] ====== drm_test_connector_helper_tv_get_modes_check =======
[09:41:58] [PASSED] None
[09:41:58] [PASSED] PAL
[09:41:58] [PASSED] NTSC
[09:41:58] [PASSED] Both, NTSC Default
[09:41:58] [PASSED] Both, PAL Default
[09:41:58] [PASSED] Both, NTSC Default, with PAL on command-line
[09:41:58] [PASSED] Both, PAL Default, with NTSC on command-line
[09:41:58] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[09:41:58] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[09:41:58] ================== drm_rect (9 subtests) ===================
[09:41:58] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[09:41:58] [PASSED] drm_test_rect_clip_scaled_not_clipped
[09:41:58] [PASSED] drm_test_rect_clip_scaled_clipped
[09:41:58] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[09:41:58] ================= drm_test_rect_intersect =================
[09:41:58] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[09:41:58] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[09:41:58] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[09:41:58] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[09:41:58] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[09:41:58] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[09:41:58] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[09:41:58] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[09:41:58] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[09:41:58] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[09:41:58] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[09:41:58] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[09:41:58] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[09:41:58] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[09:41:58] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[09:41:58] ============= [PASSED] drm_test_rect_intersect =============
[09:41:58] ================ drm_test_rect_calc_hscale ================
[09:41:58] [PASSED] normal use
[09:41:58] [PASSED] out of max range
[09:41:58] [PASSED] out of min range
[09:41:58] [PASSED] zero dst
[09:41:58] [PASSED] negative src
[09:41:58] [PASSED] negative dst
[09:41:58] ============ [PASSED] drm_test_rect_calc_hscale ============
[09:41:58] ================ drm_test_rect_calc_vscale ================
[09:41:58] [PASSED] normal use
[09:41:58] [PASSED] out of max range
[09:41:58] [PASSED] out of min range
[09:41:58] [PASSED] zero dst
[09:41:58] [PASSED] negative src
[09:41:58] [PASSED] negative dst
[09:41:58] ============ [PASSED] drm_test_rect_calc_vscale ============
[09:41:58] ================== drm_test_rect_rotate ===================
[09:41:58] [PASSED] reflect-x
[09:41:58] [PASSED] reflect-y
[09:41:58] [PASSED] rotate-0
[09:41:58] [PASSED] rotate-90
[09:41:58] [PASSED] rotate-180
[09:41:58] [PASSED] rotate-270
[09:41:58] ============== [PASSED] drm_test_rect_rotate ===============
[09:41:58] ================ drm_test_rect_rotate_inv =================
[09:41:58] [PASSED] reflect-x
[09:41:58] [PASSED] reflect-y
[09:41:58] [PASSED] rotate-0
[09:41:58] [PASSED] rotate-90
[09:41:58] [PASSED] rotate-180
[09:41:58] [PASSED] rotate-270
[09:41:58] ============ [PASSED] drm_test_rect_rotate_inv =============
[09:41:58] ==================== [PASSED] drm_rect =====================
[09:41:58] ============ drm_sysfb_modeset_test (1 subtest) ============
[09:41:58] ============ drm_test_sysfb_build_fourcc_list =============
[09:41:58] [PASSED] no native formats
[09:41:58] [PASSED] XRGB8888 as native format
[09:41:58] [PASSED] remove duplicates
[09:41:58] [PASSED] convert alpha formats
[09:41:58] [PASSED] random formats
[09:41:58] ======== [PASSED] drm_test_sysfb_build_fourcc_list =========
[09:41:58] ============= [PASSED] drm_sysfb_modeset_test ==============
[09:41:58] ================== drm_fixp (2 subtests) ===================
[09:41:58] [PASSED] drm_test_int2fixp
[09:41:58] [PASSED] drm_test_sm2fixp
[09:41:58] ==================== [PASSED] drm_fixp =====================
[09:41:58] ============================================================
[09:41:58] Testing complete. Ran 621 tests: passed: 621
[09:41:58] Elapsed time: 27.229s total, 1.677s configuring, 25.384s building, 0.135s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[09:41:58] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[09:42:00] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[09:42:10] Starting KUnit Kernel (1/1)...
[09:42:10] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[09:42:10] ================= ttm_device (5 subtests) ==================
[09:42:10] [PASSED] ttm_device_init_basic
[09:42:10] [PASSED] ttm_device_init_multiple
[09:42:10] [PASSED] ttm_device_fini_basic
[09:42:10] [PASSED] ttm_device_init_no_vma_man
[09:42:10] ================== ttm_device_init_pools ==================
[09:42:10] [PASSED] No DMA allocations, no DMA32 required
[09:42:10] [PASSED] DMA allocations, DMA32 required
[09:42:10] [PASSED] No DMA allocations, DMA32 required
[09:42:10] [PASSED] DMA allocations, no DMA32 required
[09:42:10] ============== [PASSED] ttm_device_init_pools ==============
[09:42:10] =================== [PASSED] ttm_device ====================
[09:42:10] ================== ttm_pool (8 subtests) ===================
[09:42:10] ================== ttm_pool_alloc_basic ===================
[09:42:10] [PASSED] One page
[09:42:10] [PASSED] More than one page
[09:42:10] [PASSED] Above the allocation limit
[09:42:10] [PASSED] One page, with coherent DMA mappings enabled
[09:42:10] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[09:42:10] ============== [PASSED] ttm_pool_alloc_basic ===============
[09:42:10] ============== ttm_pool_alloc_basic_dma_addr ==============
[09:42:10] [PASSED] One page
[09:42:10] [PASSED] More than one page
[09:42:10] [PASSED] Above the allocation limit
[09:42:10] [PASSED] One page, with coherent DMA mappings enabled
[09:42:10] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[09:42:10] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[09:42:10] [PASSED] ttm_pool_alloc_order_caching_match
[09:42:10] [PASSED] ttm_pool_alloc_caching_mismatch
[09:42:10] [PASSED] ttm_pool_alloc_order_mismatch
[09:42:10] [PASSED] ttm_pool_free_dma_alloc
[09:42:10] [PASSED] ttm_pool_free_no_dma_alloc
[09:42:10] [PASSED] ttm_pool_fini_basic
[09:42:10] ==================== [PASSED] ttm_pool =====================
[09:42:10] ================ ttm_resource (8 subtests) =================
[09:42:10] ================= ttm_resource_init_basic =================
[09:42:10] [PASSED] Init resource in TTM_PL_SYSTEM
[09:42:10] [PASSED] Init resource in TTM_PL_VRAM
[09:42:10] [PASSED] Init resource in a private placement
[09:42:10] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[09:42:10] ============= [PASSED] ttm_resource_init_basic =============
[09:42:10] [PASSED] ttm_resource_init_pinned
[09:42:10] [PASSED] ttm_resource_fini_basic
[09:42:10] [PASSED] ttm_resource_manager_init_basic
[09:42:10] [PASSED] ttm_resource_manager_usage_basic
[09:42:10] [PASSED] ttm_resource_manager_set_used_basic
[09:42:10] [PASSED] ttm_sys_man_alloc_basic
[09:42:10] [PASSED] ttm_sys_man_free_basic
[09:42:10] ================== [PASSED] ttm_resource ===================
[09:42:10] =================== ttm_tt (15 subtests) ===================
[09:42:10] ==================== ttm_tt_init_basic ====================
[09:42:10] [PASSED] Page-aligned size
[09:42:10] [PASSED] Extra pages requested
[09:42:10] ================ [PASSED] ttm_tt_init_basic ================
[09:42:10] [PASSED] ttm_tt_init_misaligned
[09:42:10] [PASSED] ttm_tt_fini_basic
[09:42:10] [PASSED] ttm_tt_fini_sg
[09:42:10] [PASSED] ttm_tt_fini_shmem
[09:42:10] [PASSED] ttm_tt_create_basic
[09:42:10] [PASSED] ttm_tt_create_invalid_bo_type
[09:42:10] [PASSED] ttm_tt_create_ttm_exists
[09:42:10] [PASSED] ttm_tt_create_failed
[09:42:10] [PASSED] ttm_tt_destroy_basic
[09:42:10] [PASSED] ttm_tt_populate_null_ttm
[09:42:10] [PASSED] ttm_tt_populate_populated_ttm
[09:42:10] [PASSED] ttm_tt_unpopulate_basic
[09:42:10] [PASSED] ttm_tt_unpopulate_empty_ttm
[09:42:10] [PASSED] ttm_tt_swapin_basic
[09:42:10] ===================== [PASSED] ttm_tt ======================
[09:42:10] =================== ttm_bo (14 subtests) ===================
[09:42:10] =========== ttm_bo_reserve_optimistic_no_ticket ===========
[09:42:10] [PASSED] Cannot be interrupted and sleeps
[09:42:10] [PASSED] Cannot be interrupted, locks straight away
[09:42:10] [PASSED] Can be interrupted, sleeps
[09:42:10] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[09:42:10] [PASSED] ttm_bo_reserve_locked_no_sleep
[09:42:10] [PASSED] ttm_bo_reserve_no_wait_ticket
[09:42:10] [PASSED] ttm_bo_reserve_double_resv
[09:42:10] [PASSED] ttm_bo_reserve_interrupted
[09:42:10] [PASSED] ttm_bo_reserve_deadlock
[09:42:10] [PASSED] ttm_bo_unreserve_basic
[09:42:10] [PASSED] ttm_bo_unreserve_pinned
[09:42:10] [PASSED] ttm_bo_unreserve_bulk
[09:42:10] [PASSED] ttm_bo_fini_basic
[09:42:10] [PASSED] ttm_bo_fini_shared_resv
[09:42:10] [PASSED] ttm_bo_pin_basic
[09:42:10] [PASSED] ttm_bo_pin_unpin_resource
[09:42:10] [PASSED] ttm_bo_multiple_pin_one_unpin
[09:42:10] ===================== [PASSED] ttm_bo ======================
[09:42:10] ============== ttm_bo_validate (21 subtests) ===============
[09:42:10] ============== ttm_bo_init_reserved_sys_man ===============
[09:42:10] [PASSED] Buffer object for userspace
[09:42:10] [PASSED] Kernel buffer object
[09:42:10] [PASSED] Shared buffer object
[09:42:10] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[09:42:10] ============== ttm_bo_init_reserved_mock_man ==============
[09:42:10] [PASSED] Buffer object for userspace
[09:42:10] [PASSED] Kernel buffer object
[09:42:10] [PASSED] Shared buffer object
[09:42:10] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[09:42:10] [PASSED] ttm_bo_init_reserved_resv
[09:42:10] ================== ttm_bo_validate_basic ==================
[09:42:10] [PASSED] Buffer object for userspace
[09:42:10] [PASSED] Kernel buffer object
[09:42:10] [PASSED] Shared buffer object
[09:42:10] ============== [PASSED] ttm_bo_validate_basic ==============
[09:42:10] [PASSED] ttm_bo_validate_invalid_placement
[09:42:10] ============= ttm_bo_validate_same_placement ==============
[09:42:10] [PASSED] System manager
[09:42:10] [PASSED] VRAM manager
[09:42:10] ========= [PASSED] ttm_bo_validate_same_placement ==========
[09:42:10] [PASSED] ttm_bo_validate_failed_alloc
[09:42:10] [PASSED] ttm_bo_validate_pinned
[09:42:10] [PASSED] ttm_bo_validate_busy_placement
[09:42:10] ================ ttm_bo_validate_multihop =================
[09:42:10] [PASSED] Buffer object for userspace
[09:42:10] [PASSED] Kernel buffer object
[09:42:10] [PASSED] Shared buffer object
[09:42:10] ============ [PASSED] ttm_bo_validate_multihop =============
[09:42:10] ========== ttm_bo_validate_no_placement_signaled ==========
[09:42:10] [PASSED] Buffer object in system domain, no page vector
[09:42:10] [PASSED] Buffer object in system domain with an existing page vector
[09:42:10] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[09:42:10] ======== ttm_bo_validate_no_placement_not_signaled ========
[09:42:10] [PASSED] Buffer object for userspace
[09:42:10] [PASSED] Kernel buffer object
[09:42:10] [PASSED] Shared buffer object
[09:42:10] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[09:42:10] [PASSED] ttm_bo_validate_move_fence_signaled
[09:42:10] ========= ttm_bo_validate_move_fence_not_signaled =========
[09:42:10] [PASSED] Waits for GPU
[09:42:10] [PASSED] Tries to lock straight away
[09:42:10] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[09:42:10] [PASSED] ttm_bo_validate_happy_evict
[09:42:10] [PASSED] ttm_bo_validate_all_pinned_evict
[09:42:10] [PASSED] ttm_bo_validate_allowed_only_evict
[09:42:10] [PASSED] ttm_bo_validate_deleted_evict
[09:42:10] [PASSED] ttm_bo_validate_busy_domain_evict
[09:42:10] [PASSED] ttm_bo_validate_evict_gutting
[09:42:10] [PASSED] ttm_bo_validate_recrusive_evict
[09:42:10] ================= [PASSED] ttm_bo_validate =================
[09:42:10] ============================================================
[09:42:10] Testing complete. Ran 101 tests: passed: 101
[09:42:10] Elapsed time: 11.393s total, 1.700s configuring, 9.476s building, 0.187s running
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
^ permalink raw reply	[flat|nested] 19+ messages in thread

* ✓ Xe.CI.BAT: success for drm/xe/svm: Add MMU notifier-based madvise autoreset on munmap
2026-02-19 9:13 [RFC 0/7] drm/xe/svm: Add MMU notifier-based madvise autoreset on munmap Arvind Yadav
` (8 preceding siblings ...)
2026-02-19 9:42 ` ✓ CI.KUnit: success " Patchwork
@ 2026-02-19 10:40 ` Patchwork
2026-02-19 13:04 ` ✗ Xe.CI.FULL: failure " Patchwork
10 siblings, 0 replies; 19+ messages in thread
From: Patchwork @ 2026-02-19 10:40 UTC (permalink / raw)
To: Arvind Yadav; +Cc: intel-xe
[-- Attachment #1: Type: text/plain, Size: 2395 bytes --]
== Series Details ==
Series: drm/xe/svm: Add MMU notifier-based madvise autoreset on munmap
URL : https://patchwork.freedesktop.org/series/161815/
State : success
== Summary ==
CI Bug Log - changes from xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857_BAT -> xe-pw-161815v1_BAT
====================================================
Summary
-------
**SUCCESS**
No regressions found.
Participating hosts (14 -> 14)
------------------------------
No changes in participating hosts
Known issues
------------
Here are the changes found in xe-pw-161815v1_BAT that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@xe_pat@pat-index-xe2@render:
- bat-ptl-vm: [PASS][1] -> [DMESG-WARN][2] ([Intel XE#7110]) +1 other test dmesg-warn
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/bat-ptl-vm/igt@xe_pat@pat-index-xe2@render.html
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/bat-ptl-vm/igt@xe_pat@pat-index-xe2@render.html
#### Possible fixes ####
* igt@xe_waitfence@engine:
- bat-dg2-oem2: [FAIL][3] ([Intel XE#6519]) -> [PASS][4]
[3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/bat-dg2-oem2/igt@xe_waitfence@engine.html
[4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/bat-dg2-oem2/igt@xe_waitfence@engine.html
* igt@xe_waitfence@reltime:
- bat-dg2-oem2: [FAIL][5] ([Intel XE#6520]) -> [PASS][6]
[5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/bat-dg2-oem2/igt@xe_waitfence@reltime.html
[6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/bat-dg2-oem2/igt@xe_waitfence@reltime.html
[Intel XE#6519]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6519
[Intel XE#6520]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6520
[Intel XE#7110]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7110
Build changes
-------------
* IGT: IGT_8760 -> IGT_8761
* Linux: xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857 -> xe-pw-161815v1
IGT_8760: 8760
IGT_8761: 8761
xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857: e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857
xe-pw-161815v1: 161815v1
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/index.html
[-- Attachment #2: Type: text/html, Size: 3030 bytes --]
^ permalink raw reply	[flat|nested] 19+ messages in thread

* ✗ Xe.CI.FULL: failure for drm/xe/svm: Add MMU notifier-based madvise autoreset on munmap
2026-02-19 9:13 [RFC 0/7] drm/xe/svm: Add MMU notifier-based madvise autoreset on munmap Arvind Yadav
` (9 preceding siblings ...)
2026-02-19 10:40 ` ✓ Xe.CI.BAT: " Patchwork
@ 2026-02-19 13:04 ` Patchwork
10 siblings, 0 replies; 19+ messages in thread
From: Patchwork @ 2026-02-19 13:04 UTC (permalink / raw)
To: Arvind Yadav; +Cc: intel-xe
[-- Attachment #1: Type: text/plain, Size: 53750 bytes --]
== Series Details ==
Series: drm/xe/svm: Add MMU notifier-based madvise autoreset on munmap
URL : https://patchwork.freedesktop.org/series/161815/
State : failure
== Summary ==
CI Bug Log - changes from xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857_FULL -> xe-pw-161815v1_FULL
====================================================
Summary
-------
**FAILURE**
Serious unknown changes coming with xe-pw-161815v1_FULL absolutely need to be
verified manually.
If you think the reported changes have nothing to do with the changes
introduced in xe-pw-161815v1_FULL, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
to document this new failure mode, which will reduce false positives in CI.
Participating hosts (2 -> 2)
------------------------------
No changes in participating hosts
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in xe-pw-161815v1_FULL:
### IGT changes ###
#### Possible regressions ####
* igt@kms_plane@pixel-format-4-tiled-mtl-rc-ccs-modifier:
- shard-lnl: NOTRUN -> [SKIP][1]
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-2/igt@kms_plane@pixel-format-4-tiled-mtl-rc-ccs-modifier.html
* igt@kms_plane@pixel-format-4-tiled-mtl-rc-ccs-modifier-source-clamping:
- shard-bmg: NOTRUN -> [SKIP][2] +4 other tests skip
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-2/igt@kms_plane@pixel-format-4-tiled-mtl-rc-ccs-modifier-source-clamping.html
* igt@kms_pm_dc@dc6-psr:
- shard-lnl: [PASS][3] -> [FAIL][4]
[3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-lnl-3/igt@kms_pm_dc@dc6-psr.html
[4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-8/igt@kms_pm_dc@dc6-psr.html
Known issues
------------
Here are the changes found in xe-pw-161815v1_FULL that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@kms_atomic_transition@plane-all-modeset-transition-fencing:
- shard-lnl: NOTRUN -> [SKIP][5] ([Intel XE#3279])
[5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-2/igt@kms_atomic_transition@plane-all-modeset-transition-fencing.html
* igt@kms_big_fb@4-tiled-32bpp-rotate-270:
- shard-bmg: NOTRUN -> [SKIP][6] ([Intel XE#2327]) +1 other test skip
[6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-2/igt@kms_big_fb@4-tiled-32bpp-rotate-270.html
* igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0-hflip-async-flip:
- shard-lnl: NOTRUN -> [SKIP][7] ([Intel XE#3658])
[7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-4/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0-hflip-async-flip.html
* igt@kms_big_fb@linear-max-hw-stride-64bpp-rotate-0-hflip:
- shard-bmg: NOTRUN -> [SKIP][8] ([Intel XE#7059])
[8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-5/igt@kms_big_fb@linear-max-hw-stride-64bpp-rotate-0-hflip.html
* igt@kms_big_fb@x-tiled-32bpp-rotate-90:
- shard-lnl: NOTRUN -> [SKIP][9] ([Intel XE#1407])
[9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-2/igt@kms_big_fb@x-tiled-32bpp-rotate-90.html
* igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-async-flip:
- shard-bmg: NOTRUN -> [SKIP][10] ([Intel XE#1124]) +9 other tests skip
[10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-9/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-180-async-flip.html
* igt@kms_big_fb@yf-tiled-addfb:
- shard-bmg: NOTRUN -> [SKIP][11] ([Intel XE#2328])
[11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-2/igt@kms_big_fb@yf-tiled-addfb.html
* igt@kms_big_fb@yf-tiled-addfb-size-offset-overflow:
- shard-bmg: NOTRUN -> [SKIP][12] ([Intel XE#607])
[12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-2/igt@kms_big_fb@yf-tiled-addfb-size-offset-overflow.html
* igt@kms_big_fb@yf-tiled-addfb-size-overflow:
- shard-lnl: NOTRUN -> [SKIP][13] ([Intel XE#1428])
[13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-2/igt@kms_big_fb@yf-tiled-addfb-size-overflow.html
- shard-bmg: NOTRUN -> [SKIP][14] ([Intel XE#610])
[14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-9/igt@kms_big_fb@yf-tiled-addfb-size-overflow.html
* igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip:
- shard-lnl: NOTRUN -> [SKIP][15] ([Intel XE#1124]) +4 other tests skip
[15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-6/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip.html
* igt@kms_bw@connected-linear-tiling-3-displays-3840x2160p:
- shard-lnl: NOTRUN -> [SKIP][16] ([Intel XE#2191])
[16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-3/igt@kms_bw@connected-linear-tiling-3-displays-3840x2160p.html
* igt@kms_bw@connected-linear-tiling-4-displays-3840x2160p:
- shard-bmg: NOTRUN -> [SKIP][17] ([Intel XE#2314] / [Intel XE#2894]) +1 other test skip
[17]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-5/igt@kms_bw@connected-linear-tiling-4-displays-3840x2160p.html
* igt@kms_bw@linear-tiling-1-displays-2560x1440p:
- shard-bmg: NOTRUN -> [SKIP][18] ([Intel XE#367]) +1 other test skip
[18]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-4/igt@kms_bw@linear-tiling-1-displays-2560x1440p.html
* igt@kms_bw@linear-tiling-4-displays-1920x1080p:
- shard-lnl: NOTRUN -> [SKIP][19] ([Intel XE#1512])
[19]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-4/igt@kms_bw@linear-tiling-4-displays-1920x1080p.html
* igt@kms_ccs@crc-primary-suspend-y-tiled-ccs:
- shard-bmg: NOTRUN -> [SKIP][20] ([Intel XE#3432]) +2 other tests skip
[20]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-9/igt@kms_ccs@crc-primary-suspend-y-tiled-ccs.html
* igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-rc-ccs:
- shard-bmg: NOTRUN -> [SKIP][21] ([Intel XE#2887]) +9 other tests skip
[21]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-5/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-rc-ccs.html
* igt@kms_ccs@crc-sprite-planes-basic-yf-tiled-ccs:
- shard-lnl: NOTRUN -> [SKIP][22] ([Intel XE#2887]) +3 other tests skip
[22]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-6/igt@kms_ccs@crc-sprite-planes-basic-yf-tiled-ccs.html
* igt@kms_ccs@random-ccs-data-4-tiled-lnl-ccs@pipe-c-dp-2:
- shard-bmg: NOTRUN -> [SKIP][23] ([Intel XE#2652] / [Intel XE#787]) +8 other tests skip
[23]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-3/igt@kms_ccs@random-ccs-data-4-tiled-lnl-ccs@pipe-c-dp-2.html
* igt@kms_chamelium_color@ctm-blue-to-red:
- shard-bmg: NOTRUN -> [SKIP][24] ([Intel XE#2325]) +2 other tests skip
[24]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-4/igt@kms_chamelium_color@ctm-blue-to-red.html
* igt@kms_chamelium_color@ctm-max:
- shard-lnl: NOTRUN -> [SKIP][25] ([Intel XE#306]) +1 other test skip
[25]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-1/igt@kms_chamelium_color@ctm-max.html
* igt@kms_chamelium_hpd@hdmi-hpd-fast:
- shard-lnl: NOTRUN -> [SKIP][26] ([Intel XE#373]) +2 other tests skip
[26]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-4/igt@kms_chamelium_hpd@hdmi-hpd-fast.html
* igt@kms_chamelium_hpd@hdmi-hpd-storm-disable:
- shard-bmg: NOTRUN -> [SKIP][27] ([Intel XE#2252]) +9 other tests skip
[27]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-7/igt@kms_chamelium_hpd@hdmi-hpd-storm-disable.html
* igt@kms_content_protection@atomic:
- shard-bmg: NOTRUN -> [FAIL][28] ([Intel XE#1178] / [Intel XE#3304]) +3 other tests fail
[28]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-7/igt@kms_content_protection@atomic.html
* igt@kms_content_protection@atomic-dpms@pipe-a-dp-1:
- shard-bmg: NOTRUN -> [FAIL][29] ([Intel XE#3304]) +1 other test fail
[29]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-5/igt@kms_content_protection@atomic-dpms@pipe-a-dp-1.html
* igt@kms_content_protection@dp-mst-lic-type-0:
- shard-bmg: NOTRUN -> [SKIP][30] ([Intel XE#2390] / [Intel XE#6974])
[30]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-1/igt@kms_content_protection@dp-mst-lic-type-0.html
* igt@kms_content_protection@dp-mst-type-0-hdcp14:
- shard-bmg: NOTRUN -> [SKIP][31] ([Intel XE#6974]) +1 other test skip
[31]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-7/igt@kms_content_protection@dp-mst-type-0-hdcp14.html
* igt@kms_content_protection@srm:
- shard-lnl: NOTRUN -> [SKIP][32] ([Intel XE#3278])
[32]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-8/igt@kms_content_protection@srm.html
* igt@kms_content_protection@uevent:
- shard-bmg: NOTRUN -> [FAIL][33] ([Intel XE#6707]) +1 other test fail
[33]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-2/igt@kms_content_protection@uevent.html
* igt@kms_cursor_crc@cursor-rapid-movement-128x42:
- shard-lnl: NOTRUN -> [SKIP][34] ([Intel XE#1424]) +1 other test skip
[34]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-4/igt@kms_cursor_crc@cursor-rapid-movement-128x42.html
- shard-bmg: NOTRUN -> [SKIP][35] ([Intel XE#2320]) +2 other tests skip
[35]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-1/igt@kms_cursor_crc@cursor-rapid-movement-128x42.html
* igt@kms_cursor_legacy@cursor-vs-flip-atomic-transitions-varying-size:
- shard-bmg: [PASS][36] -> [DMESG-WARN][37] ([Intel XE#5354])
[36]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-2/igt@kms_cursor_legacy@cursor-vs-flip-atomic-transitions-varying-size.html
[37]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-10/igt@kms_cursor_legacy@cursor-vs-flip-atomic-transitions-varying-size.html
* igt@kms_cursor_legacy@cursorb-vs-flipb-toggle:
- shard-lnl: NOTRUN -> [SKIP][38] ([Intel XE#309]) +1 other test skip
[38]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-1/igt@kms_cursor_legacy@cursorb-vs-flipb-toggle.html
* igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions:
- shard-lnl: NOTRUN -> [SKIP][39] ([Intel XE#323])
[39]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-4/igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions.html
* igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle:
- shard-bmg: NOTRUN -> [SKIP][40] ([Intel XE#2286])
[40]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-4/igt@kms_cursor_legacy@short-busy-flip-before-cursor-toggle.html
* igt@kms_dirtyfb@drrs-dirtyfb-ioctl:
- shard-bmg: NOTRUN -> [SKIP][41] ([Intel XE#1508])
[41]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-7/igt@kms_dirtyfb@drrs-dirtyfb-ioctl.html
* igt@kms_dirtyfb@fbc-dirtyfb-ioctl:
- shard-bmg: NOTRUN -> [SKIP][42] ([Intel XE#4210])
[42]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-8/igt@kms_dirtyfb@fbc-dirtyfb-ioctl.html
* igt@kms_dp_link_training@non-uhbr-sst:
- shard-lnl: NOTRUN -> [SKIP][43] ([Intel XE#4354])
[43]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-5/igt@kms_dp_link_training@non-uhbr-sst.html
* igt@kms_dp_linktrain_fallback@dp-fallback:
- shard-lnl: NOTRUN -> [SKIP][44] ([Intel XE#4294])
[44]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-1/igt@kms_dp_linktrain_fallback@dp-fallback.html
* igt@kms_dsc@dsc-basic:
- shard-bmg: NOTRUN -> [SKIP][45] ([Intel XE#2244])
[45]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-2/igt@kms_dsc@dsc-basic.html
* igt@kms_feature_discovery@display-3x:
- shard-lnl: NOTRUN -> [SKIP][46] ([Intel XE#703])
[46]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-5/igt@kms_feature_discovery@display-3x.html
* igt@kms_flip@2x-plain-flip-interruptible:
- shard-lnl: NOTRUN -> [SKIP][47] ([Intel XE#1421]) +1 other test skip
[47]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-6/igt@kms_flip@2x-plain-flip-interruptible.html
* igt@kms_flip@plain-flip-fb-recreate-interruptible@c-dp2:
- shard-bmg: [PASS][48] -> [ABORT][49] ([Intel XE#5545] / [Intel XE#6652]) +1 other test abort
[48]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-1/igt@kms_flip@plain-flip-fb-recreate-interruptible@c-dp2.html
[49]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-2/igt@kms_flip@plain-flip-fb-recreate-interruptible@c-dp2.html
* igt@kms_flip_scaled_crc@flip-32bpp-yftileccs-to-64bpp-yftile-upscaling:
- shard-bmg: NOTRUN -> [SKIP][50] ([Intel XE#7178]) +4 other tests skip
[50]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-4/igt@kms_flip_scaled_crc@flip-32bpp-yftileccs-to-64bpp-yftile-upscaling.html
* igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs-downscaling:
- shard-lnl: NOTRUN -> [SKIP][51] ([Intel XE#7178]) +1 other test skip
[51]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-2/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs-downscaling.html
* igt@kms_flip_scaled_crc@flip-64bpp-xtile-to-16bpp-xtile-downscaling:
- shard-lnl: NOTRUN -> [SKIP][52] ([Intel XE#1397] / [Intel XE#1745])
[52]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-3/igt@kms_flip_scaled_crc@flip-64bpp-xtile-to-16bpp-xtile-downscaling.html
* igt@kms_flip_scaled_crc@flip-64bpp-xtile-to-16bpp-xtile-downscaling@pipe-a-default-mode:
- shard-lnl: NOTRUN -> [SKIP][53] ([Intel XE#1397])
[53]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-3/igt@kms_flip_scaled_crc@flip-64bpp-xtile-to-16bpp-xtile-downscaling@pipe-a-default-mode.html
* igt@kms_flip_scaled_crc@flip-p016-linear-to-p016-linear-reflect-x:
- shard-bmg: NOTRUN -> [SKIP][54] ([Intel XE#7179])
[54]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-3/igt@kms_flip_scaled_crc@flip-p016-linear-to-p016-linear-reflect-x.html
* igt@kms_frontbuffer_tracking@drrs-1p-primscrn-shrfb-msflip-blt:
- shard-lnl: NOTRUN -> [SKIP][55] ([Intel XE#651]) +4 other tests skip
[55]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-2/igt@kms_frontbuffer_tracking@drrs-1p-primscrn-shrfb-msflip-blt.html
* igt@kms_frontbuffer_tracking@drrs-argb161616f-draw-render:
- shard-lnl: NOTRUN -> [SKIP][56] ([Intel XE#7061])
[56]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-1/igt@kms_frontbuffer_tracking@drrs-argb161616f-draw-render.html
* igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-render:
- shard-bmg: NOTRUN -> [SKIP][57] ([Intel XE#4141]) +13 other tests skip
[57]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-1/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-render.html
* igt@kms_frontbuffer_tracking@fbcdrrs-1p-offscreen-pri-indfb-draw-mmap-wc:
- shard-lnl: NOTRUN -> [SKIP][58] ([Intel XE#6312])
[58]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-6/igt@kms_frontbuffer_tracking@fbcdrrs-1p-offscreen-pri-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-pri-indfb-draw-blt:
- shard-bmg: NOTRUN -> [SKIP][59] ([Intel XE#2311]) +20 other tests skip
[59]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-10/igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-pri-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@fbcdrrs-argb161616f-draw-render:
- shard-bmg: NOTRUN -> [SKIP][60] ([Intel XE#7061]) +6 other tests skip
[60]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-2/igt@kms_frontbuffer_tracking@fbcdrrs-argb161616f-draw-render.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-cur-indfb-draw-mmap-wc:
- shard-lnl: NOTRUN -> [SKIP][61] ([Intel XE#656]) +15 other tests skip
[61]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-2/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-cur-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@plane-fbc-rte:
- shard-bmg: NOTRUN -> [SKIP][62] ([Intel XE#2350])
[62]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-6/igt@kms_frontbuffer_tracking@plane-fbc-rte.html
* igt@kms_frontbuffer_tracking@psr-2p-scndscrn-shrfb-pgflip-blt:
- shard-bmg: NOTRUN -> [SKIP][63] ([Intel XE#2313]) +31 other tests skip
[63]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-8/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-shrfb-pgflip-blt.html
* igt@kms_joiner@basic-force-big-joiner:
- shard-lnl: NOTRUN -> [SKIP][64] ([Intel XE#7086]) +1 other test skip
[64]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-3/igt@kms_joiner@basic-force-big-joiner.html
* igt@kms_plane_lowres@tiling-yf:
- shard-bmg: NOTRUN -> [SKIP][65] ([Intel XE#2393])
[65]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-6/igt@kms_plane_lowres@tiling-yf.html
* igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-75@pipe-b:
- shard-lnl: NOTRUN -> [SKIP][66] ([Intel XE#6886]) +3 other tests skip
[66]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-6/igt@kms_plane_scaling@planes-upscale-20x20-downscale-factor-0-75@pipe-b.html
* igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-5:
- shard-bmg: NOTRUN -> [SKIP][67] ([Intel XE#6886]) +9 other tests skip
[67]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-5/igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-5.html
* igt@kms_pm_backlight@brightness-with-dpms:
- shard-bmg: NOTRUN -> [SKIP][68] ([Intel XE#2938])
[68]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-9/igt@kms_pm_backlight@brightness-with-dpms.html
* igt@kms_pm_backlight@fade:
- shard-bmg: NOTRUN -> [SKIP][69] ([Intel XE#870])
[69]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-1/igt@kms_pm_backlight@fade.html
* igt@kms_pm_dc@dc3co-vpb-simulation:
- shard-bmg: NOTRUN -> [SKIP][70] ([Intel XE#2391])
[70]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-6/igt@kms_pm_dc@dc3co-vpb-simulation.html
* igt@kms_pm_dc@dc5-retention-flops:
- shard-lnl: NOTRUN -> [SKIP][71] ([Intel XE#3309])
[71]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-2/igt@kms_pm_dc@dc5-retention-flops.html
* igt@kms_pm_rpm@modeset-non-lpsp-stress-no-wait:
- shard-lnl: NOTRUN -> [SKIP][72] ([Intel XE#1439] / [Intel XE#3141])
[72]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-6/igt@kms_pm_rpm@modeset-non-lpsp-stress-no-wait.html
* igt@kms_psr2_sf@fbc-pr-cursor-plane-move-continuous-sf:
- shard-bmg: NOTRUN -> [SKIP][73] ([Intel XE#1406] / [Intel XE#1489]) +7 other tests skip
[73]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-3/igt@kms_psr2_sf@fbc-pr-cursor-plane-move-continuous-sf.html
* igt@kms_psr2_sf@fbc-psr2-overlay-primary-update-sf-dmg-area:
- shard-lnl: NOTRUN -> [SKIP][74] ([Intel XE#1406] / [Intel XE#2893] / [Intel XE#4608])
[74]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-8/igt@kms_psr2_sf@fbc-psr2-overlay-primary-update-sf-dmg-area.html
* igt@kms_psr2_sf@fbc-psr2-overlay-primary-update-sf-dmg-area@pipe-b-edp-1:
- shard-lnl: NOTRUN -> [SKIP][75] ([Intel XE#1406] / [Intel XE#4608]) +1 other test skip
[75]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-8/igt@kms_psr2_sf@fbc-psr2-overlay-primary-update-sf-dmg-area@pipe-b-edp-1.html
* igt@kms_psr2_sf@pr-cursor-plane-update-sf:
- shard-lnl: NOTRUN -> [SKIP][76] ([Intel XE#1406] / [Intel XE#2893])
[76]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-2/igt@kms_psr2_sf@pr-cursor-plane-update-sf.html
* igt@kms_psr2_su@page_flip-nv12:
- shard-bmg: NOTRUN -> [SKIP][77] ([Intel XE#1406] / [Intel XE#2387])
[77]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-4/igt@kms_psr2_su@page_flip-nv12.html
* igt@kms_psr@fbc-psr2-no-drrs@edp-1:
- shard-lnl: NOTRUN -> [SKIP][78] ([Intel XE#1406] / [Intel XE#4609])
[78]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-3/igt@kms_psr@fbc-psr2-no-drrs@edp-1.html
* igt@kms_psr@pr-primary-blt:
- shard-lnl: NOTRUN -> [SKIP][79] ([Intel XE#1406]) +3 other tests skip
[79]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-8/igt@kms_psr@pr-primary-blt.html
* igt@kms_psr@psr2-primary-page-flip:
- shard-bmg: NOTRUN -> [SKIP][80] ([Intel XE#1406] / [Intel XE#2234] / [Intel XE#2850]) +13 other tests skip
[80]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-2/igt@kms_psr@psr2-primary-page-flip.html
* igt@kms_psr_stress_test@flip-primary-invalidate-overlay:
- shard-bmg: NOTRUN -> [SKIP][81] ([Intel XE#1406] / [Intel XE#2414])
[81]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-8/igt@kms_psr_stress_test@flip-primary-invalidate-overlay.html
* igt@kms_rotation_crc@sprite-rotation-90:
- shard-bmg: NOTRUN -> [SKIP][82] ([Intel XE#3414] / [Intel XE#3904]) +3 other tests skip
[82]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-4/igt@kms_rotation_crc@sprite-rotation-90.html
* igt@kms_setmode@invalid-clone-single-crtc:
- shard-lnl: NOTRUN -> [SKIP][83] ([Intel XE#1435]) +1 other test skip
[83]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-3/igt@kms_setmode@invalid-clone-single-crtc.html
* igt@kms_sharpness_filter@filter-toggle:
- shard-bmg: NOTRUN -> [SKIP][84] ([Intel XE#6503])
[84]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-6/igt@kms_sharpness_filter@filter-toggle.html
* igt@kms_vrr@cmrr@pipe-a-edp-1:
- shard-lnl: [PASS][85] -> [FAIL][86] ([Intel XE#4459]) +1 other test fail
[85]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-lnl-6/igt@kms_vrr@cmrr@pipe-a-edp-1.html
[86]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-5/igt@kms_vrr@cmrr@pipe-a-edp-1.html
* igt@kms_vrr@seamless-rr-switch-drrs:
- shard-bmg: NOTRUN -> [SKIP][87] ([Intel XE#1499]) +2 other tests skip
[87]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-1/igt@kms_vrr@seamless-rr-switch-drrs.html
* igt@xe_compute@eu-busy-10s:
- shard-bmg: NOTRUN -> [SKIP][88] ([Intel XE#6599])
[88]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-3/igt@xe_compute@eu-busy-10s.html
* igt@xe_eudebug@basic-vm-bind-discovery:
- shard-lnl: NOTRUN -> [SKIP][89] ([Intel XE#4837])
[89]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-2/igt@xe_eudebug@basic-vm-bind-discovery.html
* igt@xe_eudebug@multigpu-basic-client:
- shard-bmg: NOTRUN -> [SKIP][90] ([Intel XE#4837]) +8 other tests skip
[90]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-6/igt@xe_eudebug@multigpu-basic-client.html
* igt@xe_eudebug_online@breakpoint-not-in-debug-mode:
- shard-bmg: NOTRUN -> [SKIP][91] ([Intel XE#4837] / [Intel XE#6665]) +2 other tests skip
[91]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-7/igt@xe_eudebug_online@breakpoint-not-in-debug-mode.html
* igt@xe_eudebug_online@interrupt-all-set-breakpoint:
- shard-lnl: NOTRUN -> [SKIP][92] ([Intel XE#4837] / [Intel XE#6665]) +1 other test skip
[92]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-8/igt@xe_eudebug_online@interrupt-all-set-breakpoint.html
* igt@xe_eudebug_online@pagefault-one-of-many:
- shard-lnl: NOTRUN -> [SKIP][93] ([Intel XE#6665])
[93]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-2/igt@xe_eudebug_online@pagefault-one-of-many.html
- shard-bmg: NOTRUN -> [SKIP][94] ([Intel XE#6665])
[94]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-7/igt@xe_eudebug_online@pagefault-one-of-many.html
* igt@xe_eudebug_online@pagefault-read-stress:
- shard-bmg: NOTRUN -> [SKIP][95] ([Intel XE#6665] / [Intel XE#6681])
[95]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-3/igt@xe_eudebug_online@pagefault-read-stress.html
* igt@xe_eudebug_sriov@deny-sriov:
- shard-lnl: NOTRUN -> [SKIP][96] ([Intel XE#4518])
[96]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-8/igt@xe_eudebug_sriov@deny-sriov.html
- shard-bmg: NOTRUN -> [SKIP][97] ([Intel XE#5793])
[97]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-1/igt@xe_eudebug_sriov@deny-sriov.html
* igt@xe_evict@evict-mixed-many-threads-small:
- shard-bmg: [PASS][98] -> [INCOMPLETE][99] ([Intel XE#6321])
[98]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-2/igt@xe_evict@evict-mixed-many-threads-small.html
[99]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-8/igt@xe_evict@evict-mixed-many-threads-small.html
* igt@xe_evict@evict-mixed-threads-large-multi-vm:
- shard-lnl: NOTRUN -> [SKIP][100] ([Intel XE#688]) +4 other tests skip
[100]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-8/igt@xe_evict@evict-mixed-threads-large-multi-vm.html
* igt@xe_evict@evict-small-external-multi-queue:
- shard-bmg: NOTRUN -> [SKIP][101] ([Intel XE#7140])
[101]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-2/igt@xe_evict@evict-small-external-multi-queue.html
* igt@xe_exec_basic@multigpu-no-exec-null-defer-mmap:
- shard-lnl: NOTRUN -> [SKIP][102] ([Intel XE#1392]) +3 other tests skip
[102]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-4/igt@xe_exec_basic@multigpu-no-exec-null-defer-mmap.html
* igt@xe_exec_basic@multigpu-once-basic-defer-bind:
- shard-bmg: NOTRUN -> [SKIP][103] ([Intel XE#2322]) +6 other tests skip
[103]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-2/igt@xe_exec_basic@multigpu-once-basic-defer-bind.html
* igt@xe_exec_fault_mode@many-execqueues-multi-queue-userptr:
- shard-bmg: NOTRUN -> [SKIP][104] ([Intel XE#7136]) +13 other tests skip
[104]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-8/igt@xe_exec_fault_mode@many-execqueues-multi-queue-userptr.html
* igt@xe_exec_fault_mode@twice-multi-queue-rebind-prefetch:
- shard-lnl: NOTRUN -> [SKIP][105] ([Intel XE#7136]) +4 other tests skip
[105]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-3/igt@xe_exec_fault_mode@twice-multi-queue-rebind-prefetch.html
* igt@xe_exec_multi_queue@few-execs-preempt-mode-dyn-priority-smem:
- shard-bmg: NOTRUN -> [SKIP][106] ([Intel XE#6874]) +27 other tests skip
[106]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-10/igt@xe_exec_multi_queue@few-execs-preempt-mode-dyn-priority-smem.html
* igt@xe_exec_multi_queue@max-queues-preempt-mode-fault-basic-smem:
- shard-lnl: NOTRUN -> [SKIP][107] ([Intel XE#6874]) +11 other tests skip
[107]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-8/igt@xe_exec_multi_queue@max-queues-preempt-mode-fault-basic-smem.html
* igt@xe_exec_system_allocator@many-stride-new-prefetch:
- shard-bmg: NOTRUN -> [INCOMPLETE][108] ([Intel XE#7098])
[108]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-3/igt@xe_exec_system_allocator@many-stride-new-prefetch.html
* igt@xe_exec_threads@threads-many-queues:
- shard-lnl: NOTRUN -> [FAIL][109] ([Intel XE#7166])
[109]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-6/igt@xe_exec_threads@threads-many-queues.html
* igt@xe_exec_threads@threads-multi-queue-mixed-userptr-invalidate-race:
- shard-bmg: NOTRUN -> [SKIP][110] ([Intel XE#7138]) +11 other tests skip
[110]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-7/igt@xe_exec_threads@threads-multi-queue-mixed-userptr-invalidate-race.html
* igt@xe_exec_threads@threads-multi-queue-rebind:
- shard-lnl: NOTRUN -> [SKIP][111] ([Intel XE#7138]) +3 other tests skip
[111]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-1/igt@xe_exec_threads@threads-multi-queue-rebind.html
* igt@xe_live_ktest@xe_eudebug:
- shard-lnl: NOTRUN -> [SKIP][112] ([Intel XE#2833])
[112]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-6/igt@xe_live_ktest@xe_eudebug.html
* igt@xe_module_load@load:
- shard-bmg: ([PASS][113], [PASS][114], [PASS][115], [PASS][116], [PASS][117], [PASS][118], [PASS][119], [PASS][120], [PASS][121], [PASS][122], [PASS][123], [PASS][124], [PASS][125], [PASS][126], [PASS][127], [PASS][128], [PASS][129], [PASS][130], [PASS][131], [PASS][132], [PASS][133], [PASS][134], [PASS][135], [PASS][136]) -> ([PASS][137], [PASS][138], [PASS][139], [PASS][140], [PASS][141], [PASS][142], [PASS][143], [PASS][144], [PASS][145], [PASS][146], [PASS][147], [PASS][148], [PASS][149], [PASS][150], [PASS][151], [PASS][152], [PASS][153], [PASS][154], [PASS][155], [SKIP][156], [PASS][157], [PASS][158], [PASS][159], [PASS][160], [PASS][161], [PASS][162]) ([Intel XE#2457])
[113]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-5/igt@xe_module_load@load.html
[114]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-5/igt@xe_module_load@load.html
[115]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-6/igt@xe_module_load@load.html
[116]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-4/igt@xe_module_load@load.html
[117]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-8/igt@xe_module_load@load.html
[118]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-7/igt@xe_module_load@load.html
[119]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-3/igt@xe_module_load@load.html
[120]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-3/igt@xe_module_load@load.html
[121]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-9/igt@xe_module_load@load.html
[122]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-9/igt@xe_module_load@load.html
[123]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-7/igt@xe_module_load@load.html
[124]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-2/igt@xe_module_load@load.html
[125]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-4/igt@xe_module_load@load.html
[126]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-3/igt@xe_module_load@load.html
[127]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-4/igt@xe_module_load@load.html
[128]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-1/igt@xe_module_load@load.html
[129]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-1/igt@xe_module_load@load.html
[130]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-7/igt@xe_module_load@load.html
[131]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-2/igt@xe_module_load@load.html
[132]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-10/igt@xe_module_load@load.html
[133]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-10/igt@xe_module_load@load.html
[134]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-2/igt@xe_module_load@load.html
[135]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-6/igt@xe_module_load@load.html
[136]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-8/igt@xe_module_load@load.html
[137]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-3/igt@xe_module_load@load.html
[138]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-3/igt@xe_module_load@load.html
[139]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-8/igt@xe_module_load@load.html
[140]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-8/igt@xe_module_load@load.html
[141]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-10/igt@xe_module_load@load.html
[142]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-10/igt@xe_module_load@load.html
[143]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-6/igt@xe_module_load@load.html
[144]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-2/igt@xe_module_load@load.html
[145]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-2/igt@xe_module_load@load.html
[146]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-2/igt@xe_module_load@load.html
[147]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-5/igt@xe_module_load@load.html
[148]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-1/igt@xe_module_load@load.html
[149]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-1/igt@xe_module_load@load.html
[150]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-4/igt@xe_module_load@load.html
[151]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-6/igt@xe_module_load@load.html
[152]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-1/igt@xe_module_load@load.html
[153]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-7/igt@xe_module_load@load.html
[154]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-5/igt@xe_module_load@load.html
[155]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-5/igt@xe_module_load@load.html
[156]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-5/igt@xe_module_load@load.html
[157]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-4/igt@xe_module_load@load.html
[158]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-4/igt@xe_module_load@load.html
[159]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-7/igt@xe_module_load@load.html
[160]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-9/igt@xe_module_load@load.html
[161]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-9/igt@xe_module_load@load.html
[162]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-6/igt@xe_module_load@load.html
* igt@xe_multigpu_svm@mgpu-coherency-prefetch:
- shard-bmg: NOTRUN -> [SKIP][163] ([Intel XE#6964]) +2 other tests skip
[163]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-3/igt@xe_multigpu_svm@mgpu-coherency-prefetch.html
* igt@xe_pmu@all-fn-engine-activity-load:
- shard-lnl: NOTRUN -> [SKIP][164] ([Intel XE#4650])
[164]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-8/igt@xe_pmu@all-fn-engine-activity-load.html
* igt@xe_pxp@pxp-termination-key-update-post-suspend:
- shard-bmg: NOTRUN -> [SKIP][165] ([Intel XE#4733]) +1 other test skip
[165]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-9/igt@xe_pxp@pxp-termination-key-update-post-suspend.html
* igt@xe_query@multigpu-query-hwconfig:
- shard-lnl: NOTRUN -> [SKIP][166] ([Intel XE#944])
[166]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-6/igt@xe_query@multigpu-query-hwconfig.html
* igt@xe_query@multigpu-query-invalid-uc-fw-version-mbz:
- shard-bmg: NOTRUN -> [SKIP][167] ([Intel XE#944]) +2 other tests skip
[167]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-4/igt@xe_query@multigpu-query-invalid-uc-fw-version-mbz.html
* igt@xe_sriov_admin@exec-quantum-write-readback-vfs-disabled:
- shard-lnl: NOTRUN -> [SKIP][168] ([Intel XE#7174])
[168]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-2/igt@xe_sriov_admin@exec-quantum-write-readback-vfs-disabled.html
* igt@xe_sriov_flr@flr-twice:
- shard-bmg: [PASS][169] -> [FAIL][170] ([Intel XE#6569])
[169]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-9/igt@xe_sriov_flr@flr-twice.html
[170]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-10/igt@xe_sriov_flr@flr-twice.html
#### Possible fixes ####
* igt@kms_async_flips@async-flip-with-page-flip-events-linear:
- shard-lnl: [FAIL][171] ([Intel XE#5993]) -> [PASS][172] +3 other tests pass
[171]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-lnl-2/igt@kms_async_flips@async-flip-with-page-flip-events-linear.html
[172]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-5/igt@kms_async_flips@async-flip-with-page-flip-events-linear.html
* igt@kms_async_flips@async-flip-with-page-flip-events-linear-atomic@pipe-c-edp-1:
- shard-lnl: [FAIL][173] ([Intel XE#6054]) -> [PASS][174] +3 other tests pass
[173]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-lnl-7/igt@kms_async_flips@async-flip-with-page-flip-events-linear-atomic@pipe-c-edp-1.html
[174]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-3/igt@kms_async_flips@async-flip-with-page-flip-events-linear-atomic@pipe-c-edp-1.html
* igt@kms_cursor_legacy@flip-vs-cursor-atomic:
- shard-bmg: [FAIL][175] ([Intel XE#6715]) -> [PASS][176]
[175]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-4/igt@kms_cursor_legacy@flip-vs-cursor-atomic.html
[176]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-10/igt@kms_cursor_legacy@flip-vs-cursor-atomic.html
* igt@kms_flip@flip-vs-expired-vblank-interruptible:
- shard-lnl: [FAIL][177] ([Intel XE#301] / [Intel XE#3149]) -> [PASS][178] +1 other test pass
[177]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-lnl-6/igt@kms_flip@flip-vs-expired-vblank-interruptible.html
[178]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-lnl-4/igt@kms_flip@flip-vs-expired-vblank-interruptible.html
* igt@kms_flip@flip-vs-suspend@c-hdmi-a3:
- shard-bmg: [INCOMPLETE][179] ([Intel XE#2049] / [Intel XE#2597]) -> [PASS][180] +1 other test pass
[179]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-1/igt@kms_flip@flip-vs-suspend@c-hdmi-a3.html
[180]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-4/igt@kms_flip@flip-vs-suspend@c-hdmi-a3.html
* igt@kms_hdmi_inject@inject-audio:
- shard-bmg: [SKIP][181] -> [PASS][182]
[181]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-4/igt@kms_hdmi_inject@inject-audio.html
[182]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-9/igt@kms_hdmi_inject@inject-audio.html
* igt@kms_hdr@invalid-hdr:
- shard-bmg: [SKIP][183] ([Intel XE#1503]) -> [PASS][184]
[183]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-4/igt@kms_hdr@invalid-hdr.html
[184]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-3/igt@kms_hdr@invalid-hdr.html
* igt@xe_exec_system_allocator@partial-atomic-middle-remap-no-cpu-fault:
- shard-bmg: [FAIL][185] ([Intel XE#5625]) -> [PASS][186]
[185]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-2/igt@xe_exec_system_allocator@partial-atomic-middle-remap-no-cpu-fault.html
[186]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-2/igt@xe_exec_system_allocator@partial-atomic-middle-remap-no-cpu-fault.html
* igt@xe_fault_injection@inject-fault-probe-function-xe_pcode_probe_early:
- shard-bmg: [ABORT][187] -> [PASS][188]
[187]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-3/igt@xe_fault_injection@inject-fault-probe-function-xe_pcode_probe_early.html
[188]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-8/igt@xe_fault_injection@inject-fault-probe-function-xe_pcode_probe_early.html
#### Warnings ####
* igt@kms_tiled_display@basic-test-pattern:
- shard-bmg: [SKIP][189] ([Intel XE#2426]) -> [FAIL][190] ([Intel XE#1729])
[189]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-7/igt@kms_tiled_display@basic-test-pattern.html
[190]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-6/igt@kms_tiled_display@basic-test-pattern.html
* igt@kms_tiled_display@basic-test-pattern-with-chamelium:
- shard-bmg: [SKIP][191] ([Intel XE#2509]) -> [SKIP][192] ([Intel XE#2426])
[191]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-10/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html
[192]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-7/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html
* igt@xe_fault_injection@probe-fail-guc-xe_guc_ct_send_recv:
- shard-bmg: [ABORT][193] ([Intel XE#5466] / [Intel XE#6652]) -> [ABORT][194] ([Intel XE#5466])
[193]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857/shard-bmg-8/igt@xe_fault_injection@probe-fail-guc-xe_guc_ct_send_recv.html
[194]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/shard-bmg-5/igt@xe_fault_injection@probe-fail-guc-xe_guc_ct_send_recv.html
[Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124
[Intel XE#1178]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1178
[Intel XE#1392]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1392
[Intel XE#1397]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1397
[Intel XE#1406]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1406
[Intel XE#1407]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1407
[Intel XE#1421]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1421
[Intel XE#1424]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1424
[Intel XE#1428]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1428
[Intel XE#1435]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1435
[Intel XE#1439]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1439
[Intel XE#1489]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1489
[Intel XE#1499]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1499
[Intel XE#1503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1503
[Intel XE#1508]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1508
[Intel XE#1512]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1512
[Intel XE#1729]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1729
[Intel XE#1745]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1745
[Intel XE#2049]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2049
[Intel XE#2191]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2191
[Intel XE#2234]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2234
[Intel XE#2244]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2244
[Intel XE#2252]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2252
[Intel XE#2286]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2286
[Intel XE#2311]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2311
[Intel XE#2313]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2313
[Intel XE#2314]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2314
[Intel XE#2320]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2320
[Intel XE#2322]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2322
[Intel XE#2325]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2325
[Intel XE#2327]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2327
[Intel XE#2328]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2328
[Intel XE#2350]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2350
[Intel XE#2387]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2387
[Intel XE#2390]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2390
[Intel XE#2391]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2391
[Intel XE#2393]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2393
[Intel XE#2414]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2414
[Intel XE#2426]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2426
[Intel XE#2457]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2457
[Intel XE#2509]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2509
[Intel XE#2597]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2597
[Intel XE#2652]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2652
[Intel XE#2833]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2833
[Intel XE#2850]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2850
[Intel XE#2887]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2887
[Intel XE#2893]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2893
[Intel XE#2894]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2894
[Intel XE#2938]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2938
[Intel XE#301]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/301
[Intel XE#306]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/306
[Intel XE#309]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/309
[Intel XE#3141]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3141
[Intel XE#3149]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3149
[Intel XE#323]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/323
[Intel XE#3278]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3278
[Intel XE#3279]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3279
[Intel XE#3304]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3304
[Intel XE#3309]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3309
[Intel XE#3414]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3414
[Intel XE#3432]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3432
[Intel XE#3658]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3658
[Intel XE#367]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/367
[Intel XE#373]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/373
[Intel XE#3904]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3904
[Intel XE#4141]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4141
[Intel XE#4210]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4210
[Intel XE#4294]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4294
[Intel XE#4354]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4354
[Intel XE#4459]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4459
[Intel XE#4518]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4518
[Intel XE#4608]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4608
[Intel XE#4609]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4609
[Intel XE#4650]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4650
[Intel XE#4733]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4733
[Intel XE#4837]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4837
[Intel XE#5354]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5354
[Intel XE#5466]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5466
[Intel XE#5545]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5545
[Intel XE#5625]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5625
[Intel XE#5793]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5793
[Intel XE#5993]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5993
[Intel XE#6054]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6054
[Intel XE#607]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/607
[Intel XE#610]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/610
[Intel XE#6312]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6312
[Intel XE#6321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6321
[Intel XE#6503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6503
[Intel XE#651]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/651
[Intel XE#656]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/656
[Intel XE#6569]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6569
[Intel XE#6599]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6599
[Intel XE#6652]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6652
[Intel XE#6665]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6665
[Intel XE#6681]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6681
[Intel XE#6707]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6707
[Intel XE#6715]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6715
[Intel XE#6874]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6874
[Intel XE#688]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/688
[Intel XE#6886]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6886
[Intel XE#6964]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6964
[Intel XE#6974]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6974
[Intel XE#703]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/703
[Intel XE#7059]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7059
[Intel XE#7061]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7061
[Intel XE#7086]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7086
[Intel XE#7098]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7098
[Intel XE#7136]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7136
[Intel XE#7138]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7138
[Intel XE#7140]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7140
[Intel XE#7166]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7166
[Intel XE#7174]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7174
[Intel XE#7178]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7178
[Intel XE#7179]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/7179
[Intel XE#787]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/787
[Intel XE#870]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/870
[Intel XE#944]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/944
Build changes
-------------
* IGT: IGT_8760 -> IGT_8761
* Linux: xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857 -> xe-pw-161815v1
IGT_8760: 8760
IGT_8761: 8761
xe-4574-e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857: e1032fc6a7b99e9b2c4c97829d1fb0dde9d61857
xe-pw-161815v1: 161815v1
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-161815v1/index.html