* [PATCH i-g-t v8 0/5] Madvise Tests in IGT
@ 2025-09-02 16:30 nishit.sharma
2025-09-02 16:30 ` [PATCH i-g-t v8 1/5] DO-NOT-MERGE: include/drm-uapi: Add drm_xe_madvise structure nishit.sharma
` (7 more replies)
0 siblings, 8 replies; 10+ messages in thread
From: nishit.sharma @ 2025-09-02 16:30 UTC (permalink / raw)
To: igt-dev, pravalika.gurram, himal.prasad.ghimiray, matthew.brost,
nishit.sharma
From: Nishit Sharma <nishit.sharma@intel.com>
Revision 1:
Added madvise tests in IGT which validate different features depending on
the attributes passed. Madvise tests for atomic operations and preferred
loc have been added and validated. The madvise tests are invoked from the
different struct section entries and appear as madvise-<test-name> in the
list of subtests.
ver2:
- Added back a subtest that was deleted due to rebasing
ver3:
- Added a variable deleted during rebase.
ver4:
- Removed the redundant loop for the multi-vma test. Instead added a
multi-vma check that manipulates only the address and batch address,
with the remaining execution following the default flow.
- Passed region DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC in the prefetch tests.
ver5:
- Incorporated review comments
- Removed a section of the test that was not required
- Added subtest descriptions
- Tests executed on the latest drm-tip
ver6:
- Incorporated review comments
- Removed dead code checked in due to rebasing
- Added new subtests in the section list
- Modified madvise subtests by adding flags
- Added descriptions of the new subtests
- Called helper functions from test_exec which perform the madvise operations
ver7:
- Fixed code that called the madvise op only for device memory
- Fixed warnings
ver8:
- Code cleanup
Nishit Sharma (5):
DO-NOT-MERGE: include/drm-uapi: Add drm_xe_madvise structure
lib/xe: Add xe_vm_madvise ioctl support
lib/xe: Add Helper to get memory attributes
tests/intel/xe_exec_system_allocator: Add madvise-swizzle test
tests/intel/xe_exec_system_allocator: Add atomic_batch test in IGT
include/drm-uapi/xe_drm.h | 289 ++++++++++++++-
lib/xe/xe_ioctl.c | 149 ++++++++
lib/xe/xe_ioctl.h | 9 +-
tests/intel/xe_exec_system_allocator.c | 465 +++++++++++++++++++++++--
4 files changed, 867 insertions(+), 45 deletions(-)
--
2.43.0
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH i-g-t v8 1/5] DO-NOT-MERGE: include/drm-uapi: Add drm_xe_madvise structure
2025-09-02 16:30 [PATCH i-g-t v8 0/5] Madvise Tests in IGT nishit.sharma
@ 2025-09-02 16:30 ` nishit.sharma
2025-09-02 16:30 ` [PATCH i-g-t v8 2/5] lib/xe: Add xe_vm_madvise ioctl support nishit.sharma
` (6 subsequent siblings)
7 siblings, 0 replies; 10+ messages in thread
From: nishit.sharma @ 2025-09-02 16:30 UTC (permalink / raw)
To: igt-dev, pravalika.gurram, himal.prasad.ghimiray, matthew.brost,
nishit.sharma
From: Nishit Sharma <nishit.sharma@intel.com>
Defined the IOCTL number for the madvise operation. Added struct
drm_xe_madvise, which is passed as input to the MADVISE ioctl.
Signed-off-by: Nishit Sharma <nishit.sharma@intel.com>
---
include/drm-uapi/xe_drm.h | 289 ++++++++++++++++++++++++++++++++++++--
1 file changed, 281 insertions(+), 8 deletions(-)
diff --git a/include/drm-uapi/xe_drm.h b/include/drm-uapi/xe_drm.h
index a52f95593..e9a27a844 100644
--- a/include/drm-uapi/xe_drm.h
+++ b/include/drm-uapi/xe_drm.h
@@ -3,8 +3,8 @@
* Copyright © 2023 Intel Corporation
*/
-#ifndef _XE_DRM_H_
-#define _XE_DRM_H_
+#ifndef _UAPI_XE_DRM_H_
+#define _UAPI_XE_DRM_H_
#include "drm.h"
@@ -81,6 +81,8 @@ extern "C" {
* - &DRM_IOCTL_XE_EXEC
* - &DRM_IOCTL_XE_WAIT_USER_FENCE
* - &DRM_IOCTL_XE_OBSERVATION
+ * - &DRM_IOCTL_XE_MADVISE
+ * - &DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS
*/
/*
@@ -102,6 +104,8 @@ extern "C" {
#define DRM_XE_EXEC 0x09
#define DRM_XE_WAIT_USER_FENCE 0x0a
#define DRM_XE_OBSERVATION 0x0b
+#define DRM_XE_MADVISE 0x0c
+#define DRM_XE_VM_QUERY_MEM_REGION_ATTRS 0x0d
/* Must be kept compact -- no holes */
@@ -117,6 +121,8 @@ extern "C" {
#define DRM_IOCTL_XE_EXEC DRM_IOW(DRM_COMMAND_BASE + DRM_XE_EXEC, struct drm_xe_exec)
#define DRM_IOCTL_XE_WAIT_USER_FENCE DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_WAIT_USER_FENCE, struct drm_xe_wait_user_fence)
#define DRM_IOCTL_XE_OBSERVATION DRM_IOW(DRM_COMMAND_BASE + DRM_XE_OBSERVATION, struct drm_xe_observation_param)
+#define DRM_IOCTL_XE_MADVISE DRM_IOW(DRM_COMMAND_BASE + DRM_XE_MADVISE, struct drm_xe_madvise)
+#define DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS DRM_IOWR(DRM_COMMAND_BASE + DRM_XE_VM_QUERY_MEM_REGION_ATTRS, struct drm_xe_vm_query_mem_range_attr)
/**
* DOC: Xe IOCTL Extensions
@@ -134,7 +140,7 @@ extern "C" {
* redefine the interface more easily than an ever growing struct of
* increasing complexity, and for large parts of that interface to be
* entirely optional. The downside is more pointer chasing; chasing across
- * the boundary with pointers encapsulated inside u64.
+ * the __user boundary with pointers encapsulated inside u64.
*
* Example chaining:
*
@@ -925,9 +931,9 @@ struct drm_xe_gem_mmap_offset {
* - %DRM_XE_VM_CREATE_FLAG_LR_MODE - An LR, or Long Running VM accepts
* exec submissions to its exec_queues that don't have an upper time
* limit on the job execution time. But exec submissions to these
- * don't allow any of the flags DRM_XE_SYNC_FLAG_SYNCOBJ,
- * DRM_XE_SYNC_FLAG_TIMELINE_SYNCOBJ, DRM_XE_SYNC_FLAG_DMA_BUF,
- * used as out-syncobjs, that is, together with DRM_XE_SYNC_FLAG_SIGNAL.
+ * don't allow any of the sync types DRM_XE_SYNC_TYPE_SYNCOBJ,
+ * DRM_XE_SYNC_TYPE_TIMELINE_SYNCOBJ, used as out-syncobjs, that is,
+ * together with sync flag DRM_XE_SYNC_FLAG_SIGNAL.
* LR VMs can be created in recoverable page-fault mode using
* DRM_XE_VM_CREATE_FLAG_FAULT_MODE, if the device supports it.
* If that flag is omitted, the UMD can not rely on the slightly
@@ -1003,6 +1009,10 @@ struct drm_xe_vm_destroy {
* valid on VMs with DRM_XE_VM_CREATE_FLAG_FAULT_MODE set. The CPU address
* mirror flag are only valid for DRM_XE_VM_BIND_OP_MAP operations, the BO
* handle MBZ, and the BO offset MBZ.
+ *
+ * The @prefetch_mem_region_instance for %DRM_XE_VM_BIND_OP_PREFETCH can also be:
+ * - %DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC, which ensures prefetching occurs in
+ * the memory region advised by madvise.
*/
struct drm_xe_vm_bind_op {
/** @extensions: Pointer to the first extension struct, if any */
@@ -1108,6 +1118,7 @@ struct drm_xe_vm_bind_op {
/** @flags: Bind flags */
__u32 flags;
+#define DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC -1
/**
* @prefetch_mem_region_instance: Memory region to prefetch VMA to.
* It is a region instance, not a mask.
@@ -1394,7 +1405,7 @@ struct drm_xe_sync {
/**
* @timeline_value: Input for the timeline sync object. Needs to be
- * different than 0 when used with %DRM_XE_SYNC_FLAG_TIMELINE_SYNCOBJ.
+ * different than 0 when used with %DRM_XE_SYNC_TYPE_TIMELINE_SYNCOBJ.
*/
__u64 timeline_value;
@@ -1974,8 +1985,270 @@ struct drm_xe_query_eu_stall {
__u64 sampling_rates[];
};
+/**
+ * struct drm_xe_madvise - Input of &DRM_IOCTL_XE_MADVISE
+ *
+ * This structure is used to set memory attributes for a virtual address range
+ * in a VM. The type of attribute is specified by @type, and the corresponding
+ * union member is used to provide additional parameters for @type.
+ *
+ * Supported attribute types:
+ * - DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC: Set preferred memory location.
+ * - DRM_XE_MEM_RANGE_ATTR_ATOMIC: Set atomic access policy.
+ * - DRM_XE_MEM_RANGE_ATTR_PAT: Set page attribute table index.
+ *
+ * Example:
+ *
+ * .. code-block:: C
+ *
+ * struct drm_xe_madvise madvise = {
+ * .vm_id = vm_id,
+ * .start = 0x100000,
+ * .range = 0x2000,
+ * .type = DRM_XE_MEM_RANGE_ATTR_ATOMIC,
+ * .atomic_val = DRM_XE_ATOMIC_DEVICE,
+ * .pad = 0,
+ * };
+ *
+ * ioctl(fd, DRM_IOCTL_XE_MADVISE, &madvise);
+ *
+ */
+struct drm_xe_madvise {
+ /** @extensions: Pointer to the first extension struct, if any */
+ __u64 extensions;
+
+ /** @start: start of the virtual address range */
+ __u64 start;
+
+ /** @range: size of the virtual address range */
+ __u64 range;
+
+ /** @vm_id: vm_id of the virtual range */
+ __u32 vm_id;
+
+#define DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC 0
+#define DRM_XE_MEM_RANGE_ATTR_ATOMIC 1
+#define DRM_XE_MEM_RANGE_ATTR_PAT 2
+ /** @type: type of attribute */
+ __u32 type;
+
+ union {
+ /**
+ * @preferred_mem_loc: preferred memory location
+ *
+ * Used when @type == DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC
+ *
+ * Supported values for @preferred_mem_loc.devmem_fd:
+ * - DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE: set vram of faulting tile as preferred loc
+ * - DRM_XE_PREFERRED_LOC_DEFAULT_SYSTEM: set smem as preferred loc
+ *
+ * Supported values for @preferred_mem_loc.migration_policy:
+ * - DRM_XE_MIGRATE_ALL_PAGES
+ * - DRM_XE_MIGRATE_ONLY_SYSTEM_PAGES
+ */
+ struct {
+#define DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE 0
+#define DRM_XE_PREFERRED_LOC_DEFAULT_SYSTEM -1
+ /** @preferred_mem_loc.devmem_fd: fd for preferred loc */
+ __u32 devmem_fd;
+
+#define DRM_XE_MIGRATE_ALL_PAGES 0
+#define DRM_XE_MIGRATE_ONLY_SYSTEM_PAGES 1
+ /** @preferred_mem_loc.migration_policy: Page migration policy */
+ __u16 migration_policy;
+
+ /** @preferred_mem_loc.pad: MBZ */
+ __u16 pad;
+
+ /** @preferred_mem_loc.reserved: Reserved */
+ __u64 reserved;
+ } preferred_mem_loc;
+
+ /**
+ * @atomic: Atomic access policy
+ *
+ * Used when @type == DRM_XE_MEM_RANGE_ATTR_ATOMIC.
+ *
+ * Supported values for @atomic.val:
+ * - DRM_XE_ATOMIC_UNDEFINED: Undefined or default behaviour
+ * Support both GPU and CPU atomic operations for system allocator
+ * Support GPU atomic operations for normal(bo) allocator
+ * - DRM_XE_ATOMIC_DEVICE: Support GPU atomic operations
+ * - DRM_XE_ATOMIC_GLOBAL: Support both GPU and CPU atomic operations
+ * - DRM_XE_ATOMIC_CPU: Support CPU atomic
+ */
+ struct {
+#define DRM_XE_ATOMIC_UNDEFINED 0
+#define DRM_XE_ATOMIC_DEVICE 1
+#define DRM_XE_ATOMIC_GLOBAL 2
+#define DRM_XE_ATOMIC_CPU 3
+ /** @atomic.val: value of atomic operation */
+ __u32 val;
+
+ /** @atomic.pad: MBZ */
+ __u32 pad;
+
+ /** @atomic.reserved: Reserved */
+ __u64 reserved;
+ } atomic;
+
+ /**
+ * @pat_index: Page attribute table index
+ *
+ * Used when @type == DRM_XE_MEM_RANGE_ATTR_PAT.
+ */
+ struct {
+ /** @pat_index.val: PAT index value */
+ __u32 val;
+
+ /** @pat_index.pad: MBZ */
+ __u32 pad;
+
+ /** @pat_index.reserved: Reserved */
+ __u64 reserved;
+ } pat_index;
+ };
+
+ /** @reserved: Reserved */
+ __u64 reserved[2];
+};
+
+/**
+ * struct drm_xe_mem_range_attr - Output of &DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS
+ *
+ * This structure is provided by userspace and filled by KMD in response to the
+ * DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS ioctl. It describes memory attributes of
+ * memory ranges within a user-specified address range in a VM.
+ *
+ * The structure includes information such as atomic access policy,
+ * page attribute table (PAT) index, and preferred memory location.
+ * Userspace allocates an array of these structures and passes a pointer to the
+ * ioctl to retrieve attributes for each memory range.
+ *
+ * @extensions: Pointer to the first extension struct, if any
+ * @start: Start address of the memory range
+ * @end: End address of the virtual memory range
+ *
+ */
+struct drm_xe_mem_range_attr {
+ /** @extensions: Pointer to the first extension struct, if any */
+ __u64 extensions;
+
+ /** @start: start of the memory range */
+ __u64 start;
+
+ /** @end: end of the memory range */
+ __u64 end;
+
+ /** @preferred_mem_loc: preferred memory location */
+ struct {
+ /** @preferred_mem_loc.devmem_fd: fd for preferred loc */
+ __u32 devmem_fd;
+
+ /** @preferred_mem_loc.migration_policy: Page migration policy */
+ __u32 migration_policy;
+ } preferred_mem_loc;
+
+ struct {
+ /** @atomic.val: atomic attribute */
+ __u32 val;
+
+ /** @atomic.reserved: Reserved */
+ __u32 reserved;
+ } atomic;
+
+ struct {
+ /** @pat_index.val: PAT index */
+ __u32 val;
+
+ /** @pat_index.reserved: Reserved */
+ __u32 reserved;
+ } pat_index;
+
+ /** @reserved: Reserved */
+ __u64 reserved[2];
+};
+
+/**
+ * struct drm_xe_vm_query_mem_range_attr - Input of &DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS
+ *
+ * This structure is used to query memory attributes of memory regions
+ * within a user specified address range in a VM. It provides detailed
+ * information about each memory range, including atomic access policy,
+ * page attribute table (PAT) index, and preferred memory location.
+ *
+ * Userspace first calls the ioctl with @num_mem_ranges = 0,
+ * @sizeof_mem_ranges_attr = 0 and @vector_of_vma_mem_attr = NULL to retrieve
+ * the number of memory regions and size of each memory range attribute.
+ * Then, it allocates a buffer of that size and calls the ioctl again to fill
+ * the buffer with memory range attributes.
+ *
+ * If the second call fails with -ENOSPC, the memory ranges changed between
+ * the two calls; retry with @num_mem_ranges = 0,
+ * @sizeof_mem_ranges_attr = 0 and @vector_of_vma_mem_attr = NULL, followed
+ * by the second ioctl call again.
+ *
+ * Example:
+ *
+ * .. code-block:: C
+ *
+ * struct drm_xe_vm_query_mem_range_attr query = {
+ * .vm_id = vm_id,
+ * .start = 0x100000,
+ * .range = 0x2000,
+ * };
+ *
+ * // First ioctl call to get num of mem regions and sizeof each attribute
+ * ioctl(fd, DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS, &query);
+ *
+ * // Allocate buffer for the memory region attributes
+ * void *ptr = malloc(query.num_mem_ranges * query.sizeof_mem_range_attr);
+ *
+ * query.vector_of_mem_attr = (uintptr_t)ptr;
+ *
+ * // Second ioctl call to actually fill the memory attributes
+ * ioctl(fd, DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS, &query);
+ *
+ * // Iterate over the returned memory region attributes
+ * for (unsigned int i = 0; i < query.num_mem_ranges; ++i) {
+ * struct drm_xe_mem_range_attr *attr = (struct drm_xe_mem_range_attr *)ptr;
+ *
+ * // Do something with attr
+ *
+ * // Move pointer by one entry
+ * ptr += query.sizeof_mem_range_attr;
+ * }
+ *
+ * free(ptr);
+ */
+struct drm_xe_vm_query_mem_range_attr {
+ /** @extensions: Pointer to the first extension struct, if any */
+ __u64 extensions;
+
+ /** @vm_id: vm_id of the virtual range */
+ __u32 vm_id;
+
+ /** @num_mem_ranges: number of mem_ranges in range */
+ __u32 num_mem_ranges;
+
+ /** @start: start of the virtual address range */
+ __u64 start;
+
+ /** @range: size of the virtual address range */
+ __u64 range;
+
+ /** @sizeof_mem_range_attr: size of struct drm_xe_mem_range_attr */
+ __u64 sizeof_mem_range_attr;
+
+ /** @vector_of_mem_attr: userptr to array of struct drm_xe_mem_range_attr */
+ __u64 vector_of_mem_attr;
+
+ /** @reserved: Reserved */
+ __u64 reserved[2];
+
+};
+
#if defined(__cplusplus)
}
#endif
-#endif /* _XE_DRM_H_ */
+#endif /* _UAPI_XE_DRM_H_ */
--
2.43.0
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH i-g-t v8 2/5] lib/xe: Add xe_vm_madvise ioctl support
2025-09-02 16:30 [PATCH i-g-t v8 0/5] Madvise Tests in IGT nishit.sharma
2025-09-02 16:30 ` [PATCH i-g-t v8 1/5] DO-NOT-MERGE: include/drm-uapi: Add drm_xe_madvise structure nishit.sharma
@ 2025-09-02 16:30 ` nishit.sharma
2025-09-02 16:30 ` [PATCH i-g-t v8 3/5] lib/xe: Add Helper to get memory attributes nishit.sharma
` (5 subsequent siblings)
7 siblings, 0 replies; 10+ messages in thread
From: nishit.sharma @ 2025-09-02 16:30 UTC (permalink / raw)
To: igt-dev, pravalika.gurram, himal.prasad.ghimiray, matthew.brost,
nishit.sharma
From: Nishit Sharma <nishit.sharma@intel.com>
xe_vm_madvise() is defined, which issues the madvise ioctl
DRM_IOCTL_XE_MADVISE for a VM range, advising the driver about the expected
usage or memory policy for the specified address range. The MADVISE ioctl
takes a pointer to struct drm_xe_madvise as one of its inputs. Depending on
the type of madvise operation (atomic, preferred loc or PAT), the required
members of struct drm_xe_madvise are initialized and passed to the ioctl.
Signed-off-by: Nishit Sharma <nishit.sharma@intel.com>
---
lib/xe/xe_ioctl.c | 57 +++++++++++++++++++++++++++++++++++++++++++++++
lib/xe/xe_ioctl.h | 5 ++++-
2 files changed, 61 insertions(+), 1 deletion(-)
diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
index 1e95af409..5608c8780 100644
--- a/lib/xe/xe_ioctl.c
+++ b/lib/xe/xe_ioctl.c
@@ -585,3 +585,60 @@ int64_t xe_wait_ufence(int fd, uint64_t *addr, uint64_t value,
igt_assert_eq(__xe_wait_ufence(fd, addr, value, exec_queue, &timeout), 0);
return timeout;
}
+
+int __xe_vm_madvise(int fd, uint32_t vm, uint64_t addr, uint64_t range,
+ uint64_t ext, uint32_t type, uint32_t op_val, uint16_t policy)
+{
+ struct drm_xe_madvise madvise = {
+ .type = type,
+ .extensions = ext,
+ .vm_id = vm,
+ .start = addr,
+ .range = range,
+ };
+
+ switch (type) {
+ case DRM_XE_MEM_RANGE_ATTR_ATOMIC:
+ madvise.atomic.val = op_val;
+ break;
+ case DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC:
+ madvise.preferred_mem_loc.devmem_fd = op_val;
+ madvise.preferred_mem_loc.migration_policy = policy;
+ igt_debug("madvise.preferred_mem_loc.devmem_fd = %d\n",
+ madvise.preferred_mem_loc.devmem_fd);
+ break;
+ case DRM_XE_MEM_RANGE_ATTR_PAT:
+ madvise.pat_index.val = op_val;
+ break;
+ default:
+ igt_warn("Unknown attribute\n");
+ return -EINVAL;
+ }
+
+ if (igt_ioctl(fd, DRM_IOCTL_XE_MADVISE, &madvise))
+ return -errno;
+
+ return 0;
+}
+
+/**
+ * xe_vm_madvise:
+ * @fd: xe device fd
+ * @vm: vm_id of the virtual range
+ * @addr: start of the virtual address range
+ * @range: size of the virtual address range
+ * @ext: Pointer to the first extension struct, if any
+ * @type: type of attribute
+ * @op_val: fd/atomic value/pat index, depending upon type of operation
+ * @policy: Page migration policy
+ *
+ * Initializes the relevant members of struct drm_xe_madvise and calls the
+ * MADVISE ioctl.
+ *
+ * Asserts in case of error returned by DRM_IOCTL_XE_MADVISE.
+ */
+void xe_vm_madvise(int fd, uint32_t vm, uint64_t addr, uint64_t range,
+ uint64_t ext, uint32_t type, uint32_t op_val, uint16_t policy)
+{
+ igt_assert_eq(__xe_vm_madvise(fd, vm, addr, range, ext, type, op_val, policy), 0);
+}
diff --git a/lib/xe/xe_ioctl.h b/lib/xe/xe_ioctl.h
index 6302d1a7d..8e13ffe65 100644
--- a/lib/xe/xe_ioctl.h
+++ b/lib/xe/xe_ioctl.h
@@ -99,5 +99,8 @@ int __xe_wait_ufence(int fd, uint64_t *addr, uint64_t value,
uint32_t exec_queue, int64_t *timeout);
int64_t xe_wait_ufence(int fd, uint64_t *addr, uint64_t value,
uint32_t exec_queue, int64_t timeout);
-
+int __xe_vm_madvise(int fd, uint32_t vm, uint64_t addr, uint64_t range, uint64_t ext,
+ uint32_t type, uint32_t op_val, uint16_t policy);
+void xe_vm_madvise(int fd, uint32_t vm, uint64_t addr, uint64_t range, uint64_t ext,
+ uint32_t type, uint32_t op_val, uint16_t policy);
#endif /* XE_IOCTL_H */
--
2.43.0
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH i-g-t v8 3/5] lib/xe: Add Helper to get memory attributes
2025-09-02 16:30 [PATCH i-g-t v8 0/5] Madvise Tests in IGT nishit.sharma
2025-09-02 16:30 ` [PATCH i-g-t v8 1/5] DO-NOT-MERGE: include/drm-uapi: Add drm_xe_madvise structure nishit.sharma
2025-09-02 16:30 ` [PATCH i-g-t v8 2/5] lib/xe: Add xe_vm_madvise ioctl support nishit.sharma
@ 2025-09-02 16:30 ` nishit.sharma
2025-09-02 16:30 ` [PATCH i-g-t v8 4/5] tests/intel/xe_exec_system_allocator: Add madvise-swizzle test nishit.sharma
` (4 subsequent siblings)
7 siblings, 0 replies; 10+ messages in thread
From: nishit.sharma @ 2025-09-02 16:30 UTC (permalink / raw)
To: igt-dev, pravalika.gurram, himal.prasad.ghimiray, matthew.brost,
nishit.sharma
From: Nishit Sharma <nishit.sharma@intel.com>
xe_vm_print_mem_attr_values_in_range() added, which calls the
DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS ioctl to get the different memory
attributes from the KMD and then prints the memory attributes returned by
the KMD for the different access policies: atomic access, preferred loc
and PAT index.
Signed-off-by: Nishit Sharma <nishit.sharma@intel.com>
---
lib/xe/xe_ioctl.c | 92 +++++++++++++++++++++++++++++++++++++++++++++++
lib/xe/xe_ioctl.h | 4 +++
2 files changed, 96 insertions(+)
diff --git a/lib/xe/xe_ioctl.c b/lib/xe/xe_ioctl.c
index 5608c8780..28b7d5bec 100644
--- a/lib/xe/xe_ioctl.c
+++ b/lib/xe/xe_ioctl.c
@@ -57,6 +57,98 @@ uint64_t xe_bb_size(int fd, uint64_t reqsize)
xe_get_default_alignment(fd));
}
+int xe_vm_number_vmas_in_range(int fd, struct drm_xe_vm_query_mem_range_attr *vmas_attr)
+{
+ if (igt_ioctl(fd, DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS, vmas_attr))
+ return -errno;
+ return 0;
+}
+
+int xe_vm_vma_attrs(int fd, struct drm_xe_vm_query_mem_range_attr *vmas_attr,
+ struct drm_xe_mem_range_attr *mem_attr)
+{
+ if (!mem_attr)
+ return -EINVAL;
+
+ vmas_attr->vector_of_mem_attr = (uintptr_t)mem_attr;
+
+ if (igt_ioctl(fd, DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS, vmas_attr))
+ return -errno;
+
+ return 0;
+}
+
+/**
+ * xe_vm_print_mem_attr_values_in_range:
+ * @fd: xe device fd
+ * @vm: vm_id of the virtual range
+ * @start: start of the virtual address range
+ * @range: size of the virtual address range
+ *
+ * Calls the DRM_IOCTL_XE_VM_QUERY_MEM_RANGE_ATTRS ioctl to get memory
+ * attributes for the different memory ranges from the KMD, then prints the
+ * memory attributes as returned by the KMD for the atomic, preferred loc
+ * and PAT index types.
+ *
+ * Returns 0 on success or a negative error code on failure
+ */
+
+int xe_vm_print_mem_attr_values_in_range(int fd, uint32_t vm, uint64_t start, uint64_t range)
+{
+
+ void *ptr_start, *ptr;
+ int err;
+ struct drm_xe_vm_query_mem_range_attr query = {
+ .vm_id = vm,
+ .start = start,
+ .range = range,
+ .num_mem_ranges = 0,
+ .sizeof_mem_range_attr = 0,
+ .vector_of_mem_attr = (uintptr_t)NULL,
+ };
+
+ igt_debug("mem_attr_values_in_range called start = %"PRIu64"\n range = %"PRIu64"\n",
+ start, range);
+
+ err = xe_vm_number_vmas_in_range(fd, &query);
+ if (err || !query.num_mem_ranges || !query.sizeof_mem_range_attr) {
+ igt_warn("ioctl failed for xe_vm_number_vmas_in_range\n");
+ igt_debug("vmas_in_range err = %d query.num_mem_ranges = %u query.sizeof_mem_range_attr=%lld\n",
+ err, query.num_mem_ranges, query.sizeof_mem_range_attr);
+ return err;
+ }
+
+ /* Allocate buffer for the memory region attributes */
+ ptr = malloc(query.num_mem_ranges * query.sizeof_mem_range_attr);
+ ptr_start = ptr;
+
+ if (!ptr)
+ return -ENOMEM;
+
+ err = xe_vm_vma_attrs(fd, &query, ptr);
+ if (err) {
+ igt_warn("ioctl failed for vma_attrs err = %d\n", err);
+ free(ptr_start);
+ return err;
+ }
+
+ /* Iterate over the returned memory region attributes */
+ for (unsigned int i = 0; i < query.num_mem_ranges; ++i) {
+ struct drm_xe_mem_range_attr *mem_attrs = (struct drm_xe_mem_range_attr *)ptr;
+
+ igt_debug("vma_id = %d\nvma_start = 0x%016llx\nvma_end = 0x%016llx\n"
+ "vma:atomic = %d\nvma:pat_index = %d\nvma:preferred_loc_region = %d\n"
+ "vma:preferred_loc_devmem_fd = %d\n\n\n", i, mem_attrs->start,
+ mem_attrs->end,
+ mem_attrs->atomic.val, mem_attrs->pat_index.val,
+ mem_attrs->preferred_mem_loc.migration_policy,
+ mem_attrs->preferred_mem_loc.devmem_fd);
+
+ ptr += query.sizeof_mem_range_attr;
+ }
+
+ free(ptr_start);
+ return 0;
+}
+
uint32_t xe_vm_create(int fd, uint32_t flags, uint64_t ext)
{
struct drm_xe_vm_create create = {
diff --git a/lib/xe/xe_ioctl.h b/lib/xe/xe_ioctl.h
index 8e13ffe65..14245eeec 100644
--- a/lib/xe/xe_ioctl.h
+++ b/lib/xe/xe_ioctl.h
@@ -103,4 +103,8 @@ int __xe_vm_madvise(int fd, uint32_t vm, uint64_t addr, uint64_t range, uint64_t
uint32_t type, uint32_t op_val, uint16_t policy);
void xe_vm_madvise(int fd, uint32_t vm, uint64_t addr, uint64_t range, uint64_t ext,
uint32_t type, uint32_t op_val, uint16_t policy);
+int xe_vm_number_vmas_in_range(int fd, struct drm_xe_vm_query_mem_range_attr *vmas_attr);
+int xe_vm_vma_attrs(int fd, struct drm_xe_vm_query_mem_range_attr *vmas_attr,
+ struct drm_xe_mem_range_attr *mem_attr);
+int xe_vm_print_mem_attr_values_in_range(int fd, uint32_t vm, uint64_t start, uint64_t range);
#endif /* XE_IOCTL_H */
--
2.43.0
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH i-g-t v8 4/5] tests/intel/xe_exec_system_allocator: Add madvise-swizzle test
2025-09-02 16:30 [PATCH i-g-t v8 0/5] Madvise Tests in IGT nishit.sharma
` (2 preceding siblings ...)
2025-09-02 16:30 ` [PATCH i-g-t v8 3/5] lib/xe: Add Helper to get memory attributes nishit.sharma
@ 2025-09-02 16:30 ` nishit.sharma
2025-09-03 5:50 ` Matthew Brost
2025-09-02 16:30 ` [PATCH i-g-t v8 5/5] tests/intel/xe_exec_system_allocator: Add atomic_batch test in IGT nishit.sharma
` (3 subsequent siblings)
7 siblings, 1 reply; 10+ messages in thread
From: nishit.sharma @ 2025-09-02 16:30 UTC (permalink / raw)
To: igt-dev, pravalika.gurram, himal.prasad.ghimiray, matthew.brost,
nishit.sharma
From: Nishit Sharma <nishit.sharma@intel.com>
Introduced the madvise-swizzle test, which is also called in combination
with other tests. In this test the buffer object's preferred location is
swizzled between system and device memory.
Signed-off-by: Nishit Sharma <nishit.sharma@intel.com>
---
tests/intel/xe_exec_system_allocator.c | 39 ++++++++++++++++++++++++++
1 file changed, 39 insertions(+)
diff --git a/tests/intel/xe_exec_system_allocator.c b/tests/intel/xe_exec_system_allocator.c
index e7f3d423a..16f907ab5 100644
--- a/tests/intel/xe_exec_system_allocator.c
+++ b/tests/intel/xe_exec_system_allocator.c
@@ -777,6 +777,8 @@ partial(int fd, struct drm_xe_engine_class_instance *eci, unsigned int flags)
#define PROCESSES (0x1 << 24)
#define PREFETCH_BENCHMARK (0x1 << 25)
#define PREFETCH_SYS_BENCHMARK (0x1 << 26)
+#define MADVISE_SWIZZLE (0x1 << 27)
+#define MADVISE_OP (0x1 << 28)
#define N_MULTI_FAULT 4
@@ -885,7 +887,9 @@ partial(int fd, struct drm_xe_engine_class_instance *eci, unsigned int flags)
* arg[1]:
*
* @malloc: malloc single buffer for all execs, issue a command which will trigger multiple faults
+ * @malloc-madvise: malloc single buffer for all execs, issue a command which will trigger multiple faults, performs madvise operation
* @malloc-prefetch: malloc single buffer for all execs, prefetch buffer before each exec
+ * @malloc-prefetch-madvise: malloc single buffer for all execs, prefetch buffer before each exec, performs madvise operation
* @malloc-multi-fault: malloc single buffer for all execs
* @malloc-fork-read: malloc single buffer for all execs, fork a process to read test output
* @malloc-fork-read-after: malloc single buffer for all execs, fork a process to read test output, check again after fork returns in parent
@@ -897,6 +901,7 @@ partial(int fd, struct drm_xe_engine_class_instance *eci, unsigned int flags)
* @mmap: mmap single buffer for all execs
* @mmap-prefetch: mmap single buffer for all execs, prefetch buffer before each exec
* @mmap-remap: mmap and mremap a buffer for all execs
+ * @mmap-remap-madvise: mmap and mremap a buffer for all execs, performs madvise operations
* @mmap-remap-dontunmap: mmap and mremap a buffer with dontunmap flag for all execs
* @mmap-remap-ro: mmap and mremap a read-only buffer for all execs
* @mmap-remap-ro-dontunmap: mmap and mremap a read-only buffer with dontunmap flag for all execs
@@ -916,8 +921,10 @@ partial(int fd, struct drm_xe_engine_class_instance *eci, unsigned int flags)
* @mmap-file-mlock: mmap and mlock single buffer, with file backing, for all execs
* @mmap-race: mmap single buffer for all execs with race between cpu and gpu access
* @free: malloc and free buffer for each exec
+ * @free-madvise: malloc and free buffer for each exec, performs madvise operation
* @free-race: malloc and free buffer for each exec with race between cpu and gpu access
* @new: malloc a new buffer for each exec
+ * @new-madvise: malloc a new buffer for each exec, performs madvise operation
* @new-prefetch: malloc a new buffer and prefetch for each exec
* @new-race: malloc a new buffer for each exec with race between cpu and gpu access
* @new-bo-map: malloc a new buffer or map BO for each exec
@@ -999,6 +1006,29 @@ static void igt_require_hugepages(void)
"No huge pages available!\n");
}
+static void
+xe_vm_parse_execute_madvise(int fd, uint32_t vm, struct test_exec_data * data,
+ size_t bo_size, struct drm_xe_engine_class_instance *eci,
+ uint64_t addr, unsigned int flags)
+{
+ if (flags & MADVISE_SWIZZLE) {
+ for (int i_loc = 0; i_loc < 2; i_loc++) {
+ uint64_t preferred_loc;
+
+ if (i_loc % 2 == 0)
+ preferred_loc = DRM_XE_PREFERRED_LOC_DEFAULT_SYSTEM;
+ else
+ preferred_loc = DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE;
+
+ xe_vm_madvise(fd, vm, to_user_pointer(data), bo_size, 0,
+ DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC,
+ preferred_loc,
+ 0);
+ }
+ }
+
+}
+
static void
test_exec(int fd, struct drm_xe_engine_class_instance *eci,
int n_exec_queues, int n_execs, size_t bo_size,
@@ -1134,6 +1164,9 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
addr = to_user_pointer(data);
+ if (flags & MADVISE_OP)
+ xe_vm_parse_execute_madvise(fd, vm, data, bo_size, eci, addr, flags);
+
if (flags & BO_UNMAP) {
bo_flags = DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM;
bo = xe_bo_create(fd, vm, bo_size,
@@ -1790,7 +1823,9 @@ igt_main
struct drm_xe_engine_class_instance *hwe;
const struct section sections[] = {
{ "malloc", 0 },
+ { "malloc-madvise", MADVISE_OP | MADVISE_SWIZZLE },
{ "malloc-prefetch", PREFETCH },
+ { "malloc-prefetch-madvise", PREFETCH | MADVISE_OP | MADVISE_SWIZZLE },
{ "malloc-multi-fault", MULTI_FAULT },
{ "malloc-fork-read", FORK_READ },
{ "malloc-fork-read-after", FORK_READ | FORK_READ_AFTER },
@@ -1802,6 +1837,7 @@ igt_main
{ "mmap", MMAP },
{ "mmap-prefetch", MMAP | PREFETCH },
{ "mmap-remap", MMAP | MREMAP },
+ { "mmap-remap-madvise", MMAP | MREMAP | MADVISE_OP | MADVISE_SWIZZLE },
{ "mmap-remap-dontunmap", MMAP | MREMAP | DONTUNMAP },
{ "mmap-remap-ro", MMAP | MREMAP | READ_ONLY_REMAP },
{ "mmap-remap-ro-dontunmap", MMAP | MREMAP | DONTUNMAP |
@@ -1828,13 +1864,16 @@ igt_main
{ "mmap-file-mlock", MMAP | LOCK | FILE_BACKED },
{ "mmap-race", MMAP | RACE },
{ "free", NEW | FREE },
+ { "free-madvise", NEW | FREE | MADVISE_OP | MADVISE_SWIZZLE },
{ "free-race", NEW | FREE | RACE },
{ "new", NEW },
+ { "new-madvise", NEW | MADVISE_OP | MADVISE_SWIZZLE },
{ "new-prefetch", NEW | PREFETCH },
{ "new-race", NEW | RACE },
{ "new-bo-map", NEW | BO_MAP },
{ "new-busy", NEW | BUSY },
{ "mmap-free", MMAP | NEW | FREE },
+ { "mmap-free-madvise", MMAP | NEW | FREE | MADVISE_OP | MADVISE_SWIZZLE },
{ "mmap-free-huge", MMAP | NEW | FREE | HUGE_PAGE },
{ "mmap-free-race", MMAP | NEW | FREE | RACE },
{ "mmap-new", MMAP | NEW },
--
2.43.0
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH i-g-t v8 5/5] tests/intel/xe_exec_system_allocator: Add atomic_batch test in IGT
2025-09-02 16:30 [PATCH i-g-t v8 0/5] Madvise Tests in IGT nishit.sharma
` (3 preceding siblings ...)
2025-09-02 16:30 ` [PATCH i-g-t v8 4/5] tests/intel/xe_exec_system_allocator: Add madvise-swizzle test nishit.sharma
@ 2025-09-02 16:30 ` nishit.sharma
2025-09-03 3:58 ` ✓ Xe.CI.BAT: success for Madvise Tests in IGT (rev8) Patchwork
` (2 subsequent siblings)
7 siblings, 0 replies; 10+ messages in thread
From: nishit.sharma @ 2025-09-02 16:30 UTC (permalink / raw)
To: igt-dev, pravalika.gurram, himal.prasad.ghimiray, matthew.brost,
nishit.sharma
From: Nishit Sharma <nishit.sharma@intel.com>
The ATOMIC_BATCH flag is introduced; when it is set, an MI_ATOMIC |
MI_ATOMIC_INC operation is emitted. This avoids writing another function
that performs atomic increment operations. The ATOMIC_BATCH flag is passed
as an argument to write_dword(); if set, the value written at the passed
address is incremented by the ATOMIC_INC operation. This flag is used
across all memory operations to verify that atomic operations work.
Added the MADVISE_SWIZZLE flag, which covers tests related to
migration_policy.
Signed-off-by: Nishit Sharma <nishit.sharma@intel.com>
---
tests/intel/xe_exec_system_allocator.c | 434 ++++++++++++++++++++++---
1 file changed, 394 insertions(+), 40 deletions(-)
diff --git a/tests/intel/xe_exec_system_allocator.c b/tests/intel/xe_exec_system_allocator.c
index 16f907ab5..5a406f9fa 100644
--- a/tests/intel/xe_exec_system_allocator.c
+++ b/tests/intel/xe_exec_system_allocator.c
@@ -17,6 +17,7 @@
#include <time.h>
#include "igt.h"
+#include "intel_pat.h"
#include "lib/igt_syncobj.h"
#include "lib/intel_compute.h"
#include "lib/intel_reg.h"
@@ -31,6 +32,15 @@
#define QUARTER_SEC (NSEC_PER_SEC / 4)
#define FIVE_SEC (5LL * NSEC_PER_SEC)
+struct test_exec_data {
+ uint32_t batch[32];
+ uint64_t pad;
+ uint64_t vm_sync;
+ uint64_t exec_sync;
+ uint32_t data;
+ uint32_t expected_data;
+};
+
struct batch_data {
uint32_t batch[16];
uint64_t pad;
@@ -38,6 +48,7 @@ struct batch_data {
uint32_t expected_data;
};
+#define VAL_ATOMIC_EXPECTED 56
#define WRITE_VALUE(data__, i__) ({ \
if (!(data__)->expected_data) \
(data__)->expected_data = rand() << 12 | (i__); \
@@ -54,10 +65,19 @@ static void __write_dword(uint32_t *batch, uint64_t sdi_addr, uint32_t wdata,
batch[(*idx)++] = wdata;
}
-static void write_dword(uint32_t *batch, uint64_t sdi_addr, uint32_t wdata,
- int *idx)
+static void write_dword(struct test_exec_data *data, uint64_t sdi_addr, uint32_t wdata,
+ int *idx, bool atomic)
{
- __write_dword(batch, sdi_addr, wdata, idx);
+ uint32_t *batch = data->batch;
+
+ if (atomic) {
+ data->data = 55;
+ batch[(*idx)++] = MI_ATOMIC | MI_ATOMIC_INC;
+ batch[(*idx)++] = sdi_addr;
+ batch[(*idx)++] = sdi_addr >> 32;
+ } else
+ __write_dword(batch, sdi_addr, wdata, idx);
+
batch[(*idx)++] = MI_BATCH_BUFFER_END;
}
@@ -304,7 +324,7 @@ static void touch_all_pages(int fd, uint32_t exec_queue, void *ptr,
uint64_t sdi_addr = addr + sdi_offset;
int b = 0;
- write_dword(data->batch, sdi_addr, WRITE_VALUE(data, i), &b);
+ write_dword((struct test_exec_data *)data, sdi_addr, WRITE_VALUE(data, i), &b, false);
igt_assert(b <= ARRAY_SIZE(data->batch));
}
@@ -447,6 +467,54 @@ static void __aligned_partial_free(struct aligned_alloc_type *aligned_alloc_typ
* SUBTEST: processes-evict-malloc-mix-bo
* Description: multi-process trigger eviction of VRAM allocated via malloc and BO create
* Test category: stress test
+ *
+ * SUBTEST: madvise-multi-vma
+ * Description: performs multiple madvise operations on multiple virtual memory areas using atomic device attributes
+ * Test category: functionality test
+ *
+ * SUBTEST: madvise-split-vma
+ * Description: perform madvise operations on multiple type VMAs (BO and CPU VMAs)
+ * Test category: functionality test
+ *
+ * SUBTEST: madvise-atomic-vma
+ * Description: performs madvise atomic operations on a BO in VRAM/SMEM when the atomic attribute is global/device
+ * Test category: functionality test
+ *
+ * SUBTEST: madvise-split-vma-with-mapping
+ * Description: performs prefetch and page migration
+ * Test category: functionality test
+ *
+ * SUBTEST: madvise-preffered-loc-atomic-vram
+ * Description: performs both atomic and preferred loc madvise operations with atomic device attributes set
+ * Test category: functionality test
+ *
+ * SUBTEST: madvise-preffered-loc-atomic-gl
+ * Description: performs both atomic and preferred loc madvise operations with atomic global attributes set
+ * Test category: functionality test
+ *
+ * SUBTEST: madvise-preffered-loc-atomic-cpu
+ * Description: performs both atomic and preferred loc madvise operations with atomic cpu attributes set
+ * Test category: functionality test
+ *
+ * SUBTEST: madvise-preffered-loc-sram-migrate-pages
+ * Description: performs preferred loc madvise operations, migrating all pages to smem
+ * Test category: functionality test
+ *
+ * SUBTEST: madvise-no-range-invalidate-same-attr
+ * Description: performs atomic global madvise operation, prefetch and again madvise operation with same atomic attribute
+ * Test category: functionality test
+ *
+ * SUBTEST: madvise-range-invalidate-change-attr
+ * Description: performs atomic global madvise operation, prefetch and again madvise operation with different atomic attribute
+ * Test category: functionality test
+ *
+ * SUBTEST: madvise-preffered-loc-atomic-und
+ * Description: Tests madvise with preferred location set for atomic operations, but with an undefined atomic attribute
+ * Test category: functionality test
+ *
+ * SUBTEST: madvise-atomic-inc
+ * Description: Tests madvise atomic operations
+ * Test category: functionality test
*/
static void
@@ -701,7 +769,8 @@ partial(int fd, struct drm_xe_engine_class_instance *eci, unsigned int flags)
uint64_t sdi_addr = addr + sdi_offset;
int b = 0;
- write_dword(data[i].batch, sdi_addr, WRITE_VALUE(&data[i], i), &b);
+ write_dword((struct test_exec_data *)&data[i], sdi_addr, WRITE_VALUE(&data[i], i),
+ &b, false);
igt_assert(b <= ARRAY_SIZE(data[i].batch));
if (!i)
@@ -779,6 +848,19 @@ partial(int fd, struct drm_xe_engine_class_instance *eci, unsigned int flags)
#define PREFETCH_SYS_BENCHMARK (0x1 << 26)
#define MADVISE_SWIZZLE (0x1 << 27)
#define MADVISE_OP (0x1 << 28)
+#define ATOMIC_BATCH (0x1 << 29)
+#define MIGRATE_ALL_PAGES (0x1 << 30)
+#define PREFERRED_LOC_ATOMIC_DEVICE (0x1ull << 31)
+#define PREFERRED_LOC_ATOMIC_GL (0x1ull << 32)
+#define PREFERRED_LOC_ATOMIC_CPU (0x1ull << 33)
+#define MADVISE_MULTI_VMA (0x1ull << 34)
+#define MADVISE_SPLIT_VMA (0x1ull << 35)
+#define MADVISE_ATOMIC_VMA (0x1ull << 36)
+#define PREFETCH_SPLIT_VMA (0x1ull << 37)
+#define PREFETCH_CHANGE_ATTR (0x1ull << 38)
+#define PREFETCH_SAME_ATTR (0x1ull << 39)
+#define PREFERRED_LOC_ATOMIC_UND (0x1ull << 40)
+#define MADVISE_ATOMIC_DEVICE (0x1ull << 41)
#define N_MULTI_FAULT 4
@@ -887,7 +969,7 @@ partial(int fd, struct drm_xe_engine_class_instance *eci, unsigned int flags)
* arg[1]:
*
* @malloc: malloc single buffer for all execs, issue a command which will trigger multiple faults
- * @malloc-madvise: malloc single buffer for all execs, issue a command which will trigger multiple faults, perfoems madvise operation
+ * @malloc-madvise: malloc single buffer for all execs, issue a command which will trigger multiple faults, performs madvise operation
* @malloc-prefetch: malloc single buffer for all execs, prefetch buffer before each exec
* @malloc-prefetch-madvise: malloc single buffer for all execs, prefetch buffer before each exec, performs madvise operation
* @malloc-multi-fault: malloc single buffer for all execs
@@ -988,16 +1070,6 @@ partial(int fd, struct drm_xe_engine_class_instance *eci, unsigned int flags)
* Description: Create multiple threads with a faults on different hardware engines to same addresses, racing between CPU and GPU access
* Test category: stress test
*/
-
-struct test_exec_data {
- uint32_t batch[32];
- uint64_t pad;
- uint64_t vm_sync;
- uint64_t exec_sync;
- uint32_t data;
- uint32_t expected_data;
-};
-
static void igt_require_hugepages(void)
{
igt_skip_on_f(!igt_get_meminfo("HugePages_Total"),
@@ -1007,10 +1079,39 @@ static void igt_require_hugepages(void)
}
static void
-xe_vm_parse_execute_madvise(int fd, uint32_t vm, struct test_exec_data * data,
+xe_vm_madvixe_pat_attr(int fd, uint32_t vm, uint64_t addr, uint64_t range,
+ int pat_index)
+{
+ xe_vm_madvise(fd, vm, addr, range, 0,
+ DRM_XE_MEM_RANGE_ATTR_PAT, pat_index, 0);
+}
+
+static void
+xe_vm_madvise_atomic_attr(int fd, uint32_t vm, uint64_t addr, uint64_t range,
+ int mem_attr)
+{
+ xe_vm_madvise(fd, vm, addr, range, 0,
+ DRM_XE_MEM_RANGE_ATTR_ATOMIC,
+ mem_attr, 0);
+}
+
+static void
+xe_vm_madvise_migrate_pages(int fd, uint32_t vm, uint64_t addr, uint64_t range)
+{
+ xe_vm_madvise(fd, vm, addr, range, 0,
+ DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC,
+ DRM_XE_PREFERRED_LOC_DEFAULT_SYSTEM,
+ DRM_XE_MIGRATE_ALL_PAGES);
+}
+
+static void
+xe_vm_parse_execute_madvise(int fd, uint32_t vm, struct test_exec_data *data,
size_t bo_size, struct drm_xe_engine_class_instance *eci,
- uint64_t addr, unsigned int flags)
+ uint64_t addr, unsigned long long flags,
+ struct drm_xe_sync *sync)
{
+ uint32_t bo_flags, bo = 0;
+
if (flags & MADVISE_SWIZZLE) {
for (int i_loc = 0; i_loc < 2; i_loc++) {
uint64_t preferred_loc;
@@ -1027,13 +1128,185 @@ xe_vm_parse_execute_madvise(int fd, uint32_t vm, struct test_exec_data * data,
}
}
+ if (flags & MADVISE_ATOMIC_DEVICE)
+ xe_vm_madvise_atomic_attr(fd, vm, to_user_pointer(data), bo_size,
+ DRM_XE_ATOMIC_DEVICE);
+
+ if (flags & PREFERRED_LOC_ATOMIC_UND) {
+ xe_vm_madvise_migrate_pages(fd, vm, to_user_pointer(data), bo_size);
+
+ xe_vm_madvise_atomic_attr(fd, vm, to_user_pointer(data), bo_size,
+ DRM_XE_ATOMIC_UNDEFINED);
+ }
+
+ if (flags & PREFERRED_LOC_ATOMIC_DEVICE) {
+ xe_vm_madvise_migrate_pages(fd, vm, to_user_pointer(data), bo_size);
+
+ xe_vm_madvise_atomic_attr(fd, vm, to_user_pointer(data), bo_size,
+ DRM_XE_ATOMIC_DEVICE);
+ }
+
+ if (flags & PREFERRED_LOC_ATOMIC_GL) {
+ xe_vm_madvise_migrate_pages(fd, vm, to_user_pointer(data), bo_size);
+
+ xe_vm_madvise_atomic_attr(fd, vm, to_user_pointer(data), bo_size,
+ DRM_XE_ATOMIC_GLOBAL);
+ }
+
+ if (flags & PREFERRED_LOC_ATOMIC_CPU) {
+ xe_vm_madvise_migrate_pages(fd, vm, to_user_pointer(data), bo_size);
+
+ xe_vm_madvise_atomic_attr(fd, vm, to_user_pointer(data), bo_size,
+ DRM_XE_ATOMIC_CPU);
+ }
+
+ if (flags & MADVISE_MULTI_VMA) {
+ if (bo_size)
+ bo_size = ALIGN(bo_size, SZ_4K);
+ xe_vm_madvise_atomic_attr(fd, vm, to_user_pointer(data) + bo_size/2,
+ bo_size/2, DRM_XE_ATOMIC_DEVICE);
+
+ xe_vm_madvixe_pat_attr(fd, vm, to_user_pointer(data) + bo_size/2, bo_size/2,
+ intel_get_pat_idx_wb(fd));
+
+ xe_vm_madvise_atomic_attr(fd, vm, to_user_pointer(data) + bo_size,
+ bo_size, DRM_XE_ATOMIC_DEVICE);
+
+ xe_vm_madvixe_pat_attr(fd, vm, to_user_pointer(data), bo_size,
+ intel_get_pat_idx_wb(fd));
+ }
+
+ if (flags & MADVISE_SPLIT_VMA) {
+ if (bo_size)
+ bo_size = ALIGN(bo_size, SZ_4K);
+
+ bo_flags = DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM;
+ bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, eci->gt_id),
+ bo_flags);
+ xe_vm_bind_async(fd, vm, 0, bo, 0, to_user_pointer(data) + bo_size/2,
+ bo_size/2, 0, 0);
+
+ __xe_vm_bind_assert(fd, vm, 0, 0, 0, to_user_pointer(data) + bo_size/2,
+ bo_size/2, DRM_XE_VM_BIND_OP_MAP,
+ DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR, sync,
+ 1, 0, 0);
+ xe_wait_ufence(fd, &data[0].vm_sync, USER_FENCE_VALUE, 0, FIVE_SEC);
+ data[0].vm_sync = 0;
+ gem_close(fd, bo);
+ bo = 0;
+
+ xe_vm_madvise_atomic_attr(fd, vm, to_user_pointer(data),
+ bo_size/2,
+ DRM_XE_ATOMIC_DEVICE);
+ }
+
+ if (flags & MADVISE_ATOMIC_VMA) {
+ if (bo_size)
+ bo_size = ALIGN(bo_size, SZ_4K);
+
+ bo_flags = DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM;
+ bo = xe_bo_create(fd, vm, bo_size, vram_if_possible(fd, eci->gt_id), bo_flags);
+ xe_vm_bind_async(fd, vm, 0, bo, 0, to_user_pointer(data), bo_size, 0, 0);
+
+ __xe_vm_bind_assert(fd, vm, 0, 0, 0, to_user_pointer(data), bo_size,
+ DRM_XE_VM_BIND_OP_MAP,
+ DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR, sync,
+ 1, 0, 0);
+ xe_wait_ufence(fd, &data[0].vm_sync, USER_FENCE_VALUE, 0, FIVE_SEC);
+ data[0].vm_sync = 0;
+ gem_close(fd, bo);
+ bo = 0;
+
+ xe_vm_madvise_atomic_attr(fd, vm, to_user_pointer(data), bo_size/2,
+ DRM_XE_ATOMIC_GLOBAL);
+ }
+}
+
+static void
+madvise_prefetch_op(int fd, uint32_t vm, uint64_t addr, size_t bo_size,
+ unsigned long long flags, struct test_exec_data *data)
+{
+ uint32_t val;
+
+ if (flags & PREFETCH_SPLIT_VMA) {
+ bo_size = ALIGN(bo_size, SZ_4K);
+
+ xe_vm_prefetch_async(fd, vm, 0, 0, addr, bo_size, NULL, 0, 0);
+
+ val = xe_vm_print_mem_attr_values_in_range(fd, vm, addr, bo_size);
+
+ igt_debug("num_vmas before madvise = %d\n", val);
+
+ xe_vm_madvise_migrate_pages(fd, vm, to_user_pointer(data), bo_size/2);
+
+ val = xe_vm_print_mem_attr_values_in_range(fd, vm, addr, bo_size);
+
+ igt_debug("num_vmas after madvise = %d\n", val);
+ } else if (flags & PREFETCH_SAME_ATTR) {
+ xe_vm_madvise_atomic_attr(fd, vm, to_user_pointer(data), bo_size,
+ DRM_XE_ATOMIC_GLOBAL);
+ val = xe_vm_print_mem_attr_values_in_range(fd, vm, addr, bo_size);
+
+ xe_vm_prefetch_async(fd, vm, 0, 0, addr, bo_size, NULL, 0,
+ DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC);
+
+ xe_vm_madvise_atomic_attr(fd, vm, to_user_pointer(data), bo_size/2,
+ DRM_XE_ATOMIC_GLOBAL);
+ } else if (flags & PREFETCH_CHANGE_ATTR) {
+ xe_vm_madvise_atomic_attr(fd, vm, to_user_pointer(data), bo_size,
+ DRM_XE_ATOMIC_GLOBAL);
+
+ val = xe_vm_print_mem_attr_values_in_range(fd, vm, addr, bo_size);
+
+ xe_vm_prefetch_async(fd, vm, 0, 0, addr, bo_size, NULL, 0,
+ DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC);
+
+ xe_vm_madvise_atomic_attr(fd, vm, to_user_pointer(data), bo_size,
+ DRM_XE_ATOMIC_DEVICE);
+
+ val = xe_vm_print_mem_attr_values_in_range(fd, vm, addr, bo_size);
+ }
+}
+
+static void
+madvise_op_data_store(uint64_t addr, int i, int idx, size_t bo_size,
+ struct test_exec_data *data,
+ uint64_t *batch_offset,
+ uint64_t *batch_addr, uint64_t *sdi_offset, uint64_t *sdi_addr,
+ unsigned long long flags,
+ uint64_t *split_vma_offset)
+{
+ int b;
+
+ if (flags & MADVISE_MULTI_VMA) {
+ addr = addr + i * bo_size;
+ data = from_user_pointer(addr);
+ *batch_offset = (size_t)((char *)&(data[idx].batch) - (char *)data);
+ *batch_addr = addr + *batch_offset;
+ *sdi_offset = (size_t)((char *)&(data[idx].data) - (char *)data);
+ *sdi_addr = addr + *sdi_offset;
+
+ b = 0;
+ write_dword(&data[idx], *sdi_addr,
+ WRITE_VALUE(&data[idx], idx), &b,
+ flags & ATOMIC_BATCH ? true : false);
+ igt_assert(b <= ARRAY_SIZE(data[idx].batch));
+ }
+
+ if (flags & MADVISE_SPLIT_VMA) {
+ b = 0;
+ write_dword(&data[idx], *sdi_addr,
+ WRITE_VALUE(&data[idx], idx), &b,
+ flags & ATOMIC_BATCH ? true : false);
+ igt_assert(b <= ARRAY_SIZE(data[idx].batch));
+ }
}
static void
test_exec(int fd, struct drm_xe_engine_class_instance *eci,
int n_exec_queues, int n_execs, size_t bo_size,
size_t stride, uint32_t vm, void *alloc, pthread_barrier_t *barrier,
- unsigned int flags)
+ unsigned long long flags)
{
uint64_t addr;
struct drm_xe_sync sync[1] = {
@@ -1046,7 +1319,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
.syncs = to_user_pointer(sync),
};
uint32_t exec_queues[MAX_N_EXEC_QUEUES];
- struct test_exec_data *data, *next_data = NULL;
+ struct test_exec_data *data, *next_data = NULL, *original_data;
uint32_t bo_flags;
uint32_t bo = 0, bind_sync = 0;
void **pending_free;
@@ -1165,7 +1438,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
addr = to_user_pointer(data);
if (flags & MADVISE_OP)
- xe_vm_parse_execute_madvise(fd, vm, data, bo_size, eci, addr, flags);
+ xe_vm_parse_execute_madvise(fd, vm, data, bo_size, eci, addr, flags, sync);
if (flags & BO_UNMAP) {
bo_flags = DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM;
@@ -1240,6 +1513,16 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
bool fault_inject = (FAULT & flags) && i == n_execs / 2;
bool fault_injected = (FAULT & flags) && i > n_execs;
+ uint64_t split_vma_offset;
+
+ if (flags & MADVISE_OP) {
+ if (flags & MADVISE_MULTI_VMA)
+ original_data = data;
+
+ madvise_op_data_store(addr, i, idx, bo_size, data, &batch_offset,
+ &batch_addr, &sdi_offset, &sdi_addr, flags,
+ &split_vma_offset);
+ }
if (barrier)
pthread_barrier_wait(barrier);
@@ -1249,30 +1532,40 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
__write_dword(data[idx].batch,
sdi_addr + j * orig_size,
WRITE_VALUE(&data[idx], idx), &b);
- write_dword(data[idx].batch, sdi_addr + j * orig_size,
- WRITE_VALUE(&data[idx], idx), &b);
+ write_dword(&data[idx], sdi_addr + j * orig_size,
+ WRITE_VALUE(&data[idx], idx), &b,
+ flags & ATOMIC_BATCH ? true : false);
igt_assert(b <= ARRAY_SIZE(data[idx].batch));
} else if (!(flags & EVERY_OTHER_CHECK)) {
+ if (!(flags & MADVISE_SPLIT_VMA)) {
b = 0;
- write_dword(data[idx].batch, sdi_addr,
- WRITE_VALUE(&data[idx], idx), &b);
+ write_dword(&data[idx], sdi_addr,
+ WRITE_VALUE(&data[idx], idx), &b,
+ flags & ATOMIC_BATCH ? true : false);
igt_assert(b <= ARRAY_SIZE(data[idx].batch));
+ }
+ if (flags & PREFETCH)
+ madvise_prefetch_op(fd, vm, addr, bo_size, flags, data);
} else if (flags & EVERY_OTHER_CHECK && !odd(i)) {
b = 0;
- write_dword(data[idx].batch, sdi_addr,
- WRITE_VALUE(&data[idx], idx), &b);
+ write_dword(&data[idx], sdi_addr,
+ WRITE_VALUE(&data[idx], idx), &b,
+ flags & ATOMIC_BATCH ? true : false);
igt_assert(b <= ARRAY_SIZE(data[idx].batch));
aligned_alloc_type = __aligned_alloc(aligned_size, bo_size);
next_data = aligned_alloc_type.ptr;
igt_assert(next_data);
+
+ xe_vm_parse_execute_madvise(fd, vm, data, bo_size, eci, addr, flags, sync);
__aligned_partial_free(&aligned_alloc_type);
b = 0;
- write_dword(data[next_idx].batch,
+ write_dword(&data[next_idx],
to_user_pointer(next_data) +
(char *)&data[next_idx].data - (char *)data,
- WRITE_VALUE(&data[next_idx], next_idx), &b);
+ WRITE_VALUE(&data[next_idx], next_idx), &b,
+ flags & ATOMIC_BATCH ? true : false);
igt_assert(b <= ARRAY_SIZE(data[next_idx].batch));
}
@@ -1306,7 +1599,6 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
exec.address = batch_addr * 2;
else
exec.address = batch_addr;
-
if (fault_injected) {
err = __xe_exec(fd, &exec);
igt_assert(err == -ENOENT);
@@ -1326,9 +1618,19 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
exec_queues[e], &timeout);
igt_assert(err == -ETIME || err == -EIO);
} else {
- xe_wait_ufence(fd, exec_ufence ? exec_ufence :
- &data[idx].exec_sync, USER_FENCE_VALUE,
- exec_queues[e], FIVE_SEC);
+ if (flags & PREFERRED_LOC_ATOMIC_CPU || flags & PREFERRED_LOC_ATOMIC_UND) {
+ int64_t timeout = QUARTER_SEC;
+
+ err = __xe_wait_ufence(fd, exec_ufence ? exec_ufence :
+ &data[idx].exec_sync,
+ USER_FENCE_VALUE,
+ exec_queues[e], &timeout);
+ if (err)
+ goto cleanup;
+ } else
+ xe_wait_ufence(fd, exec_ufence ? exec_ufence :
+ &data[idx].exec_sync, USER_FENCE_VALUE,
+ exec_queues[e], FIVE_SEC);
if (flags & LOCK && !i)
munlock(data, bo_size);
@@ -1378,17 +1680,25 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
if (flags & FORK_READ) {
igt_fork(child, 1)
igt_assert_eq(data[idx].data,
- READ_VALUE(&data[idx]));
+ flags & ATOMIC_BATCH
+ ? VAL_ATOMIC_EXPECTED
+ : READ_VALUE(&data[idx]));
if (!(flags & FORK_READ_AFTER))
igt_assert_eq(data[idx].data,
- READ_VALUE(&data[idx]));
+ flags & ATOMIC_BATCH
+ ? VAL_ATOMIC_EXPECTED
+ : READ_VALUE(&data[idx]));
igt_waitchildren();
if (flags & FORK_READ_AFTER)
igt_assert_eq(data[idx].data,
- READ_VALUE(&data[idx]));
+ flags & ATOMIC_BATCH
+ ? VAL_ATOMIC_EXPECTED
+ : READ_VALUE(&data[idx]));
} else {
igt_assert_eq(data[idx].data,
- READ_VALUE(&data[idx]));
+ flags & ATOMIC_BATCH
+ ? VAL_ATOMIC_EXPECTED
+ : READ_VALUE(&data[idx]));
if (flags & PREFETCH_SYS_BENCHMARK) {
struct timespec tv = {};
u64 start, end;
@@ -1415,13 +1725,17 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
((void *)data) + j * orig_size;
igt_assert_eq(__data[idx].data,
- READ_VALUE(&data[idx]));
+ flags & ATOMIC_BATCH
+ ? VAL_ATOMIC_EXPECTED
+ : READ_VALUE(&data[idx]));
}
}
}
if (flags & EVERY_OTHER_CHECK)
igt_assert_eq(data[prev_idx].data,
- READ_VALUE(&data[prev_idx]));
+ flags & ATOMIC_BATCH
+ ? VAL_ATOMIC_EXPECTED
+ : READ_VALUE(&data[prev_idx]));
}
}
@@ -1442,6 +1756,11 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
gem_close(fd, bo);
}
+ if (flags & MADVISE_MULTI_VMA) {
+ data = original_data;
+ original_data = NULL;
+ }
+
if (flags & NEW) {
if (flags & MMAP) {
if (flags & FREE)
@@ -1520,6 +1839,7 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
pf_count, pf_count_after);
}
+cleanup:
if (bo) {
sync[0].addr = to_user_pointer(bind_ufence);
__xe_vm_bind_assert(fd, vm, 0,
@@ -1815,7 +2135,7 @@ test_compute(int fd, struct drm_xe_engine_class_instance *eci, size_t size)
struct section {
const char *name;
- unsigned int flags;
+ unsigned long long flags;
};
igt_main
@@ -1921,6 +2241,32 @@ igt_main
{ "malloc-mix-bo", MIX_BO_ALLOC },
{ NULL },
};
+ const struct section msections[] = {
+ { "atomic-inc", MADVISE_OP | MADVISE_ATOMIC_DEVICE | ATOMIC_BATCH },
+ { "preffered-loc-sram-migrate-pages",
+ MADVISE_OP | MADVISE_SWIZZLE | MIGRATE_ALL_PAGES | ATOMIC_BATCH },
+ { "preffered-loc-atomic-vram",
+ MADVISE_OP | PREFERRED_LOC_ATOMIC_DEVICE | ATOMIC_BATCH },
+ { "preffered-loc-atomic-gl",
+ MADVISE_OP | PREFERRED_LOC_ATOMIC_GL | ATOMIC_BATCH },
+ { "preffered-loc-atomic-cpu",
+ MADVISE_OP | PREFERRED_LOC_ATOMIC_CPU | ATOMIC_BATCH },
+ { "preffered-loc-atomic-und",
+ MADVISE_OP | PREFERRED_LOC_ATOMIC_UND | ATOMIC_BATCH },
+ { "multi-vma",
+ MADVISE_OP | MADVISE_MULTI_VMA | ATOMIC_BATCH },
+ { "split-vma",
+ MADVISE_OP | MADVISE_SPLIT_VMA | ATOMIC_BATCH },
+ { "atomic-vma",
+ MADVISE_OP | MADVISE_ATOMIC_VMA | ATOMIC_BATCH },
+ { "split-vma-with-mapping",
+ MADVISE_OP | PREFETCH | PREFETCH_SPLIT_VMA | ATOMIC_BATCH },
+ { "range-invalidate-change-attr",
+ MADVISE_OP | PREFETCH | PREFETCH_CHANGE_ATTR | ATOMIC_BATCH },
+ { "no-range-invalidate-same-attr",
+ MADVISE_OP | PREFETCH | PREFETCH_SAME_ATTR | ATOMIC_BATCH },
+ { NULL },
+ };
int fd;
igt_fixture {
@@ -2117,6 +2463,14 @@ igt_main
processes_evict(fd, SZ_8M, SZ_1M, s->flags);
}
+ for (const struct section *s = msections; s->name; s++) {
+ igt_subtest_f("madvise-%s", s->name) {
+ xe_for_each_engine(fd, hwe)
+ test_exec(fd, hwe, 1, 1, SZ_64K, 0, 0, NULL,
+ NULL, s->flags);
+ }
+ }
+
igt_subtest("compute")
xe_for_each_engine(fd, hwe)
test_compute(fd, hwe, SZ_2M);
--
2.43.0
* ✓ Xe.CI.BAT: success for Madvise Tests in IGT (rev8)
2025-09-02 16:30 [PATCH i-g-t v8 0/5] Madvise Tests in IGT nishit.sharma
` (4 preceding siblings ...)
2025-09-02 16:30 ` [PATCH i-g-t v8 5/5] tests/intel/xe_exec_system_allocator: Add atomic_batch test in IGT nishit.sharma
@ 2025-09-03 3:58 ` Patchwork
2025-09-03 4:00 ` ✗ i915.CI.BAT: failure " Patchwork
2025-09-03 11:07 ` ✗ Xe.CI.Full: " Patchwork
7 siblings, 0 replies; 10+ messages in thread
From: Patchwork @ 2025-09-03 3:58 UTC (permalink / raw)
To: nishit.sharma; +Cc: igt-dev
[-- Attachment #1: Type: text/plain, Size: 1827 bytes --]
== Series Details ==
Series: Madvise Tests in IGT (rev8)
URL : https://patchwork.freedesktop.org/series/153335/
State : success
== Summary ==
CI Bug Log - changes from XEIGT_8520_BAT -> XEIGTPW_13679_BAT
====================================================
Summary
-------
**SUCCESS**
No regressions found.
Participating hosts (11 -> 11)
------------------------------
No changes in participating hosts
Known issues
------------
Here are the changes found in XEIGTPW_13679_BAT that come from known issues:
### IGT changes ###
#### Possible fixes ####
* igt@xe_vm@bind-execqueues-independent:
- {bat-ptl-vm}: [FAIL][1] ([Intel XE#5783]) -> [PASS][2]
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/bat-ptl-vm/igt@xe_vm@bind-execqueues-independent.html
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/bat-ptl-vm/igt@xe_vm@bind-execqueues-independent.html
- {bat-ptl-2}: [FAIL][3] ([Intel XE#5783]) -> [PASS][4]
[3]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/bat-ptl-2/igt@xe_vm@bind-execqueues-independent.html
[4]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/bat-ptl-2/igt@xe_vm@bind-execqueues-independent.html
{name}: This element is suppressed. This means it is ignored when computing
the status of the difference (SUCCESS, WARNING, or FAILURE).
[Intel XE#5783]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5783
Build changes
-------------
* IGT: IGT_8520 -> IGTPW_13679
IGTPW_13679: 6a9d8eb7048d8ece8bfeba6132a6d59489c667c8 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
IGT_8520: 8520
xe-3668-97a9560f0f1dd0a4472e669ff2188d0a8293b375: 97a9560f0f1dd0a4472e669ff2188d0a8293b375
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/index.html
* ✗ i915.CI.BAT: failure for Madvise Tests in IGT (rev8)
2025-09-02 16:30 [PATCH i-g-t v8 0/5] Madvise Tests in IGT nishit.sharma
` (5 preceding siblings ...)
2025-09-03 3:58 ` ✓ Xe.CI.BAT: success for Madvise Tests in IGT (rev8) Patchwork
@ 2025-09-03 4:00 ` Patchwork
2025-09-03 11:07 ` ✗ Xe.CI.Full: " Patchwork
7 siblings, 0 replies; 10+ messages in thread
From: Patchwork @ 2025-09-03 4:00 UTC (permalink / raw)
To: nishit.sharma; +Cc: igt-dev
[-- Attachment #1: Type: text/plain, Size: 3978 bytes --]
== Series Details ==
Series: Madvise Tests in IGT (rev8)
URL : https://patchwork.freedesktop.org/series/153335/
State : failure
== Summary ==
CI Bug Log - changes from IGT_8520 -> IGTPW_13679
====================================================
Summary
-------
**FAILURE**
Serious unknown changes coming with IGTPW_13679 absolutely need to be
verified manually.
If you think the reported changes have nothing to do with the changes
introduced in IGTPW_13679, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
to document this new failure mode, which will reduce false positives in CI.
External URL: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_13679/index.html
Participating hosts (44 -> 42)
------------------------------
Missing (2): fi-snb-2520m bat-adls-6
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in IGTPW_13679:
### IGT changes ###
#### Possible regressions ####
* igt@i915_selftest@live:
- bat-jsl-1: [PASS][1] -> [DMESG-FAIL][2] +1 other test dmesg-fail
[1]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_8520/bat-jsl-1/igt@i915_selftest@live.html
[2]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_13679/bat-jsl-1/igt@i915_selftest@live.html
Known issues
------------
Here are the changes found in IGTPW_13679 that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@i915_selftest@live:
- bat-mtlp-8: [PASS][3] -> [DMESG-FAIL][4] ([i915#12061]) +1 other test dmesg-fail
[3]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_8520/bat-mtlp-8/igt@i915_selftest@live.html
[4]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_13679/bat-mtlp-8/igt@i915_selftest@live.html
* igt@i915_selftest@live@workarounds:
- bat-arls-5: [PASS][5] -> [DMESG-FAIL][6] ([i915#12061]) +1 other test dmesg-fail
[5]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_8520/bat-arls-5/igt@i915_selftest@live@workarounds.html
[6]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_13679/bat-arls-5/igt@i915_selftest@live@workarounds.html
#### Possible fixes ####
* igt@i915_selftest@live:
- bat-dg2-8: [DMESG-FAIL][7] ([i915#12061]) -> [PASS][8] +1 other test pass
[7]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_8520/bat-dg2-8/igt@i915_selftest@live.html
[8]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_13679/bat-dg2-8/igt@i915_selftest@live.html
* igt@i915_selftest@live@workarounds:
- bat-dg2-11: [DMESG-FAIL][9] ([i915#12061]) -> [PASS][10] +1 other test pass
[9]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_8520/bat-dg2-11/igt@i915_selftest@live@workarounds.html
[10]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_13679/bat-dg2-11/igt@i915_selftest@live@workarounds.html
- bat-dg2-14: [DMESG-FAIL][11] ([i915#12061]) -> [PASS][12] +1 other test pass
[11]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_8520/bat-dg2-14/igt@i915_selftest@live@workarounds.html
[12]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_13679/bat-dg2-14/igt@i915_selftest@live@workarounds.html
- bat-arls-6: [DMESG-FAIL][13] ([i915#12061]) -> [PASS][14] +1 other test pass
[13]: https://intel-gfx-ci.01.org/tree/drm-tip/IGT_8520/bat-arls-6/igt@i915_selftest@live@workarounds.html
[14]: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_13679/bat-arls-6/igt@i915_selftest@live@workarounds.html
[i915#12061]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12061
Build changes
-------------
* CI: CI-20190529 -> None
* IGT: IGT_8520 -> IGTPW_13679
CI-20190529: 20190529
CI_DRM_17119: 97a9560f0f1dd0a4472e669ff2188d0a8293b375 @ git://anongit.freedesktop.org/gfx-ci/linux
IGTPW_13679: 6a9d8eb7048d8ece8bfeba6132a6d59489c667c8 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
IGT_8520: 8520
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/IGTPW_13679/index.html
* Re: [PATCH i-g-t v8 4/5] tests/intel/xe_exec_system_allocator: Add madvise-swizzle test
2025-09-02 16:30 ` [PATCH i-g-t v8 4/5] tests/intel/xe_exec_system_allocator: Add madvise-swizzle test nishit.sharma
@ 2025-09-03 5:50 ` Matthew Brost
0 siblings, 0 replies; 10+ messages in thread
From: Matthew Brost @ 2025-09-03 5:50 UTC (permalink / raw)
To: nishit.sharma; +Cc: igt-dev, pravalika.gurram, himal.prasad.ghimiray
On Tue, Sep 02, 2025 at 04:30:50PM +0000, nishit.sharma@intel.com wrote:
> From: Nishit Sharma <nishit.sharma@intel.com>
>
> madvise-swizzle test introduced which is called in combination with other
> tests as well. In this test the buffer object preferred location is
> system memory.
>
> Signed-off-by: Nishit Sharma <nishit.sharma@intel.com>
> ---
> tests/intel/xe_exec_system_allocator.c | 39 ++++++++++++++++++++++++++
> 1 file changed, 39 insertions(+)
>
> diff --git a/tests/intel/xe_exec_system_allocator.c b/tests/intel/xe_exec_system_allocator.c
> index e7f3d423a..16f907ab5 100644
> --- a/tests/intel/xe_exec_system_allocator.c
> +++ b/tests/intel/xe_exec_system_allocator.c
> @@ -777,6 +777,8 @@ partial(int fd, struct drm_xe_engine_class_instance *eci, unsigned int flags)
> #define PROCESSES (0x1 << 24)
> #define PREFETCH_BENCHMARK (0x1 << 25)
> #define PREFETCH_SYS_BENCHMARK (0x1 << 26)
> +#define MADVISE_SWIZZLE (0x1 << 27)
> +#define MADVISE_OP (0x1 << 28)
>
> #define N_MULTI_FAULT 4
>
> @@ -885,7 +887,9 @@ partial(int fd, struct drm_xe_engine_class_instance *eci, unsigned int flags)
> * arg[1]:
> *
> * @malloc: malloc single buffer for all execs, issue a command which will trigger multiple faults
> + * @malloc-madvise: malloc single buffer for all execs, issue a command which will trigger multiple faults, perfoems madvise operation
> * @malloc-prefetch: malloc single buffer for all execs, prefetch buffer before each exec
> + * @malloc-prefetch-madvise: malloc single buffer for all execs, prefetch buffer before each exec, performs madvise operation
> * @malloc-multi-fault: malloc single buffer for all execs
> * @malloc-fork-read: malloc single buffer for all execs, fork a process to read test output
> * @malloc-fork-read-after: malloc single buffer for all execs, fork a process to read test output, check again after fork returns in parent
> @@ -897,6 +901,7 @@ partial(int fd, struct drm_xe_engine_class_instance *eci, unsigned int flags)
> * @mmap: mmap single buffer for all execs
> * @mmap-prefetch: mmap single buffer for all execs, prefetch buffer before each exec
> * @mmap-remap: mmap and mremap a buffer for all execs
> + * @mmap-remap-madvise: mmap and mremap a buffer for all execs, performs madvise operations
> * @mmap-remap-dontunmap: mmap and mremap a buffer with dontunmap flag for all execs
> * @mmap-remap-ro: mmap and mremap a read-only buffer for all execs
> * @mmap-remap-ro-dontunmap: mmap and mremap a read-only buffer with dontunmap flag for all execs
> @@ -916,8 +921,10 @@ partial(int fd, struct drm_xe_engine_class_instance *eci, unsigned int flags)
> * @mmap-file-mlock: mmap and mlock single buffer, with file backing, for all execs
> * @mmap-race: mmap single buffer for all execs with race between cpu and gpu access
> * @free: malloc and free buffer for each exec
> + * @free-madvise: malloc and free buffer for each exec, performs madvise operation
> * @free-race: malloc and free buffer for each exec with race between cpu and gpu access
> * @new: malloc a new buffer for each exec
> + * @new-madvise: malloc a new buffer for each exec, performs madvise operation
> * @new-prefetch: malloc a new buffer and prefetch for each exec
> * @new-race: malloc a new buffer for each exec with race between cpu and gpu access
> * @new-bo-map: malloc a new buffer or map BO for each exec
> @@ -999,6 +1006,29 @@ static void igt_require_hugepages(void)
> "No huge pages available!\n");
> }
>
> +static void
> +xe_vm_parse_execute_madvise(int fd, uint32_t vm, struct test_exec_data * data,
> + size_t bo_size, struct drm_xe_engine_class_instance *eci,
> + uint64_t addr, unsigned int flags)
> +{
> + if (flags & MADVISE_SWIZZLE) {
> + for (int i_loc = 0; i_loc < 2; i_loc++) {
> + uint64_t preferred_loc;
> +
> + if (i_loc % 2 == 0)
> + preferred_loc = DRM_XE_PREFERRED_LOC_DEFAULT_SYSTEM;
> + else
> + preferred_loc = DRM_XE_PREFERRED_LOC_DEFAULT_DEVICE;
> +
> + xe_vm_madvise(fd, vm, to_user_pointer(data), bo_size, 0,
> + DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC,
> + preferred_loc,
> + 0);
> + }
> + }
> +
> +}
This isn't quite what I suggested in the previous rev. I was suggesting
that, inside the 'for (i = 0; i < n_execs; i++) {' loop, you call
madvise, toggling the preferred location between SYSTEM and DEVICE on
each pass of the loop.
Matt
> +
> static void
> test_exec(int fd, struct drm_xe_engine_class_instance *eci,
> int n_exec_queues, int n_execs, size_t bo_size,
> @@ -1134,6 +1164,9 @@ test_exec(int fd, struct drm_xe_engine_class_instance *eci,
>
> addr = to_user_pointer(data);
>
> + if (flags & MADVISE_OP)
> + xe_vm_parse_execute_madvise(fd, vm, data, bo_size, eci, addr, flags);
> +
> if (flags & BO_UNMAP) {
> bo_flags = DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM;
> bo = xe_bo_create(fd, vm, bo_size,
> @@ -1790,7 +1823,9 @@ igt_main
> struct drm_xe_engine_class_instance *hwe;
> const struct section sections[] = {
> { "malloc", 0 },
> + { "malloc-madvise", MADVISE_OP | MADVISE_SWIZZLE },
> { "malloc-prefetch", PREFETCH },
> + { "malloc-prefetch-madvise", PREFETCH | MADVISE_OP | MADVISE_SWIZZLE },
> { "malloc-multi-fault", MULTI_FAULT },
> { "malloc-fork-read", FORK_READ },
> { "malloc-fork-read-after", FORK_READ | FORK_READ_AFTER },
> @@ -1802,6 +1837,7 @@ igt_main
> { "mmap", MMAP },
> { "mmap-prefetch", MMAP | PREFETCH },
> { "mmap-remap", MMAP | MREMAP },
> + { "mmap-remap-madvise", MMAP | MREMAP | MADVISE_OP | MADVISE_SWIZZLE },
> { "mmap-remap-dontunmap", MMAP | MREMAP | DONTUNMAP },
> { "mmap-remap-ro", MMAP | MREMAP | READ_ONLY_REMAP },
> { "mmap-remap-ro-dontunmap", MMAP | MREMAP | DONTUNMAP |
> @@ -1828,13 +1864,16 @@ igt_main
> { "mmap-file-mlock", MMAP | LOCK | FILE_BACKED },
> { "mmap-race", MMAP | RACE },
> { "free", NEW | FREE },
> + { "free-madvise", NEW | FREE | MADVISE_OP | MADVISE_SWIZZLE },
> { "free-race", NEW | FREE | RACE },
> { "new", NEW },
> + { "new-madvise", NEW | MADVISE_OP | MADVISE_SWIZZLE },
> { "new-prefetch", NEW | PREFETCH },
> { "new-race", NEW | RACE },
> { "new-bo-map", NEW | BO_MAP },
> { "new-busy", NEW | BUSY },
> { "mmap-free", MMAP | NEW | FREE },
> +	{ "mmap-free-madvise", MMAP | NEW | FREE | MADVISE_OP | MADVISE_SWIZZLE },
> { "mmap-free-huge", MMAP | NEW | FREE | HUGE_PAGE },
> { "mmap-free-race", MMAP | NEW | FREE | RACE },
> { "mmap-new", MMAP | NEW },
> --
> 2.43.0
>
^ permalink raw reply [flat|nested] 10+ messages in thread
* ✗ Xe.CI.Full: failure for Madvise Tests in IGT (rev8)
2025-09-02 16:30 [PATCH i-g-t v8 0/5] Madvise Tests in IGT nishit.sharma
` (6 preceding siblings ...)
2025-09-03 4:00 ` ✗ i915.CI.BAT: failure " Patchwork
@ 2025-09-03 11:07 ` Patchwork
7 siblings, 0 replies; 10+ messages in thread
From: Patchwork @ 2025-09-03 11:07 UTC (permalink / raw)
To: nishit.sharma; +Cc: igt-dev
[-- Attachment #1: Type: text/plain, Size: 44264 bytes --]
== Series Details ==
Series: Madvise Tests in IGT (rev8)
URL : https://patchwork.freedesktop.org/series/153335/
State : failure
== Summary ==
CI Bug Log - changes from XEIGT_8520_FULL -> XEIGTPW_13679_FULL
====================================================
Summary
-------
**WARNING**
Minor unknown changes coming with XEIGTPW_13679_FULL need to be verified
manually.
If you think the reported changes have nothing to do with the changes
introduced in XEIGTPW_13679_FULL, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
to document this new failure mode, which will reduce false positives in CI.
Participating hosts (4 -> 3)
------------------------------
Missing (1): shard-adlp
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in XEIGTPW_13679_FULL:
### IGT changes ###
#### Warnings ####
* igt@xe_exec_system_allocator@threads-many-large-mmap-free:
- shard-dg2-set2: [SKIP][1] ([Intel XE#4915]) -> [SKIP][2]
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-436/igt@xe_exec_system_allocator@threads-many-large-mmap-free.html
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-464/igt@xe_exec_system_allocator@threads-many-large-mmap-free.html
New tests
---------
New tests have been introduced between XEIGT_8520_FULL and XEIGTPW_13679_FULL:
### New IGT tests (28) ###
* igt@xe_exec_system_allocator@madvise-atomic-vma:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@xe_exec_system_allocator@many-64k-malloc-madvise:
- Statuses : 2 pass(s) 1 skip(s)
- Exec time: [0.0, 1.70] s
* igt@xe_exec_system_allocator@many-execqueues-malloc-madvise:
- Statuses :
- Exec time: [None] s
* igt@xe_exec_system_allocator@many-execqueues-new-madvise:
- Statuses :
- Exec time: [None] s
* igt@xe_exec_system_allocator@many-large-execqueues-malloc-madvise:
- Statuses : 2 pass(s) 1 skip(s)
- Exec time: [0.0, 6.44] s
* igt@xe_exec_system_allocator@many-large-new-madvise:
- Statuses :
- Exec time: [None] s
* igt@xe_exec_system_allocator@many-malloc-madvise:
- Statuses : 2 pass(s) 1 skip(s)
- Exec time: [0.0, 1.56] s
* igt@xe_exec_system_allocator@many-stride-free-madvise:
- Statuses : 2 pass(s) 1 skip(s)
- Exec time: [0.0, 4.71] s
* igt@xe_exec_system_allocator@many-stride-malloc-prefetch-madvise:
- Statuses :
- Exec time: [None] s
* igt@xe_exec_system_allocator@once-large-malloc-madvise:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@xe_exec_system_allocator@process-many-execqueues-free-madvise:
- Statuses : 2 pass(s) 1 skip(s)
- Exec time: [0.0, 0.33] s
* igt@xe_exec_system_allocator@process-many-execqueues-malloc-prefetch-madvise:
- Statuses : 2 pass(s) 1 skip(s)
- Exec time: [0.0, 0.27] s
* igt@xe_exec_system_allocator@process-many-large-malloc-prefetch-madvise:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@xe_exec_system_allocator@process-many-stride-malloc-madvise:
- Statuses : 2 pass(s) 1 skip(s)
- Exec time: [0.0, 0.58] s
* igt@xe_exec_system_allocator@process-many-stride-mmap-remap-madvise:
- Statuses : 2 pass(s) 1 skip(s)
- Exec time: [0.0, 0.57] s
* igt@xe_exec_system_allocator@process-many-stride-new-madvise:
- Statuses : 2 pass(s) 1 skip(s)
- Exec time: [0.0, 1.14] s
* igt@xe_exec_system_allocator@threads-many-execqueues-malloc-prefetch-madvise:
- Statuses : 2 pass(s) 1 skip(s)
- Exec time: [0.0, 0.26] s
* igt@xe_exec_system_allocator@threads-many-execqueues-mmap-remap-madvise:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@xe_exec_system_allocator@threads-many-execqueues-new-madvise:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@xe_exec_system_allocator@threads-many-large-execqueues-free-madvise:
- Statuses : 2 pass(s) 1 skip(s)
- Exec time: [0.0, 2.09] s
* igt@xe_exec_system_allocator@threads-many-large-new-madvise:
- Statuses :
- Exec time: [None] s
* igt@xe_exec_system_allocator@threads-many-stride-malloc-madvise:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@xe_exec_system_allocator@threads-shared-vm-many-execqueues-malloc-madvise:
- Statuses : 2 pass(s) 1 skip(s)
- Exec time: [0.0, 0.34] s
* igt@xe_exec_system_allocator@threads-shared-vm-many-mmap-remap-madvise:
- Statuses : 2 pass(s) 1 skip(s)
- Exec time: [0.0, 0.65] s
* igt@xe_exec_system_allocator@threads-shared-vm-many-new-madvise:
- Statuses :
- Exec time: [None] s
* igt@xe_exec_system_allocator@twice-large-free-madvise:
- Statuses : 1 skip(s)
- Exec time: [0.0] s
* igt@xe_exec_system_allocator@twice-large-new-madvise:
- Statuses : 2 pass(s) 1 skip(s)
- Exec time: [0.0, 0.10] s
* igt@xe_exec_system_allocator@twice-mmap-remap-madvise:
- Statuses :
- Exec time: [None] s
Known issues
------------
Here are the changes found in XEIGTPW_13679_FULL that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@kms_big_fb@x-tiled-8bpp-rotate-270:
- shard-dg2-set2: NOTRUN -> [SKIP][3] ([Intel XE#316])
[3]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-464/igt@kms_big_fb@x-tiled-8bpp-rotate-270.html
* igt@kms_big_fb@yf-tiled-8bpp-rotate-270:
- shard-dg2-set2: NOTRUN -> [SKIP][4] ([Intel XE#1124])
[4]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-435/igt@kms_big_fb@yf-tiled-8bpp-rotate-270.html
- shard-lnl: NOTRUN -> [SKIP][5] ([Intel XE#1124])
[5]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-8/igt@kms_big_fb@yf-tiled-8bpp-rotate-270.html
* igt@kms_bw@linear-tiling-1-displays-2560x1440p:
- shard-dg2-set2: NOTRUN -> [SKIP][6] ([Intel XE#367])
[6]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-464/igt@kms_bw@linear-tiling-1-displays-2560x1440p.html
* igt@kms_bw@linear-tiling-1-displays-3840x2160p:
- shard-bmg: NOTRUN -> [SKIP][7] ([Intel XE#367])
[7]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-7/igt@kms_bw@linear-tiling-1-displays-3840x2160p.html
* igt@kms_ccs@bad-aux-stride-4-tiled-mtl-mc-ccs@pipe-a-hdmi-a-6:
- shard-dg2-set2: NOTRUN -> [SKIP][8] ([Intel XE#787]) +55 other tests skip
[8]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-464/igt@kms_ccs@bad-aux-stride-4-tiled-mtl-mc-ccs@pipe-a-hdmi-a-6.html
* igt@kms_ccs@bad-rotation-90-4-tiled-lnl-ccs@pipe-b-hdmi-a-3:
- shard-bmg: NOTRUN -> [SKIP][9] ([Intel XE#2652] / [Intel XE#787]) +8 other tests skip
[9]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-2/igt@kms_ccs@bad-rotation-90-4-tiled-lnl-ccs@pipe-b-hdmi-a-3.html
* igt@kms_ccs@ccs-on-another-bo-4-tiled-mtl-mc-ccs:
- shard-bmg: NOTRUN -> [SKIP][10] ([Intel XE#2887])
[10]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-2/igt@kms_ccs@ccs-on-another-bo-4-tiled-mtl-mc-ccs.html
* igt@kms_ccs@crc-primary-basic-y-tiled-gen12-mc-ccs@pipe-d-dp-2:
- shard-dg2-set2: NOTRUN -> [SKIP][11] ([Intel XE#455] / [Intel XE#787]) +7 other tests skip
[11]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-432/igt@kms_ccs@crc-primary-basic-y-tiled-gen12-mc-ccs@pipe-d-dp-2.html
* igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs:
- shard-dg2-set2: [PASS][12] -> [INCOMPLETE][13] ([Intel XE#1727] / [Intel XE#3113] / [Intel XE#3124] / [Intel XE#4345]) +1 other test incomplete
[12]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-433/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs.html
[13]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-463/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs.html
* igt@kms_cursor_crc@cursor-offscreen-64x21:
- shard-bmg: NOTRUN -> [SKIP][14] ([Intel XE#2320])
[14]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-6/igt@kms_cursor_crc@cursor-offscreen-64x21.html
* igt@kms_cursor_crc@cursor-random-512x512:
- shard-bmg: NOTRUN -> [SKIP][15] ([Intel XE#2321])
[15]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-7/igt@kms_cursor_crc@cursor-random-512x512.html
* igt@kms_cursor_legacy@cursorb-vs-flipa-atomic:
- shard-bmg: [PASS][16] -> [SKIP][17] ([Intel XE#2291])
[16]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-4/igt@kms_cursor_legacy@cursorb-vs-flipa-atomic.html
[17]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-6/igt@kms_cursor_legacy@cursorb-vs-flipa-atomic.html
* igt@kms_flip@2x-absolute-wf_vblank-interruptible:
- shard-bmg: [PASS][18] -> [SKIP][19] ([Intel XE#2316])
[18]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-4/igt@kms_flip@2x-absolute-wf_vblank-interruptible.html
[19]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-6/igt@kms_flip@2x-absolute-wf_vblank-interruptible.html
* igt@kms_flip@2x-dpms-vs-vblank-race-interruptible:
- shard-bmg: NOTRUN -> [SKIP][20] ([Intel XE#2316])
[20]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-6/igt@kms_flip@2x-dpms-vs-vblank-race-interruptible.html
* igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-16bpp-yftile-downscaling:
- shard-bmg: NOTRUN -> [SKIP][21] ([Intel XE#2293] / [Intel XE#2380])
[21]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-7/igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-16bpp-yftile-downscaling.html
* igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-16bpp-yftile-downscaling@pipe-a-valid-mode:
- shard-bmg: NOTRUN -> [SKIP][22] ([Intel XE#2293])
[22]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-7/igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-16bpp-yftile-downscaling@pipe-a-valid-mode.html
* igt@kms_frontbuffer_tracking@drrs-1p-primscrn-indfb-msflip-blt:
- shard-dg2-set2: NOTRUN -> [SKIP][23] ([Intel XE#651]) +1 other test skip
[23]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-464/igt@kms_frontbuffer_tracking@drrs-1p-primscrn-indfb-msflip-blt.html
* igt@kms_frontbuffer_tracking@fbc-shrfb-scaledprimary:
- shard-bmg: NOTRUN -> [SKIP][24] ([Intel XE#5390]) +1 other test skip
[24]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-2/igt@kms_frontbuffer_tracking@fbc-shrfb-scaledprimary.html
* igt@kms_frontbuffer_tracking@fbcdrrs-1p-offscren-pri-shrfb-draw-mmap-wc:
- shard-bmg: NOTRUN -> [SKIP][25] ([Intel XE#2311])
[25]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-2/igt@kms_frontbuffer_tracking@fbcdrrs-1p-offscren-pri-shrfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-shrfb-pgflip-blt:
- shard-bmg: NOTRUN -> [SKIP][26] ([Intel XE#2313]) +3 other tests skip
[26]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-2/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-shrfb-pgflip-blt.html
* igt@kms_frontbuffer_tracking@psr-rgb565-draw-blt:
- shard-dg2-set2: NOTRUN -> [SKIP][27] ([Intel XE#653]) +1 other test skip
[27]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-464/igt@kms_frontbuffer_tracking@psr-rgb565-draw-blt.html
* igt@kms_plane_scaling@plane-downscale-factor-0-5-with-modifiers:
- shard-lnl: NOTRUN -> [SKIP][28] ([Intel XE#2763]) +3 other tests skip
[28]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-8/igt@kms_plane_scaling@plane-downscale-factor-0-5-with-modifiers.html
* igt@kms_pm_dc@deep-pkgc:
- shard-dg2-set2: NOTRUN -> [SKIP][29] ([Intel XE#908])
[29]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-464/igt@kms_pm_dc@deep-pkgc.html
* igt@kms_pm_rpm@i2c:
- shard-dg2-set2: [PASS][30] -> [FAIL][31] ([Intel XE#5099])
[30]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-432/igt@kms_pm_rpm@i2c.html
[31]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-432/igt@kms_pm_rpm@i2c.html
* igt@kms_psr2_sf@psr2-plane-move-sf-dmg-area:
- shard-bmg: NOTRUN -> [SKIP][32] ([Intel XE#1406] / [Intel XE#1489])
[32]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-2/igt@kms_psr2_sf@psr2-plane-move-sf-dmg-area.html
- shard-dg2-set2: NOTRUN -> [SKIP][33] ([Intel XE#1406] / [Intel XE#1489]) +1 other test skip
[33]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-432/igt@kms_psr2_sf@psr2-plane-move-sf-dmg-area.html
* igt@kms_psr@fbc-pr-sprite-plane-move:
- shard-bmg: NOTRUN -> [SKIP][34] ([Intel XE#1406] / [Intel XE#2234] / [Intel XE#2850])
[34]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-7/igt@kms_psr@fbc-pr-sprite-plane-move.html
* igt@xe_compute@ccs-mode-compute-kernel:
- shard-lnl: NOTRUN -> [SKIP][35] ([Intel XE#1447])
[35]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-3/igt@xe_compute@ccs-mode-compute-kernel.html
- shard-bmg: NOTRUN -> [FAIL][36] ([Intel XE#5963])
[36]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-6/igt@xe_compute@ccs-mode-compute-kernel.html
* igt@xe_copy_basic@mem-set-linear-0xfffe:
- shard-dg2-set2: NOTRUN -> [SKIP][37] ([Intel XE#1126])
[37]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-464/igt@xe_copy_basic@mem-set-linear-0xfffe.html
* igt@xe_eudebug@basic-vm-bind-discovery:
- shard-bmg: NOTRUN -> [SKIP][38] ([Intel XE#4837])
[38]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-6/igt@xe_eudebug@basic-vm-bind-discovery.html
* igt@xe_eudebug@basic-vm-bind-vm-destroy-discovery:
- shard-dg2-set2: NOTRUN -> [SKIP][39] ([Intel XE#4837]) +1 other test skip
[39]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-435/igt@xe_eudebug@basic-vm-bind-vm-destroy-discovery.html
* igt@xe_exec_basic@multigpu-no-exec-null-defer-mmap:
- shard-dg2-set2: [PASS][40] -> [SKIP][41] ([Intel XE#1392]) +1 other test skip
[40]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-436/igt@xe_exec_basic@multigpu-no-exec-null-defer-mmap.html
[41]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-432/igt@xe_exec_basic@multigpu-no-exec-null-defer-mmap.html
* igt@xe_exec_capture@reset:
- shard-dg2-set2: [PASS][42] -> [FAIL][43] ([Intel XE#5481])
[42]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-432/igt@xe_exec_capture@reset.html
[43]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-435/igt@xe_exec_capture@reset.html
* igt@xe_exec_mix_modes@exec-spinner-interrupted-dma-fence:
- shard-dg2-set2: NOTRUN -> [SKIP][44] ([Intel XE#2360])
[44]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-464/igt@xe_exec_mix_modes@exec-spinner-interrupted-dma-fence.html
* igt@xe_exec_system_allocator@threads-many-large-execqueues-mmap-free-huge:
- shard-lnl: NOTRUN -> [SKIP][45] ([Intel XE#4943])
[45]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-8/igt@xe_exec_system_allocator@threads-many-large-execqueues-mmap-free-huge.html
* igt@xe_exec_system_allocator@threads-many-large-execqueues-mmap-new-huge:
- shard-bmg: NOTRUN -> [SKIP][46] ([Intel XE#4943]) +1 other test skip
[46]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-6/igt@xe_exec_system_allocator@threads-many-large-execqueues-mmap-new-huge.html
* igt@xe_exec_system_allocator@twice-mmap-file-mlock:
- shard-dg2-set2: NOTRUN -> [SKIP][47] ([Intel XE#4915]) +57 other tests skip
[47]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-432/igt@xe_exec_system_allocator@twice-mmap-file-mlock.html
* igt@xe_pmu@gt-frequency:
- shard-dg2-set2: [PASS][48] -> [FAIL][49] ([Intel XE#4819]) +1 other test fail
[48]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-464/igt@xe_pmu@gt-frequency.html
[49]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-435/igt@xe_pmu@gt-frequency.html
* igt@xe_pxp@pxp-stale-bo-bind-post-termination-irq:
- shard-bmg: NOTRUN -> [SKIP][50] ([Intel XE#4733])
[50]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-2/igt@xe_pxp@pxp-stale-bo-bind-post-termination-irq.html
- shard-dg2-set2: NOTRUN -> [SKIP][51] ([Intel XE#4733])
[51]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-435/igt@xe_pxp@pxp-stale-bo-bind-post-termination-irq.html
* igt@xe_sriov_auto_provisioning@exclusive-ranges@numvfs-random:
- shard-bmg: [PASS][52] -> [FAIL][53] ([Intel XE#6006]) +1 other test fail
[52]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-7/igt@xe_sriov_auto_provisioning@exclusive-ranges@numvfs-random.html
[53]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-2/igt@xe_sriov_auto_provisioning@exclusive-ranges@numvfs-random.html
#### Possible fixes ####
* igt@kms_cursor_legacy@cursorb-vs-flipb-legacy:
- shard-bmg: [SKIP][54] ([Intel XE#2291]) -> [PASS][55] +1 other test pass
[54]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-6/igt@kms_cursor_legacy@cursorb-vs-flipb-legacy.html
[55]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-2/igt@kms_cursor_legacy@cursorb-vs-flipb-legacy.html
* igt@kms_flip@2x-blocking-absolute-wf_vblank:
- shard-bmg: [SKIP][56] ([Intel XE#2316]) -> [PASS][57]
[56]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-6/igt@kms_flip@2x-blocking-absolute-wf_vblank.html
[57]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-2/igt@kms_flip@2x-blocking-absolute-wf_vblank.html
* igt@kms_joiner@invalid-modeset-force-big-joiner:
- shard-bmg: [SKIP][58] ([Intel XE#3012]) -> [PASS][59]
[58]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-6/igt@kms_joiner@invalid-modeset-force-big-joiner.html
[59]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-2/igt@kms_joiner@invalid-modeset-force-big-joiner.html
* igt@kms_plane_scaling@intel-max-src-size:
- shard-bmg: [SKIP][60] ([Intel XE#2685] / [Intel XE#3307]) -> [PASS][61]
[60]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-2/igt@kms_plane_scaling@intel-max-src-size.html
[61]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-7/igt@kms_plane_scaling@intel-max-src-size.html
- shard-dg2-set2: [SKIP][62] ([Intel XE#455]) -> [PASS][63]
[62]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-434/igt@kms_plane_scaling@intel-max-src-size.html
[63]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-435/igt@kms_plane_scaling@intel-max-src-size.html
* igt@xe_exec_basic@multigpu-once-bindexecqueue-rebind:
- shard-dg2-set2: [SKIP][64] ([Intel XE#1392]) -> [PASS][65]
[64]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-432/igt@xe_exec_basic@multigpu-once-bindexecqueue-rebind.html
[65]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-464/igt@xe_exec_basic@multigpu-once-bindexecqueue-rebind.html
* igt@xe_exec_system_allocator@process-many-large-mmap-race:
- shard-bmg: [FAIL][66] ([Intel XE#4937]) -> [PASS][67] +1 other test pass
[66]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-5/igt@xe_exec_system_allocator@process-many-large-mmap-race.html
[67]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-7/igt@xe_exec_system_allocator@process-many-large-mmap-race.html
* igt@xe_module_load@load:
- shard-lnl: ([PASS][68], [PASS][69], [PASS][70], [PASS][71], [PASS][72], [PASS][73], [PASS][74], [PASS][75], [PASS][76], [PASS][77], [PASS][78], [PASS][79], [PASS][80], [PASS][81], [PASS][82], [PASS][83], [PASS][84], [PASS][85], [PASS][86], [PASS][87], [PASS][88], [SKIP][89], [PASS][90], [PASS][91], [PASS][92], [PASS][93]) ([Intel XE#378]) -> ([PASS][94], [PASS][95], [PASS][96], [PASS][97], [PASS][98], [PASS][99], [PASS][100], [PASS][101], [PASS][102], [PASS][103], [PASS][104], [PASS][105], [PASS][106], [PASS][107], [PASS][108], [PASS][109], [PASS][110], [PASS][111], [PASS][112], [PASS][113], [PASS][114], [PASS][115], [PASS][116], [PASS][117], [PASS][118])
[68]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-lnl-7/igt@xe_module_load@load.html
[69]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-lnl-1/igt@xe_module_load@load.html
[70]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-lnl-5/igt@xe_module_load@load.html
[71]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-lnl-5/igt@xe_module_load@load.html
[72]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-lnl-7/igt@xe_module_load@load.html
[73]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-lnl-2/igt@xe_module_load@load.html
[74]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-lnl-3/igt@xe_module_load@load.html
[75]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-lnl-7/igt@xe_module_load@load.html
[76]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-lnl-7/igt@xe_module_load@load.html
[77]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-lnl-2/igt@xe_module_load@load.html
[78]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-lnl-1/igt@xe_module_load@load.html
[79]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-lnl-8/igt@xe_module_load@load.html
[80]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-lnl-8/igt@xe_module_load@load.html
[81]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-lnl-1/igt@xe_module_load@load.html
[82]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-lnl-5/igt@xe_module_load@load.html
[83]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-lnl-4/igt@xe_module_load@load.html
[84]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-lnl-8/igt@xe_module_load@load.html
[85]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-lnl-1/igt@xe_module_load@load.html
[86]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-lnl-4/igt@xe_module_load@load.html
[87]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-lnl-4/igt@xe_module_load@load.html
[88]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-lnl-3/igt@xe_module_load@load.html
[89]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-lnl-4/igt@xe_module_load@load.html
[90]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-lnl-2/igt@xe_module_load@load.html
[91]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-lnl-8/igt@xe_module_load@load.html
[92]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-lnl-3/igt@xe_module_load@load.html
[93]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-lnl-4/igt@xe_module_load@load.html
[94]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-5/igt@xe_module_load@load.html
[95]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-2/igt@xe_module_load@load.html
[96]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-2/igt@xe_module_load@load.html
[97]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-2/igt@xe_module_load@load.html
[98]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-2/igt@xe_module_load@load.html
[99]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-5/igt@xe_module_load@load.html
[100]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-4/igt@xe_module_load@load.html
[101]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-4/igt@xe_module_load@load.html
[102]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-4/igt@xe_module_load@load.html
[103]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-4/igt@xe_module_load@load.html
[104]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-3/igt@xe_module_load@load.html
[105]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-3/igt@xe_module_load@load.html
[106]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-8/igt@xe_module_load@load.html
[107]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-5/igt@xe_module_load@load.html
[108]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-7/igt@xe_module_load@load.html
[109]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-7/igt@xe_module_load@load.html
[110]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-7/igt@xe_module_load@load.html
[111]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-1/igt@xe_module_load@load.html
[112]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-1/igt@xe_module_load@load.html
[113]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-3/igt@xe_module_load@load.html
[114]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-1/igt@xe_module_load@load.html
[115]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-8/igt@xe_module_load@load.html
[116]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-1/igt@xe_module_load@load.html
[117]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-8/igt@xe_module_load@load.html
[118]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-lnl-8/igt@xe_module_load@load.html
- shard-bmg: ([PASS][119], [PASS][120], [PASS][121], [PASS][122], [SKIP][123], [PASS][124], [PASS][125], [PASS][126], [PASS][127], [PASS][128], [PASS][129], [PASS][130], [PASS][131], [PASS][132], [PASS][133], [PASS][134], [PASS][135], [PASS][136], [PASS][137], [PASS][138], [PASS][139], [PASS][140], [PASS][141], [PASS][142], [PASS][143], [PASS][144]) ([Intel XE#2457]) -> ([PASS][145], [PASS][146], [PASS][147], [PASS][148], [PASS][149], [PASS][150], [PASS][151], [PASS][152], [PASS][153], [PASS][154], [PASS][155], [PASS][156], [PASS][157], [PASS][158], [PASS][159], [PASS][160], [PASS][161], [PASS][162], [PASS][163], [PASS][164], [PASS][165], [PASS][166], [PASS][167], [PASS][168], [PASS][169])
[119]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-4/igt@xe_module_load@load.html
[120]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-6/igt@xe_module_load@load.html
[121]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-7/igt@xe_module_load@load.html
[122]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-6/igt@xe_module_load@load.html
[123]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-2/igt@xe_module_load@load.html
[124]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-8/igt@xe_module_load@load.html
[125]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-4/igt@xe_module_load@load.html
[126]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-4/igt@xe_module_load@load.html
[127]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-7/igt@xe_module_load@load.html
[128]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-1/igt@xe_module_load@load.html
[129]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-1/igt@xe_module_load@load.html
[130]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-8/igt@xe_module_load@load.html
[131]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-8/igt@xe_module_load@load.html
[132]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-2/igt@xe_module_load@load.html
[133]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-2/igt@xe_module_load@load.html
[134]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-5/igt@xe_module_load@load.html
[135]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-5/igt@xe_module_load@load.html
[136]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-4/igt@xe_module_load@load.html
[137]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-6/igt@xe_module_load@load.html
[138]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-1/igt@xe_module_load@load.html
[139]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-8/igt@xe_module_load@load.html
[140]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-7/igt@xe_module_load@load.html
[141]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-2/igt@xe_module_load@load.html
[142]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-3/igt@xe_module_load@load.html
[143]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-3/igt@xe_module_load@load.html
[144]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-6/igt@xe_module_load@load.html
[145]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-3/igt@xe_module_load@load.html
[146]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-2/igt@xe_module_load@load.html
[147]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-2/igt@xe_module_load@load.html
[148]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-5/igt@xe_module_load@load.html
[149]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-5/igt@xe_module_load@load.html
[150]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-7/igt@xe_module_load@load.html
[151]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-1/igt@xe_module_load@load.html
[152]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-1/igt@xe_module_load@load.html
[153]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-1/igt@xe_module_load@load.html
[154]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-8/igt@xe_module_load@load.html
[155]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-8/igt@xe_module_load@load.html
[156]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-8/igt@xe_module_load@load.html
[157]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-2/igt@xe_module_load@load.html
[158]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-4/igt@xe_module_load@load.html
[159]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-4/igt@xe_module_load@load.html
[160]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-4/igt@xe_module_load@load.html
[161]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-3/igt@xe_module_load@load.html
[162]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-3/igt@xe_module_load@load.html
[163]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-6/igt@xe_module_load@load.html
[164]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-6/igt@xe_module_load@load.html
[165]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-7/igt@xe_module_load@load.html
[166]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-7/igt@xe_module_load@load.html
[167]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-5/igt@xe_module_load@load.html
[168]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-6/igt@xe_module_load@load.html
[169]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-6/igt@xe_module_load@load.html
- shard-dg2-set2: ([PASS][170], [PASS][171], [PASS][172], [PASS][173], [PASS][174], [PASS][175], [PASS][176], [PASS][177], [PASS][178], [PASS][179], [PASS][180], [PASS][181], [PASS][182], [PASS][183], [PASS][184], [PASS][185], [PASS][186], [PASS][187], [PASS][188], [PASS][189], [PASS][190], [PASS][191], [PASS][192], [SKIP][193], [PASS][194], [PASS][195]) ([Intel XE#378]) -> ([PASS][196], [PASS][197], [PASS][198], [PASS][199], [PASS][200], [PASS][201], [PASS][202], [PASS][203], [PASS][204], [PASS][205], [PASS][206], [PASS][207], [PASS][208], [PASS][209], [PASS][210], [PASS][211], [PASS][212], [PASS][213], [PASS][214], [PASS][215], [PASS][216], [PASS][217], [PASS][218], [PASS][219])
[170]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-466/igt@xe_module_load@load.html
[171]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-432/igt@xe_module_load@load.html
[172]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-432/igt@xe_module_load@load.html
[173]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-435/igt@xe_module_load@load.html
[174]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-435/igt@xe_module_load@load.html
[175]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-435/igt@xe_module_load@load.html
[176]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-463/igt@xe_module_load@load.html
[177]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-463/igt@xe_module_load@load.html
[178]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-433/igt@xe_module_load@load.html
[179]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-466/igt@xe_module_load@load.html
[180]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-464/igt@xe_module_load@load.html
[181]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-464/igt@xe_module_load@load.html
[182]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-464/igt@xe_module_load@load.html
[183]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-433/igt@xe_module_load@load.html
[184]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-463/igt@xe_module_load@load.html
[185]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-434/igt@xe_module_load@load.html
[186]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-434/igt@xe_module_load@load.html
[187]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-434/igt@xe_module_load@load.html
[188]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-464/igt@xe_module_load@load.html
[189]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-433/igt@xe_module_load@load.html
[190]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-436/igt@xe_module_load@load.html
[191]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-436/igt@xe_module_load@load.html
[192]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-436/igt@xe_module_load@load.html
[193]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-434/igt@xe_module_load@load.html
[194]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-466/igt@xe_module_load@load.html
[195]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-dg2-432/igt@xe_module_load@load.html
[196]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-463/igt@xe_module_load@load.html
[197]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-436/igt@xe_module_load@load.html
[198]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-436/igt@xe_module_load@load.html
[199]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-433/igt@xe_module_load@load.html
[200]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-433/igt@xe_module_load@load.html
[201]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-464/igt@xe_module_load@load.html
[202]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-434/igt@xe_module_load@load.html
[203]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-434/igt@xe_module_load@load.html
[204]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-434/igt@xe_module_load@load.html
[205]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-463/igt@xe_module_load@load.html
[206]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-463/igt@xe_module_load@load.html
[207]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-432/igt@xe_module_load@load.html
[208]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-432/igt@xe_module_load@load.html
[209]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-432/igt@xe_module_load@load.html
[210]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-432/igt@xe_module_load@load.html
[211]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-435/igt@xe_module_load@load.html
[212]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-435/igt@xe_module_load@load.html
[213]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-466/igt@xe_module_load@load.html
[214]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-466/igt@xe_module_load@load.html
[215]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-466/igt@xe_module_load@load.html
[216]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-464/igt@xe_module_load@load.html
[217]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-464/igt@xe_module_load@load.html
[218]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-435/igt@xe_module_load@load.html
[219]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-dg2-435/igt@xe_module_load@load.html
#### Warnings ####
* igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-spr-indfb-draw-mmap-wc:
- shard-bmg: [SKIP][220] ([Intel XE#2312]) -> [SKIP][221] ([Intel XE#2311])
[220]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-6/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-spr-indfb-draw-mmap-wc.html
[221]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-7/igt@kms_frontbuffer_tracking@drrs-2p-scndscrn-spr-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-onoff:
- shard-bmg: [SKIP][222] ([Intel XE#5390]) -> [SKIP][223] ([Intel XE#2312])
[222]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-8/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-onoff.html
[223]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-6/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-spr-indfb-onoff.html
* igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-indfb-msflip-blt:
- shard-bmg: [SKIP][224] ([Intel XE#2311]) -> [SKIP][225] ([Intel XE#2312]) +2 other tests skip
[224]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-4/igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-indfb-msflip-blt.html
[225]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-6/igt@kms_frontbuffer_tracking@fbcdrrs-2p-primscrn-indfb-msflip-blt.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-spr-indfb-fullscreen:
- shard-bmg: [SKIP][226] ([Intel XE#2313]) -> [SKIP][227] ([Intel XE#2312]) +2 other tests skip
[226]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-7/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-spr-indfb-fullscreen.html
[227]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-6/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-spr-indfb-fullscreen.html
* igt@kms_frontbuffer_tracking@psr-2p-pri-indfb-multidraw:
- shard-bmg: [SKIP][228] ([Intel XE#2312]) -> [SKIP][229] ([Intel XE#2313])
[228]: https://intel-gfx-ci.01.org/tree/intel-xe/IGT_8520/shard-bmg-6/igt@kms_frontbuffer_tracking@psr-2p-pri-indfb-multidraw.html
[229]: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/shard-bmg-2/igt@kms_frontbuffer_tracking@psr-2p-pri-indfb-multidraw.html
{name}: This element is suppressed. This means it is ignored when computing
the status of the difference (SUCCESS, WARNING, or FAILURE).
[Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124
[Intel XE#1126]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1126
[Intel XE#1392]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1392
[Intel XE#1406]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1406
[Intel XE#1447]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1447
[Intel XE#1489]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1489
[Intel XE#1727]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1727
[Intel XE#2234]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2234
[Intel XE#2291]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2291
[Intel XE#2293]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2293
[Intel XE#2311]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2311
[Intel XE#2312]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2312
[Intel XE#2313]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2313
[Intel XE#2316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2316
[Intel XE#2320]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2320
[Intel XE#2321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2321
[Intel XE#2360]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2360
[Intel XE#2380]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2380
[Intel XE#2457]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2457
[Intel XE#2652]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2652
[Intel XE#2685]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2685
[Intel XE#2763]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2763
[Intel XE#2850]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2850
[Intel XE#2887]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2887
[Intel XE#3012]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3012
[Intel XE#3113]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3113
[Intel XE#3124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3124
[Intel XE#316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/316
[Intel XE#3307]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3307
[Intel XE#367]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/367
[Intel XE#378]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/378
[Intel XE#4345]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4345
[Intel XE#455]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/455
[Intel XE#4733]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4733
[Intel XE#4819]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4819
[Intel XE#4837]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4837
[Intel XE#4915]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4915
[Intel XE#4937]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4937
[Intel XE#4943]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4943
[Intel XE#5099]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5099
[Intel XE#5390]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5390
[Intel XE#5481]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5481
[Intel XE#5963]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5963
[Intel XE#6006]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6006
[Intel XE#651]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/651
[Intel XE#653]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/653
[Intel XE#787]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/787
[Intel XE#908]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/908
Build changes
-------------
* IGT: IGT_8520 -> IGTPW_13679
IGTPW_13679: 6a9d8eb7048d8ece8bfeba6132a6d59489c667c8 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
IGT_8520: 8520
xe-3668-97a9560f0f1dd0a4472e669ff2188d0a8293b375: 97a9560f0f1dd0a4472e669ff2188d0a8293b375
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/IGTPW_13679/index.html
Thread overview: 10+ messages
2025-09-02 16:30 [PATCH i-g-t v8 0/5] Madvise Tests in IGT nishit.sharma
2025-09-02 16:30 ` [PATCH i-g-t v8 1/5] DO-NOT-MERGE: include/drm-uapi: Add drm_xe_madvise structure nishit.sharma
2025-09-02 16:30 ` [PATCH i-g-t v8 2/5] lib/xe: Add xe_vm_madvise ioctl support nishit.sharma
2025-09-02 16:30 ` [PATCH i-g-t v8 3/5] lib/xe: Add Helper to get memory attributes nishit.sharma
2025-09-02 16:30 ` [PATCH i-g-t v8 4/5] tests/intel/xe_exec_system_allocator: Add madvise-swizzle test nishit.sharma
2025-09-03 5:50 ` Matthew Brost
2025-09-02 16:30 ` [PATCH i-g-t v8 5/5] tests/intel/xe_exec_system_allocator: Add atomic_batch test in IGT nishit.sharma
2025-09-03 3:58 ` ✓ Xe.CI.BAT: success for Madvise Tests in IGT (rev8) Patchwork
2025-09-03 4:00 ` ✗ i915.CI.BAT: failure " Patchwork
2025-09-03 11:07 ` ✗ Xe.CI.Full: " Patchwork