Igt-dev Archive on lore.kernel.org
* [PATCH i-g-t v3 0/4] tests/intel/xe_madvise: Add atomic madvise subtests
@ 2026-05-11 13:57 Varun Gupta
  2026-05-11 13:57 ` [PATCH i-g-t v3 1/4] tests/intel/xe_madvise: Generalize metadata and group purgeable subtests Varun Gupta
                   ` (4 more replies)
  0 siblings, 5 replies; 6+ messages in thread
From: Varun Gupta @ 2026-05-11 13:57 UTC (permalink / raw)
  To: igt-dev; +Cc: arvind.yadav, himal.prasad.ghimiray, nishit.sharma

Add three subtests to validate DRM_XE_MEM_RANGE_ATTR_ATOMIC madvise on
fault-mode SVM VMAs:

  atomic-device — GPU MI_ATOMIC_INC succeeds via fault handler
  atomic-global — CPU atomic increments + GPU MI_ATOMIC_INC with
                  fault-driven SMEM-to-VRAM migration
  atomic-cpu    — GPU MI_ATOMIC_INC rejected (-EACCES), engine reset

Patch 1 generalizes the test metadata and groups existing purgeable
subtests so the atomic subtests can coexist without being gated on
purgeable support.

v2:
  - Add UNMAP of CPU_ADDR_MIRROR binding before xe_vm_destroy in all
    three atomic tests (Nishit)
  - Add pagefault count print before/after exec in atomic-device and
    atomic-global (Nishit)
  - Add comment explaining single-engine rationale in atomic-cpu (Nishit)
v3:
  - Print pagefault count only when count changed (Nishit)

Varun Gupta (4):
  tests/intel/xe_madvise: Generalize metadata and group purgeable
    subtests
  tests/intel/xe_madvise: Add atomic-device subtest
  tests/intel/xe_madvise: Add atomic-global subtest
  tests/intel/xe_madvise: Add atomic-cpu subtest

 tests/intel/xe_madvise.c | 381 ++++++++++++++++++++++++++++++++++-----
 1 file changed, 338 insertions(+), 43 deletions(-)

-- 
2.43.0



* [PATCH i-g-t v3 1/4] tests/intel/xe_madvise: Generalize metadata and group purgeable subtests
  2026-05-11 13:57 [PATCH i-g-t v3 0/4] tests/intel/xe_madvise: Add atomic madvise subtests Varun Gupta
@ 2026-05-11 13:57 ` Varun Gupta
  2026-05-11 13:57 ` [PATCH i-g-t v3 2/4] tests/intel/xe_madvise: Add atomic-device subtest Varun Gupta
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: Varun Gupta @ 2026-05-11 13:57 UTC (permalink / raw)
  To: igt-dev; +Cc: arvind.yadav, himal.prasad.ghimiray, nishit.sharma

Generalize the test file description and functionality tags from
purgeable-only to cover upcoming atomic madvise subtests.

Wrap existing purgeable subtests in igt_subtest_group with a dedicated
fixture so the purgeable capability check does not gate future
non-purgeable subtests.

Reviewed-by: Nishit Sharma <nishit.sharma@intel.com>
Signed-off-by: Varun Gupta <varun.gupta@intel.com>
---
 tests/intel/xe_madvise.c | 94 +++++++++++++++++++++-------------------
 1 file changed, 49 insertions(+), 45 deletions(-)

diff --git a/tests/intel/xe_madvise.c b/tests/intel/xe_madvise.c
index e9bf55ff5..e79cafbff 100644
--- a/tests/intel/xe_madvise.c
+++ b/tests/intel/xe_madvise.c
@@ -4,11 +4,11 @@
  */
 
 /**
- * TEST: Validate purgeable BO madvise functionality
+ * TEST: Validate madvise functionality
  * Category: Core
  * Mega feature: General Core features
  * Sub-category: Memory management tests
- * Functionality: madvise, purgeable
+ * Functionality: madvise, purgeable, atomic
  */
 
 #include "igt.h"
@@ -776,51 +776,55 @@ int igt_main()
 	igt_fixture() {
 		fd = drm_open_driver(DRIVER_XE);
 		xe_device_get(fd);
-		igt_require_f(xe_has_purgeable_support(fd),
-			      "Kernel does not support purgeable buffer objects\n");
 	}
 
-	igt_subtest("dontneed-before-mmap")
-		xe_for_each_engine(fd, hwe) {
-			test_dontneed_before_mmap(fd);
-			break;
-		}
-
-	igt_subtest("purged-mmap-blocked")
-		xe_for_each_engine(fd, hwe) {
-			test_purged_mmap_blocked(fd);
-			break;
-		}
-
-	igt_subtest("dontneed-after-mmap")
-		xe_for_each_engine(fd, hwe) {
-			test_dontneed_after_mmap(fd);
-			break;
-		}
-
-	igt_subtest("dontneed-before-exec")
-		xe_for_each_engine(fd, hwe) {
-			test_dontneed_before_exec(fd, hwe);
-			break;
-		}
-
-	igt_subtest("dontneed-after-exec")
-		xe_for_each_engine(fd, hwe) {
-			test_dontneed_after_exec(fd, hwe);
-			break;
-		}
-
-	igt_subtest("per-vma-tracking")
-		xe_for_each_engine(fd, hwe) {
-			test_per_vma_tracking(fd);
-			break;
-		}
-
-	igt_subtest("per-vma-protection")
-		xe_for_each_engine(fd, hwe) {
-			test_per_vma_protection(fd, hwe);
-			break;
-		}
+	igt_subtest_group() {
+		igt_fixture()
+			igt_require_f(xe_has_purgeable_support(fd),
+				      "Kernel does not support purgeable buffer objects\n");
+
+		igt_subtest("dontneed-before-mmap")
+			xe_for_each_engine(fd, hwe) {
+				test_dontneed_before_mmap(fd);
+				break;
+			}
+
+		igt_subtest("purged-mmap-blocked")
+			xe_for_each_engine(fd, hwe) {
+				test_purged_mmap_blocked(fd);
+				break;
+			}
+
+		igt_subtest("dontneed-after-mmap")
+			xe_for_each_engine(fd, hwe) {
+				test_dontneed_after_mmap(fd);
+				break;
+			}
+
+		igt_subtest("dontneed-before-exec")
+			xe_for_each_engine(fd, hwe) {
+				test_dontneed_before_exec(fd, hwe);
+				break;
+			}
+
+		igt_subtest("dontneed-after-exec")
+			xe_for_each_engine(fd, hwe) {
+				test_dontneed_after_exec(fd, hwe);
+				break;
+			}
+
+		igt_subtest("per-vma-tracking")
+			xe_for_each_engine(fd, hwe) {
+				test_per_vma_tracking(fd);
+				break;
+			}
+
+		igt_subtest("per-vma-protection")
+			xe_for_each_engine(fd, hwe) {
+				test_per_vma_protection(fd, hwe);
+				break;
+			}
+	}
 
 	igt_fixture() {
 		xe_device_put(fd);
-- 
2.43.0



* [PATCH i-g-t v3 2/4] tests/intel/xe_madvise: Add atomic-device subtest
  2026-05-11 13:57 [PATCH i-g-t v3 0/4] tests/intel/xe_madvise: Add atomic madvise subtests Varun Gupta
  2026-05-11 13:57 ` [PATCH i-g-t v3 1/4] tests/intel/xe_madvise: Generalize metadata and group purgeable subtests Varun Gupta
@ 2026-05-11 13:57 ` Varun Gupta
  2026-05-11 13:57 ` [PATCH i-g-t v3 3/4] tests/intel/xe_madvise: Add atomic-global subtest Varun Gupta
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: Varun Gupta @ 2026-05-11 13:57 UTC (permalink / raw)
  To: igt-dev; +Cc: arvind.yadav, himal.prasad.ghimiray, nishit.sharma

Validate that madvise ATOMIC_DEVICE allows GPU MI_ATOMIC_INC on SVM
memory.  The test creates a fault-mode VM with a CPU_ADDR_MIRROR
binding over heap memory allocated via aligned_alloc().  After setting
ATOMIC_DEVICE via DRM_XE_MEM_RANGE_ATTR_ATOMIC madvise, the GPU
executes MI_ATOMIC_INC through the page-fault handler, which migrates
pages to VRAM for device atomics.
Also add the shared atomic test infrastructure: struct atomic_data,
the atomic_build_batch() helper, timeout constants, and the atomic
subtest group gated on VRAM and fault-mode support.
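
The batch targets the 'data' member through pointer-difference
arithmetic.  A standalone sketch (mirroring the struct layout from the
patch, not the IGT source itself) sanity-checks that the computed
offset matches offsetof():

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative copy of the test's data layout. */
struct atomic_data {
	uint32_t batch[32];
	uint64_t vm_sync;
	uint64_t exec_sync;
	uint32_t data;
};

/*
 * atomic_build_batch() computes the GPU target address as
 * gpu_addr + (byte offset of 'data' within the struct); this helper
 * reproduces that pointer arithmetic.
 */
uint64_t atomic_data_offset(void)
{
	struct atomic_data d;

	return (uint64_t)((char *)&d.data - (char *)&d);
}
```

With this layout the offset is 144 bytes: 128 bytes of batch followed
by two naturally aligned uint64_t fences.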

v2: Add UNMAP of CPU_ADDR_MIRROR binding before xe_vm_destroy.
    Add pagefault count print before/after exec (Nishit).
v3: Print pagefault count only when count changed (Nishit).

Reviewed-by: Nishit Sharma <nishit.sharma@intel.com>
Signed-off-by: Varun Gupta <varun.gupta@intel.com>
---
 tests/intel/xe_madvise.c | 123 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 122 insertions(+), 1 deletion(-)

diff --git a/tests/intel/xe_madvise.c b/tests/intel/xe_madvise.c
index e79cafbff..8aecd0e5b 100644
--- a/tests/intel/xe_madvise.c
+++ b/tests/intel/xe_madvise.c
@@ -14,9 +14,12 @@
 #include "igt.h"
 #include "xe_drm.h"
 
+#include "intel_gpu_commands.h"
+#include "lib/igt_syncobj.h"
+#include "lib/intel_reg.h"
+#include "xe/xe_gt.h"
 #include "xe/xe_ioctl.h"
 #include "xe/xe_query.h"
-#include "lib/igt_syncobj.h"
 
 /* Purgeable test constants */
 #define PURGEABLE_ADDR		0x1a0000
@@ -27,6 +30,11 @@
 #define PURGEABLE_TEST_PATTERN	0xc0ffee
 #define PURGEABLE_DEAD_PATTERN	0xdead
 
+/* Atomic test constants */
+#define USER_FENCE_VALUE	0xdeadbeefdeadbeefull
+#define FIVE_SEC		(5LL * NSEC_PER_SEC)
+#define QUARTER_SEC		(NSEC_PER_SEC / 4)
+
 static bool xe_has_purgeable_support(int fd)
 {
 	struct drm_xe_query_config *config = xe_config(fd);
@@ -768,6 +776,108 @@ out:
 		igt_skip("Unable to induce purge on this platform/config");
 }
 
+/*
+ * Atomic madvise subtests — validate DRM_XE_MEM_RANGE_ATTR_ATOMIC
+ * modes (DEVICE, GLOBAL, CPU) on fault-mode SVM VMAs.
+ */
+
+struct atomic_data {
+	uint32_t batch[32];
+	uint64_t vm_sync;
+	uint64_t exec_sync;
+	uint32_t data;
+};
+
+static void atomic_build_batch(struct atomic_data *d, uint64_t gpu_addr)
+{
+	uint64_t data_offset = (char *)&d->data - (char *)d;
+	uint64_t sdi_addr = gpu_addr + data_offset;
+	int b = 0;
+
+	d->batch[b++] = MI_ATOMIC | MI_ATOMIC_INC;
+	d->batch[b++] = sdi_addr;
+	d->batch[b++] = sdi_addr >> 32;
+	d->batch[b++] = MI_BATCH_BUFFER_END;
+	igt_assert(b <= ARRAY_SIZE(d->batch));
+}
+
+/**
+ * SUBTEST: atomic-device
+ * Description: madvise atomic device supports only GPU atomic operations;
+ *		test executes GPU MI_ATOMIC_INC on SVM memory via fault handler
+ * Test category: functionality test
+ */
+static void test_atomic_device(int fd, struct drm_xe_engine_class_instance *eci)
+{
+	struct drm_xe_sync sync[1] = {
+		{ .type = DRM_XE_SYNC_TYPE_USER_FENCE,
+		  .flags = DRM_XE_SYNC_FLAG_SIGNAL,
+		  .timeline_value = USER_FENCE_VALUE },
+	};
+	struct drm_xe_exec exec = {
+		.num_batch_buffer = 1,
+		.num_syncs = 1,
+		.syncs = to_user_pointer(sync),
+	};
+	struct atomic_data *data;
+	uint32_t vm, exec_queue;
+	uint64_t addr;
+	size_t bo_size;
+	int va_bits;
+	int pf_count_before, pf_count_after;
+
+	va_bits = xe_va_bits(fd);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_LR_MODE |
+			  DRM_XE_VM_CREATE_FLAG_FAULT_MODE, 0);
+
+	bo_size = xe_bb_size(fd, sizeof(*data));
+	data = aligned_alloc(bo_size, bo_size);
+	igt_assert(data);
+	memset(data, 0, bo_size);
+
+	addr = to_user_pointer(data);
+
+	/* Bind entire VA space as CPU_ADDR_MIRROR */
+	sync[0].addr = to_user_pointer(&data->vm_sync);
+	__xe_vm_bind_assert(fd, vm, 0, 0, 0, 0, 0x1ull << va_bits,
+			    DRM_XE_VM_BIND_OP_MAP,
+			    DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR,
+			    sync, 1, 0, 0);
+	xe_wait_ufence(fd, &data->vm_sync, USER_FENCE_VALUE, 0, FIVE_SEC);
+	data->vm_sync = 0;
+
+	xe_vm_madvise(fd, vm, addr, bo_size, 0,
+		      DRM_XE_MEM_RANGE_ATTR_ATOMIC, DRM_XE_ATOMIC_DEVICE, 0, 0);
+
+	atomic_build_batch(data, addr);
+
+	exec_queue = xe_exec_queue_create(fd, vm, eci, 0);
+	exec.exec_queue_id = exec_queue;
+	exec.address = addr + ((char *)&data->batch - (char *)data);
+
+	pf_count_before = xe_gt_stats_get_count(fd, eci->gt_id,
+						"svm_pagefault_count");
+
+	sync[0].addr = to_user_pointer(&data->exec_sync);
+	xe_exec(fd, &exec);
+	xe_wait_ufence(fd, &data->exec_sync, USER_FENCE_VALUE,
+		       exec_queue, FIVE_SEC);
+
+	pf_count_after = xe_gt_stats_get_count(fd, eci->gt_id,
+					       "svm_pagefault_count");
+	if (pf_count_before != pf_count_after)
+		igt_info("Pagefault count: before=%d, after=%d\n",
+			 pf_count_before, pf_count_after);
+
+	igt_assert_eq(data->data, 1);
+
+	xe_exec_queue_destroy(fd, exec_queue);
+	__xe_vm_bind_assert(fd, vm, 0, 0, 0, 0, 0x1ull << va_bits,
+			    DRM_XE_VM_BIND_OP_UNMAP, 0, NULL, 0, 0, 0);
+	free(data);
+	xe_vm_destroy(fd, vm);
+}
+
 int igt_main()
 {
 	struct drm_xe_engine_class_instance *hwe;
@@ -826,6 +936,17 @@ int igt_main()
 			}
 	}
 
+	igt_subtest_group() {
+		igt_fixture() {
+			igt_require(xe_has_vram(fd));
+			igt_require(!xe_supports_faults(fd));
+		}
+
+		igt_subtest("atomic-device")
+			xe_for_each_engine(fd, hwe)
+				test_atomic_device(fd, hwe);
+	}
+
 	igt_fixture() {
 		xe_device_put(fd);
 		drm_close_driver(fd);
-- 
2.43.0



* [PATCH i-g-t v3 3/4] tests/intel/xe_madvise: Add atomic-global subtest
  2026-05-11 13:57 [PATCH i-g-t v3 0/4] tests/intel/xe_madvise: Add atomic madvise subtests Varun Gupta
  2026-05-11 13:57 ` [PATCH i-g-t v3 1/4] tests/intel/xe_madvise: Generalize metadata and group purgeable subtests Varun Gupta
  2026-05-11 13:57 ` [PATCH i-g-t v3 2/4] tests/intel/xe_madvise: Add atomic-device subtest Varun Gupta
@ 2026-05-11 13:57 ` Varun Gupta
  2026-05-11 13:57 ` [PATCH i-g-t v3 4/4] tests/intel/xe_madvise: Add atomic-cpu subtest Varun Gupta
  2026-05-12  3:50 ` ✗ Fi.CI.BUILD: failure for tests/intel/xe_madvise: Add atomic madvise subtests Patchwork
  4 siblings, 0 replies; 6+ messages in thread
From: Varun Gupta @ 2026-05-11 13:57 UTC (permalink / raw)
  To: igt-dev; +Cc: arvind.yadav, himal.prasad.ghimiray, nishit.sharma

Validate that madvise ATOMIC_GLOBAL permits both CPU and GPU atomic
access on SVM memory.  The test sets ATOMIC_GLOBAL on heap-allocated
memory, performs 100 CPU atomic increments while the data resides in
SMEM, then executes a GPU MI_ATOMIC_INC, which triggers the page-fault
handler to migrate the data to VRAM.  The final counter value must
equal 101 (100 CPU increments plus one GPU increment).
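
The expected count can be modeled host-side with the same GCC
`__atomic` builtin the test uses for its CPU increments (a standalone
illustration only; the real GPU increment is the MI_ATOMIC_INC in the
batch, not another CPU operation):

```c
#include <stdint.h>

/*
 * Host-side model of the subtest's counting: n_cpu_ops CPU atomic
 * increments, plus one more increment standing in for the GPU
 * MI_ATOMIC_INC that lands after fault-driven migration.
 */
uint32_t atomic_global_expected(int n_cpu_ops)
{
	uint32_t counter = 0;
	int i;

	for (i = 0; i < n_cpu_ops; i++)
		__atomic_fetch_add(&counter, 1, __ATOMIC_SEQ_CST);

	/* GPU MI_ATOMIC_INC modeled as one final atomic increment. */
	__atomic_fetch_add(&counter, 1, __ATOMIC_SEQ_CST);

	return counter;
}
```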

v2: Add UNMAP of CPU_ADDR_MIRROR binding before xe_vm_destroy.
    Add pagefault count print before/after exec (Nishit).
v3: Print pagefault count only when count changed (Nishit).

Reviewed-by: Nishit Sharma <nishit.sharma@intel.com>
Signed-off-by: Varun Gupta <varun.gupta@intel.com>
---
 tests/intel/xe_madvise.c | 87 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 87 insertions(+)

diff --git a/tests/intel/xe_madvise.c b/tests/intel/xe_madvise.c
index 8aecd0e5b..c3b6935bb 100644
--- a/tests/intel/xe_madvise.c
+++ b/tests/intel/xe_madvise.c
@@ -878,6 +878,89 @@ static void test_atomic_device(int fd, struct drm_xe_engine_class_instance *eci)
 	xe_vm_destroy(fd, vm);
 }
 
+/**
+ * SUBTEST: atomic-global
+ * Description: madvise atomic global supports both CPU and GPU atomic operations;
+ *		test does CPU atomic increments on SMEM, then GPU MI_ATOMIC_INC,
+ *		which triggers fault-driven migration to VRAM
+ * Test category: functionality test
+ */
+static void test_atomic_global(int fd, struct drm_xe_engine_class_instance *eci)
+{
+	struct drm_xe_sync sync[1] = {
+		{ .type = DRM_XE_SYNC_TYPE_USER_FENCE,
+		  .flags = DRM_XE_SYNC_FLAG_SIGNAL,
+		  .timeline_value = USER_FENCE_VALUE },
+	};
+	struct drm_xe_exec exec = {
+		.num_batch_buffer = 1,
+		.num_syncs = 1,
+		.syncs = to_user_pointer(sync),
+	};
+	struct atomic_data *data;
+	uint32_t vm, exec_queue;
+	uint64_t addr;
+	size_t bo_size;
+	int va_bits, i;
+	int n_cpu_ops = 100;
+	int pf_count_before, pf_count_after;
+
+	va_bits = xe_va_bits(fd);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_LR_MODE |
+			  DRM_XE_VM_CREATE_FLAG_FAULT_MODE, 0);
+
+	bo_size = xe_bb_size(fd, sizeof(*data));
+	data = aligned_alloc(bo_size, bo_size);
+	igt_assert(data);
+	memset(data, 0, bo_size);
+
+	addr = to_user_pointer(data);
+
+	sync[0].addr = to_user_pointer(&data->vm_sync);
+	__xe_vm_bind_assert(fd, vm, 0, 0, 0, 0, 0x1ull << va_bits,
+			    DRM_XE_VM_BIND_OP_MAP,
+			    DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR,
+			    sync, 1, 0, 0);
+	xe_wait_ufence(fd, &data->vm_sync, USER_FENCE_VALUE, 0, FIVE_SEC);
+	data->vm_sync = 0;
+
+	xe_vm_madvise(fd, vm, addr, bo_size, 0,
+		      DRM_XE_MEM_RANGE_ATTR_ATOMIC, DRM_XE_ATOMIC_GLOBAL, 0, 0);
+
+	for (i = 0; i < n_cpu_ops; i++)
+		__atomic_fetch_add(&data->data, 1, __ATOMIC_SEQ_CST);
+
+	igt_assert_eq(data->data, n_cpu_ops);
+
+	atomic_build_batch(data, addr);
+
+	exec_queue = xe_exec_queue_create(fd, vm, eci, 0);
+	exec.exec_queue_id = exec_queue;
+	exec.address = addr + ((char *)&data->batch - (char *)data);
+
+	pf_count_before = xe_gt_stats_get_count(fd, eci->gt_id,
+						"svm_pagefault_count");
+
+	sync[0].addr = to_user_pointer(&data->exec_sync);
+	xe_exec(fd, &exec);
+	xe_wait_ufence(fd, &data->exec_sync, USER_FENCE_VALUE,
+		       exec_queue, FIVE_SEC);
+
+	pf_count_after = xe_gt_stats_get_count(fd, eci->gt_id,
+					       "svm_pagefault_count");
+	if (pf_count_before != pf_count_after)
+		igt_info("Pagefault count: before=%d, after=%d\n",
+			 pf_count_before, pf_count_after);
+
+	igt_assert_eq(data->data, n_cpu_ops + 1);
+
+	xe_exec_queue_destroy(fd, exec_queue);
+	__xe_vm_bind_assert(fd, vm, 0, 0, 0, 0, 0x1ull << va_bits,
+			    DRM_XE_VM_BIND_OP_UNMAP, 0, NULL, 0, 0, 0);
+	free(data);
+	xe_vm_destroy(fd, vm);
+}
+
 int igt_main()
 {
 	struct drm_xe_engine_class_instance *hwe;
@@ -945,6 +1028,10 @@ int igt_main()
 		igt_subtest("atomic-device")
 			xe_for_each_engine(fd, hwe)
 				test_atomic_device(fd, hwe);
+
+		igt_subtest("atomic-global")
+			xe_for_each_engine(fd, hwe)
+				test_atomic_global(fd, hwe);
 	}
 
 	igt_fixture() {
-- 
2.43.0



* [PATCH i-g-t v3 4/4] tests/intel/xe_madvise: Add atomic-cpu subtest
  2026-05-11 13:57 [PATCH i-g-t v3 0/4] tests/intel/xe_madvise: Add atomic madvise subtests Varun Gupta
                   ` (2 preceding siblings ...)
  2026-05-11 13:57 ` [PATCH i-g-t v3 3/4] tests/intel/xe_madvise: Add atomic-global subtest Varun Gupta
@ 2026-05-11 13:57 ` Varun Gupta
  2026-05-12  3:50 ` ✗ Fi.CI.BUILD: failure for tests/intel/xe_madvise: Add atomic madvise subtests Patchwork
  4 siblings, 0 replies; 6+ messages in thread
From: Varun Gupta @ 2026-05-11 13:57 UTC (permalink / raw)
  To: igt-dev; +Cc: arvind.yadav, himal.prasad.ghimiray, nishit.sharma

Validate that madvise ATOMIC_CPU blocks GPU atomic operations on SVM
memory.  The test sets ATOMIC_CPU on heap-allocated memory, then
submits a GPU MI_ATOMIC_INC, which must fail because the page-fault
handler returns -EACCES for CPU-only atomic mode, causing an engine
reset.  The fence wait times out (QUARTER_SEC) and the counter must
remain 0.  Only the first engine is tested to limit CAT errors from
repeated resets.

v2: Add UNMAP of CPU_ADDR_MIRROR binding before xe_vm_destroy.
    Add comment explaining break after single-engine run (Nishit).

Reviewed-by: Nishit Sharma <nishit.sharma@intel.com>
Signed-off-by: Varun Gupta <varun.gupta@intel.com>
---
 tests/intel/xe_madvise.c | 83 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 83 insertions(+)

diff --git a/tests/intel/xe_madvise.c b/tests/intel/xe_madvise.c
index c3b6935bb..cdb115d7e 100644
--- a/tests/intel/xe_madvise.c
+++ b/tests/intel/xe_madvise.c
@@ -961,6 +961,79 @@ static void test_atomic_global(int fd, struct drm_xe_engine_class_instance *eci)
 	xe_vm_destroy(fd, vm);
 }
 
+/**
+ * SUBTEST: atomic-cpu
+ * Description: madvise atomic cpu supports only CPU atomic operations;
+ *		test verifies GPU MI_ATOMIC_INC is rejected by fault handler
+ * Test category: functionality test
+ */
+static void test_atomic_cpu(int fd, struct drm_xe_engine_class_instance *eci)
+{
+	struct drm_xe_sync sync[1] = {
+		{ .type = DRM_XE_SYNC_TYPE_USER_FENCE,
+		  .flags = DRM_XE_SYNC_FLAG_SIGNAL,
+		  .timeline_value = USER_FENCE_VALUE },
+	};
+	struct drm_xe_exec exec = {
+		.num_batch_buffer = 1,
+		.num_syncs = 1,
+		.syncs = to_user_pointer(sync),
+	};
+	struct atomic_data *data;
+	uint32_t vm, exec_queue;
+	uint64_t addr;
+	size_t bo_size;
+	int va_bits, err;
+	int64_t timeout = QUARTER_SEC;
+
+	va_bits = xe_va_bits(fd);
+	vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_LR_MODE |
+			  DRM_XE_VM_CREATE_FLAG_FAULT_MODE, 0);
+
+	bo_size = xe_bb_size(fd, sizeof(*data));
+	data = aligned_alloc(bo_size, bo_size);
+	igt_assert(data);
+	memset(data, 0, bo_size);
+
+	addr = to_user_pointer(data);
+
+	sync[0].addr = to_user_pointer(&data->vm_sync);
+	__xe_vm_bind_assert(fd, vm, 0, 0, 0, 0, 0x1ull << va_bits,
+			    DRM_XE_VM_BIND_OP_MAP,
+			    DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR,
+			    sync, 1, 0, 0);
+	xe_wait_ufence(fd, &data->vm_sync, USER_FENCE_VALUE, 0, FIVE_SEC);
+	data->vm_sync = 0;
+
+	xe_vm_madvise(fd, vm, addr, bo_size, 0,
+		      DRM_XE_MEM_RANGE_ATTR_ATOMIC, DRM_XE_ATOMIC_CPU, 0, 0);
+
+	atomic_build_batch(data, addr);
+
+	exec_queue = xe_exec_queue_create(fd, vm, eci, 0);
+	exec.exec_queue_id = exec_queue;
+	exec.address = addr + ((char *)&data->batch - (char *)data);
+
+	/*
+	 * GPU MI_ATOMIC_INC must fail: page-fault handler returns -EACCES
+	 * for ATOMIC_CPU mode, causing engine reset.  Wait with a short
+	 * timeout — the fence should not signal.
+	 */
+	sync[0].addr = to_user_pointer(&data->exec_sync);
+	xe_exec(fd, &exec);
+	err = __xe_wait_ufence(fd, &data->exec_sync, USER_FENCE_VALUE,
+			       exec_queue, &timeout);
+
+	igt_assert_neq(err, 0);
+	igt_assert_eq(data->data, 0);
+
+	xe_exec_queue_destroy(fd, exec_queue);
+	__xe_vm_bind_assert(fd, vm, 0, 0, 0, 0, 0x1ull << va_bits,
+			    DRM_XE_VM_BIND_OP_UNMAP, 0, NULL, 0, 0, 0);
+	free(data);
+	xe_vm_destroy(fd, vm);
+}
+
 int igt_main()
 {
 	struct drm_xe_engine_class_instance *hwe;
@@ -1032,6 +1105,16 @@ int igt_main()
 		igt_subtest("atomic-global")
 			xe_for_each_engine(fd, hwe)
 				test_atomic_global(fd, hwe);
+
+		/* Run on a single engine: each rejection triggers an engine
+		 * reset and CAT error; running on all engines would generate
+		 * redundant resets without adding coverage.
+		 */
+		igt_subtest("atomic-cpu")
+			xe_for_each_engine(fd, hwe) {
+				test_atomic_cpu(fd, hwe);
+				break;
+			}
 	}
 
 	igt_fixture() {
-- 
2.43.0



* ✗ Fi.CI.BUILD: failure for tests/intel/xe_madvise: Add atomic madvise subtests
  2026-05-11 13:57 [PATCH i-g-t v3 0/4] tests/intel/xe_madvise: Add atomic madvise subtests Varun Gupta
                   ` (3 preceding siblings ...)
  2026-05-11 13:57 ` [PATCH i-g-t v3 4/4] tests/intel/xe_madvise: Add atomic-cpu subtest Varun Gupta
@ 2026-05-12  3:50 ` Patchwork
  4 siblings, 0 replies; 6+ messages in thread
From: Patchwork @ 2026-05-12  3:50 UTC (permalink / raw)
  To: Varun Gupta; +Cc: igt-dev

== Series Details ==

Series: tests/intel/xe_madvise: Add atomic madvise subtests
URL   : https://patchwork.freedesktop.org/series/166325/
State : failure

== Summary ==

Applying: tests/intel/xe_madvise: Generalize metadata and group purgeable subtests
Using index info to reconstruct a base tree...
M	tests/intel/xe_madvise.c
Falling back to patching base and 3-way merge...
Auto-merging tests/intel/xe_madvise.c
CONFLICT (content): Merge conflict in tests/intel/xe_madvise.c
Patch failed at 0001 tests/intel/xe_madvise: Generalize metadata and group purgeable subtests
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".



