From: "Sharma, Nishit" <nishit.sharma@intel.com>
To: Varun Gupta <varun.gupta@intel.com>, <igt-dev@lists.freedesktop.org>
Cc: <arvind.yadav@intel.com>, <himal.prasad.ghimiray@intel.com>
Subject: Re: [PATCH i-g-t 4/4] tests/intel/xe_madvise: Add atomic-cpu subtest
Date: Wed, 29 Apr 2026 18:55:29 +0530 [thread overview]
Message-ID: <a6ba9e79-cdd6-47ce-848c-4866034e65ef@intel.com> (raw)
In-Reply-To: <20260423063904.3944005-5-varun.gupta@intel.com>
On 4/23/2026 12:08 PM, Varun Gupta wrote:
> Validate that madvise ATOMIC_CPU blocks GPU atomic operations on SVM
> memory. The test sets ATOMIC_CPU on heap-allocated memory, then
> submits GPU MI_ATOMIC_INC which must fail because the page-fault
> handler returns -EACCES for CPU-only atomic mode, causing an engine
> reset. The fence wait times out (QUARTER_SEC) and the counter must
> remain 0. Only the first engine is tested to limit CAT errors from
> repeated resets.
>
> Signed-off-by: Varun Gupta <varun.gupta@intel.com>
> ---
> tests/intel/xe_madvise.c | 77 ++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 77 insertions(+)
>
> diff --git a/tests/intel/xe_madvise.c b/tests/intel/xe_madvise.c
> index c251186d3..3411a6f3d 100644
> --- a/tests/intel/xe_madvise.c
> +++ b/tests/intel/xe_madvise.c
> @@ -936,6 +936,77 @@ static void test_atomic_global(int fd, struct drm_xe_engine_class_instance *eci)
> xe_vm_destroy(fd, vm);
> }
>
> +/**
> + * SUBTEST: atomic-cpu
> + * Description: madvise atomic cpu supports only CPU atomic operations,
> + * test verifies GPU MI_ATOMIC_INC is rejected by fault handler
> + * Test category: functionality test
> + */
> +static void test_atomic_cpu(int fd, struct drm_xe_engine_class_instance *eci)
> +{
> + struct drm_xe_sync sync[1] = {
> + { .type = DRM_XE_SYNC_TYPE_USER_FENCE,
> + .flags = DRM_XE_SYNC_FLAG_SIGNAL,
> + .timeline_value = USER_FENCE_VALUE },
> + };
> + struct drm_xe_exec exec = {
> + .num_batch_buffer = 1,
> + .num_syncs = 1,
> + .syncs = to_user_pointer(sync),
> + };
> + struct atomic_data *data;
> + uint32_t vm, exec_queue;
> + uint64_t addr;
> + size_t bo_size;
> + int va_bits, err;
> + int64_t timeout = QUARTER_SEC;
> +
> + va_bits = xe_va_bits(fd);
> + vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_LR_MODE |
> + DRM_XE_VM_CREATE_FLAG_FAULT_MODE, 0);
> +
> + bo_size = xe_bb_size(fd, sizeof(*data));
> + data = aligned_alloc(bo_size, bo_size);
> + igt_assert(data);
> + memset(data, 0, bo_size);
> +
> + addr = to_user_pointer(data);
> +
> + sync[0].addr = to_user_pointer(&data->vm_sync);
> + __xe_vm_bind_assert(fd, vm, 0, 0, 0, 0, 0x1ull << va_bits,
> + DRM_XE_VM_BIND_OP_MAP,
> + DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR,
> + sync, 1, 0, 0);
Same comment as in Patch-2/4
> + xe_wait_ufence(fd, &data->vm_sync, USER_FENCE_VALUE, 0, FIVE_SEC);
> + data->vm_sync = 0;
> +
> + xe_vm_madvise(fd, vm, addr, bo_size, 0,
> + DRM_XE_MEM_RANGE_ATTR_ATOMIC, DRM_XE_ATOMIC_CPU, 0, 0);
> +
> + atomic_build_batch(data, addr);
> +
> + exec_queue = xe_exec_queue_create(fd, vm, eci, 0);
> + exec.exec_queue_id = exec_queue;
> + exec.address = addr + ((char *)&data->batch - (char *)data);
> +
> + /*
> + * GPU MI_ATOMIC_INC must fail: page-fault handler returns -EACCES
> + * for ATOMIC_CPU mode, causing engine reset. Wait with a short
> + * timeout — the fence should not signal.
> + */
> + sync[0].addr = to_user_pointer(&data->exec_sync);
> + xe_exec(fd, &exec);
> + err = __xe_wait_ufence(fd, &data->exec_sync, USER_FENCE_VALUE,
> + exec_queue, &timeout);
> +
> + igt_assert_neq(err, 0);
> + igt_assert_eq(data->data, 0);
> +
> + xe_exec_queue_destroy(fd, exec_queue);
> + free(data);
> + xe_vm_destroy(fd, vm);
> +}
> +
> int igt_main()
> {
> struct drm_xe_engine_class_instance *hwe;
> @@ -1007,6 +1078,12 @@ int igt_main()
> igt_subtest("atomic-global")
> xe_for_each_engine(fd, hwe)
> test_atomic_global(fd, hwe);
> +
> + igt_subtest("atomic-cpu")
> + xe_for_each_engine(fd, hwe) {
> + test_atomic_cpu(fd, hwe);
Please add a code comment here mentioning why a single HW engine is enough to validate the atomic-cpu operation (the commit message says it is to limit CAT errors from repeated engine resets -- worth stating in the code as well).
> + break;
> + }
> }
>
> igt_fixture() {
Thread overview: 13+ messages
2026-04-23 6:38 [PATCH i-g-t 0/4] tests/intel/xe_madvise: Add atomic madvise subtests Varun Gupta
2026-04-23 6:38 ` [PATCH i-g-t 1/4] tests/intel/xe_madvise: Generalize metadata and group purgeable subtests Varun Gupta
2026-04-29 4:19 ` Sharma, Nishit
2026-04-23 6:38 ` [PATCH i-g-t 2/4] tests/intel/xe_madvise: Add atomic-device subtest Varun Gupta
2026-04-29 9:11 ` Sharma, Nishit
2026-04-23 6:38 ` [PATCH i-g-t 3/4] tests/intel/xe_madvise: Add atomic-global subtest Varun Gupta
2026-04-29 13:11 ` Sharma, Nishit
2026-04-23 6:38 ` [PATCH i-g-t 4/4] tests/intel/xe_madvise: Add atomic-cpu subtest Varun Gupta
2026-04-29 13:25 ` Sharma, Nishit [this message]
2026-04-23 7:36 ` ✓ Xe.CI.BAT: success for tests/intel/xe_madvise: Add atomic madvise subtests Patchwork
2026-04-23 7:58 ` ✓ i915.CI.BAT: " Patchwork
2026-04-23 11:45 ` ✗ i915.CI.Full: failure " Patchwork
2026-04-23 18:07 ` ✓ Xe.CI.FULL: success " Patchwork