From: "Hellstrom, Thomas" <thomas.hellstrom@intel.com>
To: "igt-dev@lists.freedesktop.org" <igt-dev@lists.freedesktop.org>,
"Sharma, Nishit" <nishit.sharma@intel.com>
Subject: Re: [PATCH i-g-t v7 08/10] tests/intel/xe_multi_gpusvm.c: Add SVM multi-GPU simultaneous access test
Date: Mon, 17 Nov 2025 14:57:19 +0000 [thread overview]
Message-ID: <9b76a0840ba00cda06a80b15e3ed07b5b36d94d3.camel@intel.com> (raw)
In-Reply-To: <20251113163308.633818-9-nishit.sharma@intel.com>
On Thu, 2025-11-13 at 16:33 +0000, nishit.sharma@intel.com wrote:
> From: Nishit Sharma <nishit.sharma@intel.com>
>
> This test launches compute or copy workloads on both GPUs that access
> the same SVM buffer, using synchronization primitives
> (fences/semaphores) to coordinate access. It verifies data integrity
> and checks for the absence of race conditions in a multi-GPU SVM
> environment.
>
> Signed-off-by: Nishit Sharma <nishit.sharma@intel.com>
> ---
>  tests/intel/xe_multi_gpusvm.c | 133 ++++++++++++++++++++++++++++++++++
>  1 file changed, 133 insertions(+)
>
> diff --git a/tests/intel/xe_multi_gpusvm.c b/tests/intel/xe_multi_gpusvm.c
> index 6feb543ae..dc2a8f9c8 100644
> --- a/tests/intel/xe_multi_gpusvm.c
> +++ b/tests/intel/xe_multi_gpusvm.c
> @@ -54,6 +54,11 @@
> * Description:
>  * This test intentionally triggers page faults by accessing unmapped SVM
>  * regions from both GPUs
> + *
> + * SUBTEST: concurrent-access-multi-gpu
> + * Description:
> + * This test launches simultaneous workloads on both GPUs accessing the
> + * same SVM buffer, synchronizes with fences, and verifies data integrity
> */
>
> #define MAX_XE_REGIONS 8
> @@ -126,6 +131,11 @@ static void gpu_fault_test_wrapper(struct xe_svm_gpu_info *src,
> 				    struct drm_xe_engine_class_instance *eci,
> 				    void *extra_args);
>
> +static void gpu_simult_test_wrapper(struct xe_svm_gpu_info *src,
> + struct xe_svm_gpu_info *dst,
> +				    struct drm_xe_engine_class_instance *eci,
> + void *extra_args);
> +
> static void
> create_vm_and_queue(struct xe_svm_gpu_info *gpu, struct drm_xe_engine_class_instance *eci,
> uint32_t *vm, uint32_t *exec_queue)
> @@ -900,6 +910,108 @@ gpu_coherecy_test_wrapper(struct xe_svm_gpu_info *src,
> 	coherency_test_multigpu(src, dst, eci, args->op_mod, args->prefetch_req);
> }
>
> +static void
> +multigpu_access_test(struct xe_svm_gpu_info *gpu0,
> + struct xe_svm_gpu_info *gpu1,
> + struct drm_xe_engine_class_instance *eci,
> + bool no_prefetch)
> +{
> + uint64_t addr;
> + uint32_t vm[2];
> + uint32_t exec_queue[2];
> + uint32_t batch_bo[2];
> + struct test_exec_data *data;
> + uint64_t batch_addr[2];
> + struct drm_xe_sync sync[2] = {};
> + volatile uint64_t *sync_addr[2];
> + volatile uint32_t *shared_val;
> +
> + create_vm_and_queue(gpu0, eci, &vm[0], &exec_queue[0]);
> + create_vm_and_queue(gpu1, eci, &vm[1], &exec_queue[1]);
> +
> + data = aligned_alloc(SZ_2M, SZ_4K);
> + igt_assert(data);
> + data[0].vm_sync = 0;
> + addr = to_user_pointer(data);
> +
> + shared_val = (volatile uint32_t *)addr;
> + *shared_val = ATOMIC_OP_VAL - 1;
> +
> +	atomic_batch_init(gpu0->fd, vm[0], addr, &batch_bo[0], &batch_addr[0]);
> +	*shared_val = ATOMIC_OP_VAL - 2;
> +	atomic_batch_init(gpu1->fd, vm[1], addr, &batch_bo[1], &batch_addr[1]);
> +
> +	/* Place destination in an optionally remote location to test */
> +	xe_multigpu_madvise(gpu0->fd, vm[0], addr, SZ_4K, 0,
> +			    DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC,
> +			    gpu0->fd, 0, gpu0->vram_regions[0], exec_queue[0],
> +			    0, 0);
> +	xe_multigpu_madvise(gpu1->fd, vm[1], addr, SZ_4K, 0,
> +			    DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC,
> +			    gpu1->fd, 0, gpu1->vram_regions[0], exec_queue[1],
> +			    0, 0);
> +
> + setup_sync(&sync[0], &sync_addr[0], BIND_SYNC_VAL);
> + setup_sync(&sync[1], &sync_addr[1], BIND_SYNC_VAL);
> +
> +	/* For simultaneous access need to call xe_wait_ufence for both gpus after prefetch */
> +	if (!no_prefetch) {
Here we have double negation. Perhaps invert the meaning of the
variable and call it do_prefetch.
> +		xe_vm_prefetch_async(gpu0->fd, vm[0], 0, 0, addr,
> +				     SZ_4K, &sync[0], 1,
> +				     DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC);
> +
> +		xe_vm_prefetch_async(gpu1->fd, vm[1], 0, 0, addr,
> +				     SZ_4K, &sync[1], 1,
> +				     DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC);
> +
> +		if (*sync_addr[0] != BIND_SYNC_VAL)
> +			xe_wait_ufence(gpu0->fd, (uint64_t *)sync_addr[0], BIND_SYNC_VAL, exec_queue[0],
> +				       NSEC_PER_SEC * 10);
> +		free((void *)sync_addr[0]);
> +		if (*sync_addr[1] != BIND_SYNC_VAL)
> +			xe_wait_ufence(gpu1->fd, (uint64_t *)sync_addr[1], BIND_SYNC_VAL, exec_queue[1],
> +				       NSEC_PER_SEC * 10);
> +		free((void *)sync_addr[1]);
> + }
> +
> + if (no_prefetch) {
> + free((void *)sync_addr[0]);
> + free((void *)sync_addr[1]);
> + }
> +
> + for (int i = 0; i < 100; i++) {
> +		sync_addr[0] = (void *)((char *)batch_addr[0] + SZ_4K);
> +		sync[0].addr = to_user_pointer((uint64_t *)sync_addr[0]);
> +		sync[0].timeline_value = EXEC_SYNC_VAL;
> +
> +		sync_addr[1] = (void *)((char *)batch_addr[1] + SZ_4K);
> +		sync[1].addr = to_user_pointer((uint64_t *)sync_addr[1]);
> +		sync[1].timeline_value = EXEC_SYNC_VAL;
> + *sync_addr[0] = 0;
> + *sync_addr[1] = 0;
> +
> +		xe_exec_sync(gpu0->fd, exec_queue[0], batch_addr[0], &sync[0], 1);
> +		if (*sync_addr[0] != EXEC_SYNC_VAL)
> +			xe_wait_ufence(gpu0->fd, (uint64_t *)sync_addr[0], EXEC_SYNC_VAL, exec_queue[0],
> +				       NSEC_PER_SEC * 10);
> +		xe_exec_sync(gpu1->fd, exec_queue[1], batch_addr[1], &sync[1], 1);
> +		if (*sync_addr[1] != EXEC_SYNC_VAL)
> +			xe_wait_ufence(gpu1->fd, (uint64_t *)sync_addr[1], EXEC_SYNC_VAL, exec_queue[1],
> +				       NSEC_PER_SEC * 10);
Here you are synchronizing after each batch execution, so nothing
really runs in parallel. I suggest synchronizing only on the last
iteration, and not attaching any sync objects on the earlier
iterations.
Thanks,
Thomas
> + }
> +
> + igt_assert_eq(*(uint64_t *)addr, 254);
> +
> + munmap((void *)batch_addr[0], BATCH_SIZE(gpu0->fd));
> +	munmap((void *)batch_addr[1], BATCH_SIZE(gpu1->fd));
> + batch_fini(gpu0->fd, vm[0], batch_bo[0], batch_addr[0]);
> + batch_fini(gpu1->fd, vm[1], batch_bo[1], batch_addr[1]);
> + free(data);
> +
> + cleanup_vm_and_queue(gpu0, vm[0], exec_queue[0]);
> + cleanup_vm_and_queue(gpu1, vm[1], exec_queue[1]);
> +}
> +
> static void
> gpu_latency_test_wrapper(struct xe_svm_gpu_info *src,
> struct xe_svm_gpu_info *dst,
> @@ -926,6 +1038,19 @@ gpu_fault_test_wrapper(struct xe_svm_gpu_info *src,
> pagefault_test_multigpu(src, dst, eci, args->prefetch_req);
> }
>
> +static void
> +gpu_simult_test_wrapper(struct xe_svm_gpu_info *src,
> + struct xe_svm_gpu_info *dst,
> + struct drm_xe_engine_class_instance *eci,
> + void *extra_args)
> +{
> +	struct multigpu_ops_args *args = (struct multigpu_ops_args *)extra_args;
> + igt_assert(src);
> + igt_assert(dst);
> +
> + multigpu_access_test(src, dst, eci, args->prefetch_req);
> +}
> +
> igt_main
> {
> struct xe_svm_gpu_info gpus[MAX_XE_GPUS];
> @@ -1001,6 +1126,14 @@ igt_main
> +		for_each_gpu_pair(gpu_cnt, gpus, &eci, gpu_fault_test_wrapper, &fault_args);
> }
>
> +	igt_subtest("concurrent-access-multi-gpu") {
> +		struct multigpu_ops_args simul_args;
> +		simul_args.prefetch_req = 1;
> +		for_each_gpu_pair(gpu_cnt, gpus, &eci, gpu_simult_test_wrapper, &simul_args);
> +		simul_args.prefetch_req = 0;
> +		for_each_gpu_pair(gpu_cnt, gpus, &eci, gpu_simult_test_wrapper, &simul_args);
> +	}
> +
> igt_fixture {
> int cnt;
>