From: nishit.sharma@intel.com
To: igt-dev@lists.freedesktop.org, thomas.hellstrom@intel.com, nishit.sharma@intel.com
Subject: [PATCH v7 04/10] tests/intel/xe_multi_gpusvm.c: Add SVM multi-GPU atomic operations
Date: Thu, 13 Nov 2025 16:28:29 +0000
Message-ID: <20251113162834.633575-5-nishit.sharma@intel.com>
X-Mailer: git-send-email 2.48.1
In-Reply-To: <20251113162834.633575-1-nishit.sharma@intel.com>
References: <20251113162834.633575-1-nishit.sharma@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-Id: Development mailing list for IGT GPU Tools

From: Nishit Sharma

This test performs an atomic increment operation on a shared SVM buffer
from both GPUs and the CPU in a multi-GPU environment. It uses madvise
and prefetch to control buffer placement and verifies the correctness
and ordering of atomic updates across agents.
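For reference, a minimal CPU-side sketch (illustrative only, not part of this
patch) of what the CPU agent's increment on the same SVM allocation could look
like; the helper names and the plain aligned_alloc() sizes below are
hypothetical, and __atomic_add_fetch() is the CPU counterpart of the
MI_ATOMIC_INC batch the GPUs execute:

#include <stdint.h>
#include <stdlib.h>

#define ATOMIC_OP_VAL	56	/* mirrors the value used by the test */

/* Hypothetical CPU-side agent: bump the shared SVM counter that the GPUs
 * increment with MI_ATOMIC | MI_ATOMIC_INC, returning the new value. */
static uint32_t cpu_atomic_inc(uint32_t *shared_val)
{
	return __atomic_add_fetch(shared_val, 1, __ATOMIC_SEQ_CST);
}

/* Allocate and seed the shared counter once; any agent (CPU or GPU) can
 * then increment it through the same virtual address. */
static uint32_t *cpu_setup_shared_counter(void)
{
	uint32_t *shared_val = aligned_alloc(4096, 4096);	/* page-aligned */

	if (shared_val)
		*shared_val = ATOMIC_OP_VAL - 1;
	return shared_val;
}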
Signed-off-by: Nishit Sharma
---
 tests/intel/xe_multi_gpusvm.c | 157 +++++++++++++++++++++++++++++++++-
 1 file changed, 156 insertions(+), 1 deletion(-)

diff --git a/tests/intel/xe_multi_gpusvm.c b/tests/intel/xe_multi_gpusvm.c
index 6614ea3d1..54e036724 100644
--- a/tests/intel/xe_multi_gpusvm.c
+++ b/tests/intel/xe_multi_gpusvm.c
@@ -31,6 +31,11 @@
  * region both remotely and locally and copies to it. Reads back to
  * system memory and checks the result.
  *
+ * SUBTEST: atomic-inc-gpu-op
+ * Description:
+ *	This test performs atomic operations across multiple GPUs by
+ *	executing an atomic operation on GPU1 and then on GPU2 using the
+ *	same address
  */
 
 #define MAX_XE_REGIONS	8
@@ -40,6 +45,7 @@
 #define BIND_SYNC_VAL	0x686868
 #define EXEC_SYNC_VAL	0x676767
 #define COPY_SIZE	SZ_64M
+#define ATOMIC_OP_VAL	56
 
 struct xe_svm_gpu_info {
 	bool supports_faults;
@@ -49,6 +55,16 @@ struct xe_svm_gpu_info {
 	int fd;
 };
 
+struct test_exec_data {
+	uint32_t batch[32];
+	uint64_t pad;
+	uint64_t vm_sync;
+	uint64_t exec_sync;
+	uint32_t data;
+	uint32_t expected_data;
+	uint64_t batch_addr;
+};
+
 struct multigpu_ops_args {
 	bool prefetch_req;
 	bool op_mod;
@@ -72,7 +88,10 @@ static void gpu_mem_access_wrapper(struct xe_svm_gpu_info *src,
 				   struct drm_xe_engine_class_instance *eci,
 				   void *extra_args);
 
-static void open_pagemaps(int fd, struct xe_svm_gpu_info *info);
+static void gpu_atomic_inc_wrapper(struct xe_svm_gpu_info *src,
+				   struct xe_svm_gpu_info *dst,
+				   struct drm_xe_engine_class_instance *eci,
+				   void *extra_args);
 
 static void create_vm_and_queue(struct xe_svm_gpu_info *gpu,
 				struct drm_xe_engine_class_instance *eci,
@@ -166,6 +185,35 @@ static void for_each_gpu_pair(int num_gpus, struct xe_svm_gpu_info *gpus,
 	}
 }
 
+static void open_pagemaps(int fd, struct xe_svm_gpu_info *info);
+
+static void
+atomic_batch_init(int fd, uint32_t vm, uint64_t src_addr,
+		  uint32_t *bo, uint64_t *addr)
+{
+	uint32_t batch_bo_size = BATCH_SIZE(fd);
+	uint32_t batch_bo;
+	uint64_t batch_addr;
+	void *batch;
+	uint32_t *cmd;
+	int i = 0;
+
+	batch_bo = xe_bo_create(fd, vm, batch_bo_size, vram_if_possible(fd, 0), 0);
+	batch = xe_bo_map(fd, batch_bo, batch_bo_size);
+	cmd = (uint32_t *)batch;
+
+	cmd[i++] = MI_ATOMIC | MI_ATOMIC_INC;
+	cmd[i++] = src_addr;
+	cmd[i++] = src_addr >> 32;
+	cmd[i++] = MI_BATCH_BUFFER_END;
+
+	batch_addr = to_user_pointer(batch);
+	/* Punch a gap in the SVM map where we map the batch_bo */
+	xe_vm_bind_lr_sync(fd, vm, batch_bo, 0, batch_addr, batch_bo_size, 0);
+	*bo = batch_bo;
+	*addr = batch_addr;
+}
+
 static void
 batch_init(int fd, uint32_t vm, uint64_t src_addr, uint64_t dst_addr,
 	   uint64_t copy_size, uint32_t *bo, uint64_t *addr)
@@ -325,6 +373,105 @@ gpu_mem_access_wrapper(struct xe_svm_gpu_info *src,
 	copy_src_dst(src, dst, eci, args->prefetch_req);
 }
 
+static void
+atomic_inc_op(struct xe_svm_gpu_info *gpu0,
+	      struct xe_svm_gpu_info *gpu1,
+	      struct drm_xe_engine_class_instance *eci,
+	      bool prefetch_req)
+{
+	uint64_t addr;
+	uint32_t vm[2];
+	uint32_t exec_queue[2];
+	uint32_t batch_bo;
+	struct test_exec_data *data;
+	uint64_t batch_addr;
+	struct drm_xe_sync sync = {};
+	volatile uint64_t *sync_addr;
+	volatile uint32_t *shared_val;
+
+	create_vm_and_queue(gpu0, eci, &vm[0], &exec_queue[0]);
+	create_vm_and_queue(gpu1, eci, &vm[1], &exec_queue[1]);
+
+	data = aligned_alloc(SZ_2M, SZ_4K);
+	igt_assert(data);
+	data[0].vm_sync = 0;
+	addr = to_user_pointer(data);
+
+	shared_val = (volatile uint32_t *)addr;
+	*shared_val = ATOMIC_OP_VAL - 1;
+
+	atomic_batch_init(gpu0->fd, vm[0], addr, &batch_bo,
+			  &batch_addr);
+	/* Place destination in an optionally remote location to test */
+	xe_multigpu_madvise(gpu0->fd, vm[0], addr, SZ_4K, 0,
+			    DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC,
+			    gpu0->fd, 0, gpu0->vram_regions[0], exec_queue[0],
+			    0, 0);
+
+	setup_sync(&sync, &sync_addr, BIND_SYNC_VAL);
+	xe_multigpu_prefetch(gpu0->fd, vm[0], addr, SZ_4K, &sync,
+			     sync_addr, exec_queue[0], prefetch_req);
+
+	sync_addr = (void *)((char *)batch_addr + SZ_4K);
+	sync.addr = to_user_pointer((uint64_t *)sync_addr);
+	sync.timeline_value = EXEC_SYNC_VAL;
+	*sync_addr = 0;
+
+	/* Execute ATOMIC_INC on GPU0 */
+	xe_exec_sync(gpu0->fd, exec_queue[0], batch_addr, &sync, 1);
+	if (*sync_addr != EXEC_SYNC_VAL)
+		xe_wait_ufence(gpu0->fd, (uint64_t *)sync_addr, EXEC_SYNC_VAL, exec_queue[0],
+			       NSEC_PER_SEC * 10);
+
+	igt_assert_eq(*shared_val, ATOMIC_OP_VAL);
+
+	atomic_batch_init(gpu1->fd, vm[1], addr, &batch_bo, &batch_addr);
+
+	/* Place destination in an optionally remote location to test */
+	xe_multigpu_madvise(gpu1->fd, vm[1], addr, SZ_4K, 0,
+			    DRM_XE_MEM_RANGE_ATTR_PREFERRED_LOC,
+			    gpu1->fd, 0, gpu1->vram_regions[0], exec_queue[1],
+			    0, 0);
+
+	setup_sync(&sync, &sync_addr, BIND_SYNC_VAL);
+	xe_multigpu_prefetch(gpu1->fd, vm[1], addr, SZ_4K, &sync,
+			     sync_addr, exec_queue[1], prefetch_req);
+
+	sync_addr = (void *)((char *)batch_addr + SZ_4K);
+	sync.addr = to_user_pointer((uint64_t *)sync_addr);
+	sync.timeline_value = EXEC_SYNC_VAL;
+	*sync_addr = 0;
+
+	/* Execute ATOMIC_INC on GPU1 */
+	xe_exec_sync(gpu1->fd, exec_queue[1], batch_addr, &sync, 1);
+	if (*sync_addr != EXEC_SYNC_VAL)
+		xe_wait_ufence(gpu1->fd, (uint64_t *)sync_addr, EXEC_SYNC_VAL, exec_queue[1],
+			       NSEC_PER_SEC * 10);
+
+	igt_assert_eq(*shared_val, ATOMIC_OP_VAL + 1);
+
+	munmap((void *)batch_addr, BATCH_SIZE(gpu0->fd));
+	batch_fini(gpu0->fd, vm[0], batch_bo, batch_addr);
+	batch_fini(gpu1->fd, vm[1], batch_bo, batch_addr);
+	free(data);
+
+	cleanup_vm_and_queue(gpu0, vm[0], exec_queue[0]);
+	cleanup_vm_and_queue(gpu1, vm[1], exec_queue[1]);
+}
+
+static void
+gpu_atomic_inc_wrapper(struct xe_svm_gpu_info *src,
+		       struct xe_svm_gpu_info *dst,
+		       struct drm_xe_engine_class_instance *eci,
+		       void *extra_args)
+{
+	struct multigpu_ops_args *args = (struct multigpu_ops_args *)extra_args;
+	igt_assert(src);
+	igt_assert(dst);
+
+	atomic_inc_op(src, dst, eci, args->prefetch_req);
+}
+
 igt_main
 {
 	struct xe_svm_gpu_info gpus[MAX_XE_GPUS];
@@ -364,6 +511,14 @@ igt_main
 		for_each_gpu_pair(gpu_cnt, gpus, &eci, gpu_mem_access_wrapper, &op_args);
 	}
 
+	igt_subtest("atomic-inc-gpu-op") {
+		struct multigpu_ops_args atomic_args;
+		atomic_args.prefetch_req = 1;
+		for_each_gpu_pair(gpu_cnt, gpus, &eci, gpu_atomic_inc_wrapper, &atomic_args);
+		atomic_args.prefetch_req = 0;
+		for_each_gpu_pair(gpu_cnt, gpus, &eci, gpu_atomic_inc_wrapper, &atomic_args);
+	}
+
 	igt_fixture {
 		int cnt;
-- 
2.48.1
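For anyone trying this locally: IGT subtests can be invoked individually, so
(assuming the default meson layout with test binaries under build/tests/) the
new case should be runnable on a multi-GPU system with something like:

  ./build/tests/xe_multi_gpusvm --run-subtest atomic-inc-gpu-op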