From: Arvind Yadav
To: igt-dev@lists.freedesktop.org
Cc: matthew.brost@intel.com, himal.prasad.ghimiray@intel.com,
	thomas.hellstrom@linux.intel.com, nishit.sharma@intel.com,
	pravalika.gurram@intel.com
Subject: [PATCH i-g-t v4 5/8] tests/intel/xe_madvise: Add dontneed-before-exec subtest
Date: Tue, 24 Feb 2026 20:57:53 +0530
Message-ID: <20260224152804.1940820-6-arvind.yadav@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20260224152804.1940820-1-arvind.yadav@intel.com>
References: <20260224152804.1940820-1-arvind.yadav@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
List-Id: Development mailing list for IGT GPU Tools

This test validates GPU execution behavior when a data BO is purged
before submission. The test creates a batch that writes to a data BO,
purges the data BO (while keeping the batch BO valid to avoid a GPU
reset), then submits the batch for execution. With
VM_CREATE_FLAG_SCRATCH_PAGE, the GPU write may succeed by landing in
scratch memory instead of the purged BO, demonstrating graceful
handling of purged memory during GPU operations.

v4:
- Added proper resource cleanup before calling igt_skip(). (Nishit)

Cc: Nishit Sharma
Cc: Matthew Brost
Cc: Thomas Hellström
Cc: Himal Prasad Ghimiray
Reviewed-by: Pravalika Gurram
Signed-off-by: Arvind Yadav
---
 tests/intel/xe_madvise.c | 148 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 148 insertions(+)

diff --git a/tests/intel/xe_madvise.c b/tests/intel/xe_madvise.c
index 2e5f6157f..689682fab 100644
--- a/tests/intel/xe_madvise.c
+++ b/tests/intel/xe_madvise.c
@@ -19,7 +19,11 @@
 
 /* Purgeable test constants */
 #define PURGEABLE_ADDR 0x1a0000
+#define PURGEABLE_BATCH_ADDR 0x3c0000
 #define PURGEABLE_BO_SIZE 4096
+#define PURGEABLE_FENCE_VAL 0xbeef
+#define PURGEABLE_TEST_PATTERN 0xc0ffee
+#define PURGEABLE_DEAD_PATTERN 0xdead
 
 /**
  * trigger_memory_pressure - Fill VRAM + 25% to force purgeable reclaim
@@ -144,6 +148,62 @@ static void purgeable_setup_simple_bo(int fd, uint32_t *vm, uint32_t *bo,
 	xe_wait_ufence(fd, &sync_val, 1, 0, NSEC_PER_SEC);
 }
 
+/**
+ * purgeable_setup_batch_and_data - Setup VM with batch and data BOs for GPU exec
+ * @fd: DRM file descriptor
+ * @vm: Output VM handle
+ * @bind_engine: Output bind engine handle
+ * @batch_bo: Output batch BO handle
+ * @data_bo: Output data BO handle
+ * @batch: Output batch buffer pointer
+ * @data: Output data buffer pointer
+ * @batch_addr: Batch virtual address
+ * @data_addr: Data virtual address
+ * @batch_size: Batch buffer size
+ * @data_size: Data buffer size
+ *
+ * Helper to create VM, bind engine, batch and data BOs, and bind them.
+ */
+static void purgeable_setup_batch_and_data(int fd, uint32_t *vm,
+					   uint32_t *bind_engine,
+					   uint32_t *batch_bo,
+					   uint32_t *data_bo,
+					   uint32_t **batch,
+					   uint32_t **data,
+					   uint64_t batch_addr,
+					   uint64_t data_addr,
+					   size_t batch_size,
+					   size_t data_size)
+{
+	struct drm_xe_sync sync = {
+		.type = DRM_XE_SYNC_TYPE_USER_FENCE,
+		.flags = DRM_XE_SYNC_FLAG_SIGNAL,
+		.timeline_value = PURGEABLE_FENCE_VAL,
+	};
+	uint64_t vm_sync = 0;
+
+	*vm = xe_vm_create(fd, DRM_XE_VM_CREATE_FLAG_SCRATCH_PAGE, 0);
+	*bind_engine = xe_bind_exec_queue_create(fd, *vm, 0);
+
+	/* Create and bind batch BO */
+	*batch_bo = xe_bo_create(fd, *vm, batch_size, vram_if_possible(fd, 0),
+				 DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
+	*batch = xe_bo_map(fd, *batch_bo, batch_size);
+
+	sync.addr = to_user_pointer(&vm_sync);
+	xe_vm_bind_async(fd, *vm, *bind_engine, *batch_bo, 0, batch_addr, batch_size, &sync, 1);
+	xe_wait_ufence(fd, &vm_sync, PURGEABLE_FENCE_VAL, 0, NSEC_PER_SEC);
+
+	/* Create and bind data BO */
+	*data_bo = xe_bo_create(fd, *vm, data_size, vram_if_possible(fd, 0),
+				 DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM);
+	*data = xe_bo_map(fd, *data_bo, data_size);
+
+	vm_sync = 0;
+	xe_vm_bind_async(fd, *vm, *bind_engine, *data_bo, 0, data_addr, data_size, &sync, 1);
+	xe_wait_ufence(fd, &vm_sync, PURGEABLE_FENCE_VAL, 0, NSEC_PER_SEC);
+}
+
 /**
  * SUBTEST: dontneed-before-mmap
  * Description: Mark BO as DONTNEED before mmap, verify mmap fails or SIGBUS on access
@@ -257,6 +317,88 @@ static void test_dontneed_after_mmap(int fd, struct drm_xe_engine_class_instance
 	xe_vm_destroy(fd, vm);
 }
 
+/**
+ * SUBTEST: dontneed-before-exec
+ * Description: Mark BO as DONTNEED before GPU exec, verify GPU behavior with SCRATCH_PAGE
+ * Test category: functionality test
+ */
+static void test_dontneed_before_exec(int fd, struct drm_xe_engine_class_instance *hwe)
+{
+	uint32_t vm, exec_queue, bo, batch_bo, bind_engine;
+	uint64_t data_addr = PURGEABLE_ADDR;
+	uint64_t batch_addr = PURGEABLE_BATCH_ADDR;
+	size_t data_size = PURGEABLE_BO_SIZE;
+	size_t batch_size = PURGEABLE_BO_SIZE;
+	struct drm_xe_sync sync[1] = {
+		{ .type = DRM_XE_SYNC_TYPE_USER_FENCE,
+		  .flags = DRM_XE_SYNC_FLAG_SIGNAL,
+		  .timeline_value = PURGEABLE_FENCE_VAL },
+	};
+	struct drm_xe_exec exec = {
+		.num_batch_buffer = 1,
+		.num_syncs = 1,
+		.syncs = to_user_pointer(sync),
+	};
+	uint32_t *data, *batch;
+	uint64_t vm_sync = 0;
+	int b, ret;
+
+	purgeable_setup_batch_and_data(fd, &vm, &bind_engine, &batch_bo,
+				       &bo, &batch, &data, batch_addr,
+				       data_addr, batch_size, data_size);
+
+	/* Prepare batch */
+	b = 0;
+	batch[b++] = MI_STORE_DWORD_IMM_GEN4;
+	batch[b++] = data_addr;
+	batch[b++] = data_addr >> 32;
+	batch[b++] = PURGEABLE_DEAD_PATTERN;
+	batch[b++] = MI_BATCH_BUFFER_END;
+
+	/* Phase 1: Purge data BO, batch BO still valid */
+	if (!purgeable_mark_and_verify_purged(fd, vm, data_addr, data_size)) {
+		munmap(data, data_size);
+		munmap(batch, batch_size);
+		gem_close(fd, bo);
+		gem_close(fd, batch_bo);
+		xe_exec_queue_destroy(fd, bind_engine);
+		xe_vm_destroy(fd, vm);
+		igt_skip("Unable to induce purge on this platform/config");
+	}
+
+	exec_queue = xe_exec_queue_create(fd, vm, hwe, 0);
+	exec.exec_queue_id = exec_queue;
+	exec.address = batch_addr;
+
+	vm_sync = 0;
+	sync[0].addr = to_user_pointer(&vm_sync);
+
+	/*
+	 * VM has SCRATCH_PAGE — exec may succeed with the GPU write
+	 * landing on scratch instead of the purged data BO.
+	 */
+	ret = __xe_exec(fd, &exec);
+	if (ret == 0) {
+		int64_t timeout = NSEC_PER_SEC;
+
+		__xe_wait_ufence(fd, &vm_sync, PURGEABLE_FENCE_VAL,
+				 exec_queue, &timeout);
+	}
+
+	/*
+	 * Don't purge the batch BO — GPU would fetch zeroed scratch
+	 * instructions and trigger an engine reset.
+	 */
+
+	munmap(data, data_size);
+	munmap(batch, batch_size);
+	gem_close(fd, bo);
+	gem_close(fd, batch_bo);
+	xe_exec_queue_destroy(fd, bind_engine);
+	xe_exec_queue_destroy(fd, exec_queue);
+	xe_vm_destroy(fd, vm);
+}
+
 int igt_main()
 {
 	struct drm_xe_engine_class_instance *hwe;
@@ -279,6 +421,12 @@ int igt_main()
 			break;
 		}
 
+	igt_subtest("dontneed-before-exec")
+		xe_for_each_engine(fd, hwe) {
+			test_dontneed_before_exec(fd, hwe);
+			break;
+		}
+
 	igt_fixture() {
 		xe_device_put(fd);
 		drm_close_driver(fd);
-- 
2.43.0