From: Arvind Yadav
To: intel-xe@lists.freedesktop.org
Cc: matthew.brost@intel.com, himal.prasad.ghimiray@intel.com, thomas.hellstrom@linux.intel.com, pallavi.mishra@intel.com
Subject: [PATCH v4 5/8] drm/xe/vm: Prevent binding of purged buffer objects
Date: Tue, 20 Jan 2026 11:38:51 +0530
Message-ID: <20260120060900.3137984-6-arvind.yadav@intel.com>
In-Reply-To: <20260120060900.3137984-1-arvind.yadav@intel.com>
References: <20260120060900.3137984-1-arvind.yadav@intel.com>
List-Id: Intel Xe graphics driver <intel-xe.lists.freedesktop.org>

Add a check_purged parameter to vma_lock_and_validate() to block new
mapping operations on purged BOs while allowing cleanup operations to
proceed.

Purged BOs have had their backing pages freed by the kernel. New mapping
operations (MAP, PREFETCH, REMAP) must be rejected with -EINVAL to
prevent GPU access to invalid memory. Cleanup operations (UNMAP) must be
allowed so applications can release resources after detecting the purge
via the retained field.

REMAP operations require mixed handling: reject the new prev/next VMAs
if the BO is purged, but allow the unmap portion to proceed for cleanup.
The check_purged parameter distinguishes between these cases: true for
new mappings (must reject), false for cleanup (allow).
v2:
- Clarify that purged BOs are permanently invalid (i915 semantics)
- Remove incorrect claim about madvise(WILLNEED) restoring purged BOs

v3:
- Move xe_bo_is_purged check under vma_lock_and_validate (Matthew Brost)
- Add check_purged parameter to distinguish new mappings from cleanup
- Allow UNMAP operations to prevent resource leaks
- Handle REMAP operation's dual nature (cleanup + new mappings)

Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Signed-off-by: Arvind Yadav
---
 drivers/gpu/drm/xe/xe_vm.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index c3a5fe76ff96..f250daae3012 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -2883,7 +2883,7 @@ static void vm_bind_ioctl_ops_unwind(struct xe_vm *vm,
 }
 
 static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
-				 bool res_evict, bool validate)
+				 bool res_evict, bool validate, bool check_purged)
 {
 	struct xe_bo *bo = xe_vma_bo(vma);
 	struct xe_vm *vm = xe_vma_vm(vma);
@@ -2892,6 +2892,11 @@ static int vma_lock_and_validate(struct drm_exec *exec, struct xe_vma *vma,
 	if (bo) {
 		if (!bo->vm)
 			err = drm_exec_lock_obj(exec, &bo->ttm.base);
+
+		/* Reject new mappings to purged BOs; allow cleanup operations */
+		if (!err && check_purged && xe_bo_is_purged(bo))
+			err = -EINVAL;
+
 		if (!err && validate)
 			err = xe_bo_validate(bo, vm,
 					     !xe_vm_in_preempt_fence_mode(vm) &&
@@ -2990,7 +2995,8 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
 		err = vma_lock_and_validate(exec, op->map.vma,
 					    res_evict,
 					    !xe_vm_in_fault_mode(vm) ||
-					    op->map.immediate);
+					    op->map.immediate,
+					    true);
 		break;
 	case DRM_GPUVA_OP_REMAP:
 		err = check_ufence(gpuva_to_vma(op->base.remap.unmap->va));
@@ -2999,13 +3005,13 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
 			err = vma_lock_and_validate(exec,
 						    gpuva_to_vma(op->base.remap.unmap->va),
-						    res_evict, false);
+						    res_evict, false, false);
 		if (!err && op->remap.prev)
 			err = vma_lock_and_validate(exec, op->remap.prev,
-						    res_evict, true);
+						    res_evict, true, true);
 		if (!err && op->remap.next)
 			err = vma_lock_and_validate(exec, op->remap.next,
-						    res_evict, true);
+						    res_evict, true, true);
 		break;
 	case DRM_GPUVA_OP_UNMAP:
 		err = check_ufence(gpuva_to_vma(op->base.unmap.va));
@@ -3014,7 +3020,7 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
 			err = vma_lock_and_validate(exec,
 						    gpuva_to_vma(op->base.unmap.va),
-						    res_evict, false);
+						    res_evict, false, false);
 		break;
 	case DRM_GPUVA_OP_PREFETCH: {
@@ -3029,7 +3035,7 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
 			err = vma_lock_and_validate(exec,
 						    gpuva_to_vma(op->base.prefetch.va),
-						    res_evict, false);
+						    res_evict, false, true);
 		if (!err && !xe_vma_has_no_bo(vma))
 			err = xe_bo_migrate(xe_vma_bo(vma),
 					    region_to_mem_type[region],
-- 
2.43.0