From: Matthew Brost <matthew.brost@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: thomas.hellstrom@linux.intel.com
Subject: [PATCH v4 3/5] drm/xe: Do not wait on TLB invalidations in page fault binds
Date: Mon, 27 Oct 2025 11:27:35 -0700
Message-Id: <20251027182737.2358096-4-matthew.brost@intel.com>
In-Reply-To: <20251027182737.2358096-1-matthew.brost@intel.com>
References: <20251027182737.2358096-1-matthew.brost@intel.com>

The migrate queue is shared by all processes using a device, so it is
possible that, while servicing a page fault, another process uses the
migrate queue and triggers a TLB invalidation. In the case of page fault
binds, this TLB invalidation has nothing to do with the current bind, so
there is no need to wait on it. Teach the bind pipeline to skip waits on
TLB invalidations.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_vm.c       | 14 ++++++++++++--
 drivers/gpu/drm/xe/xe_vm_types.h |  1 +
 2 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 2f181c44b8b7..df0a44d9eb46 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -755,6 +755,7 @@ struct dma_fence *xe_vma_rebind(struct xe_vm *vm, struct xe_vma *vma, u8 tile_ma
 	xe_assert(vm->xe, xe_vm_in_fault_mode(vm));
 
 	xe_vma_ops_init(&vops, vm, NULL, NULL, 0);
+	vops.flags |= XE_VMA_OPS_FLAG_SKIP_TLB_WAIT;
 	for_each_tile(tile, vm->xe, id) {
 		vops.pt_update_ops[id].wait_vm_bookkeep = true;
 		vops.pt_update_ops[tile->id].q =
@@ -845,6 +846,7 @@ struct dma_fence *xe_vm_range_rebind(struct xe_vm *vm,
 	xe_assert(vm->xe, xe_vma_is_cpu_addr_mirror(vma));
 
 	xe_vma_ops_init(&vops, vm, NULL, NULL, 0);
+	vops.flags |= XE_VMA_OPS_FLAG_SKIP_TLB_WAIT;
 	for_each_tile(tile, vm->xe, id) {
 		vops.pt_update_ops[id].wait_vm_bookkeep = true;
 		vops.pt_update_ops[tile->id].q =
@@ -3111,8 +3113,13 @@ static struct dma_fence *ops_execute(struct xe_vm *vm,
 	if (number_tiles == 0)
 		return ERR_PTR(-ENODATA);
 
-	for_each_tile(tile, vm->xe, id)
-		n_fence += (1 + XE_MAX_GT_PER_TILE);
+	if (vops->flags & XE_VMA_OPS_FLAG_SKIP_TLB_WAIT) {
+		for_each_tile(tile, vm->xe, id)
+			++n_fence;
+	} else {
+		for_each_tile(tile, vm->xe, id)
+			n_fence += (1 + XE_MAX_GT_PER_TILE);
+	}
 
 	fences = kmalloc_array(n_fence, sizeof(*fences), GFP_KERNEL);
 	if (!fences) {
@@ -3153,6 +3160,9 @@ static struct dma_fence *ops_execute(struct xe_vm *vm,
 collect_fences:
 		fences[current_fence++] = fence ?: dma_fence_get_stub();
 
+		if (vops->flags & XE_VMA_OPS_FLAG_SKIP_TLB_WAIT)
+			continue;
+
 		xe_migrate_job_lock(tile->migrate, q);
 		for_each_tlb_inval(i)
 			fences[current_fence++] =
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 542dbe2f9310..3766dc37b3ad 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -466,6 +466,7 @@ struct xe_vma_ops {
 #define XE_VMA_OPS_FLAG_HAS_SVM_PREFETCH	BIT(0)
 #define XE_VMA_OPS_FLAG_MADVISE			BIT(1)
 #define XE_VMA_OPS_ARRAY_OF_BINDS		BIT(2)
+#define XE_VMA_OPS_FLAG_SKIP_TLB_WAIT		BIT(3)
 	u32 flags;
 #ifdef TEST_VM_OPS_ERROR
 	/** @inject_error: inject error to test error handling */
-- 
2.34.1