From mboxrd@z Thu Jan 1 00:00:00 1970
From: Matthew Brost
To: intel-xe@lists.freedesktop.org
Subject: [PATCH v5 15/30] drm/xe/vf: Avoid indefinite blocking in preempt rebind worker for VFs supporting migration
Date: Mon, 6 Oct 2025 03:44:30 -0700
Message-Id: <20251006104445.2210624-16-matthew.brost@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20251006104445.2210624-1-matthew.brost@intel.com>
References: <20251006104445.2210624-1-matthew.brost@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
List-Id: Intel Xe graphics driver
Errors-To: intel-xe-bounces@lists.freedesktop.org
Sender: "Intel-xe"

Blocking in a work queue on a hardware action that may never occur, especially when that action depends on a software fixup scheduled on the same work queue, is a recipe for deadlock. This situation arises with the preempt rebind worker and VF post-migration recovery. To prevent potential deadlocks, avoid indefinite blocking in the preempt rebind worker for VFs that support migration.
v4:
 - Use dma_fence_wait_timeout (CI)

Signed-off-by: Matthew Brost
Reviewed-by: Tomasz Lis
---
 drivers/gpu/drm/xe/xe_vm.c | 26 +++++++++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 4e914928e0a9..faca626702b8 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -35,6 +35,7 @@
 #include "xe_pt.h"
 #include "xe_pxp.h"
 #include "xe_res_cursor.h"
+#include "xe_sriov_vf.h"
 #include "xe_svm.h"
 #include "xe_sync.h"
 #include "xe_tile.h"
@@ -111,12 +112,22 @@ static int alloc_preempt_fences(struct xe_vm *vm, struct list_head *list,
 static int wait_for_existing_preempt_fences(struct xe_vm *vm)
 {
 	struct xe_exec_queue *q;
+	bool vf_migration = IS_SRIOV_VF(vm->xe) &&
+			    xe_sriov_vf_migration_supported(vm->xe);
+	signed long wait_time = vf_migration ? HZ / 5 : MAX_SCHEDULE_TIMEOUT;
 
 	xe_vm_assert_held(vm);
 
 	list_for_each_entry(q, &vm->preempt.exec_queues, lr.link) {
 		if (q->lr.pfence) {
-			long timeout = dma_fence_wait(q->lr.pfence, false);
+			long timeout;
+
+			timeout = dma_fence_wait_timeout(q->lr.pfence, false,
+							 wait_time);
+			if (!timeout) {
+				xe_assert(vm->xe, vf_migration);
+				return -EAGAIN;
+			}
 
 			/* Only -ETIME on fence indicates VM needs to be killed */
 			if (timeout < 0 || q->lr.pfence->error == -ETIME)
@@ -541,6 +552,19 @@ static void preempt_rebind_work_func(struct work_struct *w)
 out_unlock_outer:
 	if (err == -EAGAIN) {
 		trace_xe_vm_rebind_worker_retry(vm);
+
+		/*
+		 * We can't block in workers on a VF which supports migration
+		 * given this can block the VF post-migration workers from
+		 * getting scheduled.
+		 */
+		if (IS_SRIOV_VF(vm->xe) &&
+		    xe_sriov_vf_migration_supported(vm->xe)) {
+			up_write(&vm->lock);
+			xe_vm_queue_rebind_worker(vm);
+			return;
+		}
+
 		goto retry;
 	}
-- 
2.34.1