From mboxrd@z Thu Jan 1 00:00:00 1970
From: Matthew Brost
To: intel-xe@lists.freedesktop.org
Subject: [PATCH v3 20/36] drm/xe/vf: Avoid indefinite blocking in preempt rebind worker for VFs supporting migration
Date: Sun, 28 Sep 2025 19:55:26 -0700
Message-Id: <20250929025542.1486303-21-matthew.brost@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20250929025542.1486303-1-matthew.brost@intel.com>
References: <20250929025542.1486303-1-matthew.brost@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
List-Id: Intel Xe graphics driver

Blocking in a work queue on a hardware action that may never occur, especially when that action depends on a software fixup scheduled on the same work queue, is a recipe for deadlock. This situation arises with the preempt rebind worker and VF post-migration recovery. To prevent potential deadlocks, avoid indefinite blocking in the preempt rebind worker for VFs that support migration.
Signed-off-by: Matthew Brost
---
 drivers/gpu/drm/xe/xe_vm.c | 29 ++++++++++++++++++++++++++++-
 1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 80b7f13ecd80..b527ee2a5da5 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -35,6 +35,7 @@
 #include "xe_pt.h"
 #include "xe_pxp.h"
 #include "xe_res_cursor.h"
+#include "xe_sriov_vf.h"
 #include "xe_svm.h"
 #include "xe_sync.h"
 #include "xe_tile.h"
@@ -111,12 +112,25 @@ static int alloc_preempt_fences(struct xe_vm *vm, struct list_head *list,
 static int wait_for_existing_preempt_fences(struct xe_vm *vm)
 {
 	struct xe_exec_queue *q;
+	bool vf_migration = IS_SRIOV_VF(vm->xe) &&
+		xe_sriov_vf_migration_supported(vm->xe);
 
 	xe_vm_assert_held(vm);
 
 	list_for_each_entry(q, &vm->preempt.exec_queues, lr.link) {
 		if (q->lr.pfence) {
-			long timeout = dma_fence_wait(q->lr.pfence, false);
+			long timeout;
+
+			if (vf_migration)
+				timeout = dma_fence_wait_timeout(q->lr.pfence,
+								 false, HZ / 5);
+			else
+				timeout = dma_fence_wait(q->lr.pfence, false);
+
+			if (!timeout) {
+				xe_assert(vm->xe, vf_migration);
+				return -EAGAIN;
+			}
 
 			/* Only -ETIME on fence indicates VM needs to be killed */
 			if (timeout < 0 || q->lr.pfence->error == -ETIME)
@@ -541,6 +555,19 @@ static void preempt_rebind_work_func(struct work_struct *w)
 out_unlock_outer:
 	if (err == -EAGAIN) {
 		trace_xe_vm_rebind_worker_retry(vm);
+
+		/*
+		 * We can't block in workers on a VF which supports migration
+		 * given this can block the VF post-migration workers from
+		 * getting scheduled.
+		 */
+		if (IS_SRIOV_VF(vm->xe) &&
+		    xe_sriov_vf_migration_supported(vm->xe)) {
+			up_write(&vm->lock);
+			xe_vm_queue_rebind_worker(vm);
+			return;
+		}
+
 		goto retry;
 	}
-- 
2.34.1