From: Matthew Brost <matthew.brost@intel.com>
To: intel-xe@lists.freedesktop.org
Subject: [PATCH v6 24/30] drm/xe/vf: Add debug prints for GuC replaying state during VF recovery
Date: Mon, 6 Oct 2025 04:10:32 -0700
Message-Id: <20251006111038.2234860-25-matthew.brost@intel.com>
In-Reply-To: <20251006111038.2234860-1-matthew.brost@intel.com>
References: <20251006111038.2234860-1-matthew.brost@intel.com>
List-Id: Intel Xe graphics driver

Add debug prints that are helpful for manually verifying that the GuC
state machine correctly replays state during a VF post-migration
recovery. All replay paths have been manually verified as triggered and
working during testing.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Tomasz Lis
---
 drivers/gpu/drm/xe/xe_guc_submit.c | 23 ++++++++++++++++++++---
 1 file changed, 20 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index 48d5133e76a6..b33a3dd883d7 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -2026,21 +2026,27 @@ void xe_guc_submit_stop(struct xe_guc *guc)
 }
 
-static void guc_exec_queue_revert_pending_state_change(struct xe_exec_queue *q)
+static void guc_exec_queue_revert_pending_state_change(struct xe_guc *guc,
+						       struct xe_exec_queue *q)
 {
 	bool pending_enable, pending_disable, pending_resume;
 
 	pending_enable = exec_queue_pending_enable(q);
 	pending_resume = exec_queue_pending_resume(q);
-	if (pending_enable && pending_resume)
+	if (pending_enable && pending_resume) {
 		q->guc->needs_resume = true;
+		xe_gt_dbg(guc_to_gt(guc), "Replay RESUME - guc_id=%d",
+			  q->guc->id);
+	}
 
 	if (pending_enable && !pending_resume &&
 	    !exec_queue_pending_tdr_exit(q)) {
 		clear_exec_queue_registered(q);
 		if (xe_exec_queue_is_lr(q))
 			xe_exec_queue_put(q);
+		xe_gt_dbg(guc_to_gt(guc), "Replay REGISTER - guc_id=%d",
+			  q->guc->id);
 	}
 
 	if (pending_enable) {
@@ -2048,6 +2054,8 @@ static void guc_exec_queue_revert_pending_state_change(struct xe_exec_queue *q)
 		clear_exec_queue_pending_resume(q);
 		clear_exec_queue_pending_tdr_exit(q);
 		clear_exec_queue_pending_enable(q);
+		xe_gt_dbg(guc_to_gt(guc), "Replay ENABLE - guc_id=%d",
+			  q->guc->id);
 	}
 
 	if (exec_queue_destroyed(q) && exec_queue_registered(q)) {
@@ -2057,6 +2065,8 @@ static void guc_exec_queue_revert_pending_state_change(struct xe_exec_queue *q)
 		else
 			q->guc->needs_cleanup = true;
 		clear_exec_queue_extra_ref(q);
+		xe_gt_dbg(guc_to_gt(guc), "Replay CLEANUP - guc_id=%d",
+			  q->guc->id);
 	}
 
 	pending_disable = exec_queue_pending_disable(q);
@@ -2064,6 +2074,8 @@ static void guc_exec_queue_revert_pending_state_change(struct xe_exec_queue *q)
 	if (pending_disable && exec_queue_suspended(q)) {
 		clear_exec_queue_suspended(q);
 		q->guc->needs_suspend = true;
+		xe_gt_dbg(guc_to_gt(guc), "Replay SUSPEND - guc_id=%d",
+			  q->guc->id);
 	}
 
 	if (pending_disable) {
@@ -2071,6 +2083,8 @@ static void guc_exec_queue_revert_pending_state_change(struct xe_exec_queue *q)
 		set_exec_queue_enabled(q);
 		clear_exec_queue_pending_disable(q);
 		clear_exec_queue_check_timeout(q);
+		xe_gt_dbg(guc_to_gt(guc), "Replay DISABLE - guc_id=%d",
+			  q->guc->id);
 	}
 
 	q->guc->resume_time = 0;
@@ -2096,7 +2110,7 @@ static void guc_exec_queue_pause(struct xe_guc *guc, struct xe_exec_queue *q)
 	else
 		cancel_delayed_work_sync(&sched->base.work_tdr);
 
-	guc_exec_queue_revert_pending_state_change(q);
+	guc_exec_queue_revert_pending_state_change(guc, q);
 
 	if (xe_exec_queue_is_parallel(q)) {
 		struct xe_device *xe = guc_to_xe(guc);
@@ -2206,6 +2220,9 @@ static void guc_exec_queue_unpause_prepare(struct xe_guc *guc,
 	list_for_each_entry(s_job, &sched->base.pending_list, list) {
 		job = to_xe_sched_job(s_job);
 
+		xe_gt_dbg(guc_to_gt(guc), "Replay JOB - guc_id=%d, seqno=%d",
+			  q->guc->id, xe_sched_job_seqno(job));
+
 		q->ring_ops->emit_job(job);
 		job->skip_emit = true;
 	}
-- 
2.34.1
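[Editor's note, not part of the patch: the `xe_gt_dbg()` calls above emit at the DRM driver debug level, so the replay messages only appear once that level is enabled. A minimal sketch of a session for observing them, assuming `drm.debug` bit 0x2 (DRM_UT_DRIVER) routes these prints and that a VF migration is triggered separately:]

```shell
# Enable DRM driver-level debug output (DRM_UT_DRIVER = 0x2); requires root.
echo 0x2 | sudo tee /sys/module/drm/parameters/debug

# Follow the kernel log and watch for the replay prints added by this patch,
# e.g. "Replay RESUME - guc_id=3" or "Replay JOB - guc_id=3, seqno=17",
# while a VF post-migration recovery runs.
sudo dmesg -w | grep 'Replay'
```

This is a config fragment for a live kernel, not a self-contained program; the exact debug-mask plumbing for `xe_gt_dbg()` should be checked against the running kernel's drm debug configuration.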