Intel-XE Archive on lore.kernel.org
From: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	Lucas De Marchi <lucas.demarchi@intel.com>
Subject: [PATCH v3 08/23] drm/xe/devcoredump: Update handling of xe_force_wake_get return
Date: Tue, 17 Sep 2024 17:51:11 +0530
Message-ID: <20240917122126.438448-9-himal.prasad.ghimiray@intel.com>
In-Reply-To: <20240917122126.438448-1-himal.prasad.ghimiray@intel.com>

With xe_force_wake_get() now returning the refcount-incremented domain
mask, a non-zero return value in the case of XE_FORCEWAKE_ALL does not
necessarily indicate success. Compare the return value with
XE_FORCEWAKE_ALL to determine the status of the call.

Modify the return handling of xe_force_wake_get() accordingly and pass
the return value to xe_force_wake_put().

v3:
- return xe_wakeref_t instead of int in xe_force_wake_get()

Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
---
 drivers/gpu/drm/xe/xe_devcoredump.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_devcoredump.c b/drivers/gpu/drm/xe/xe_devcoredump.c
index bdb76e834e4c..d1881f95daa4 100644
--- a/drivers/gpu/drm/xe/xe_devcoredump.c
+++ b/drivers/gpu/drm/xe/xe_devcoredump.c
@@ -141,13 +141,15 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
 {
 	struct xe_devcoredump_snapshot *ss = container_of(work, typeof(*ss), work);
 	struct xe_devcoredump *coredump = container_of(ss, typeof(*coredump), snapshot);
+	xe_wakeref_t fw_ref;
 
 	/* keep going if fw fails as we still want to save the memory and SW data */
-	if (xe_force_wake_get(gt_to_fw(ss->gt), XE_FORCEWAKE_ALL))
+	fw_ref = xe_force_wake_get(gt_to_fw(ss->gt), XE_FORCEWAKE_ALL);
+	if (fw_ref != XE_FORCEWAKE_ALL)
 		xe_gt_info(ss->gt, "failed to get forcewake for coredump capture\n");
 	xe_vm_snapshot_capture_delayed(ss->vm);
 	xe_guc_exec_queue_snapshot_capture_delayed(ss->ge);
-	xe_force_wake_put(gt_to_fw(ss->gt), XE_FORCEWAKE_ALL);
+	xe_force_wake_put(gt_to_fw(ss->gt), fw_ref);
 
 	/* Calculate devcoredump size */
 	ss->read.size = __xe_devcoredump_read(NULL, INT_MAX, coredump);
@@ -220,8 +222,9 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
 	u32 width_mask = (0x1 << q->width) - 1;
 	const char *process_name = "no process";
 
-	int i;
+	xe_wakeref_t fw_ref;
 	bool cookie;
+	int i;
 
 	ss->snapshot_time = ktime_get_real();
 	ss->boot_time = ktime_get_boottime();
@@ -244,7 +247,8 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
 	}
 
 	/* keep going if fw fails as we still want to save the memory and SW data */
-	if (xe_force_wake_get(gt_to_fw(q->gt), XE_FORCEWAKE_ALL))
+	fw_ref = xe_force_wake_get(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
+	if (fw_ref != XE_FORCEWAKE_ALL)
 		xe_gt_info(ss->gt, "failed to get forcewake for coredump capture\n");
 
 	coredump->snapshot.ct = xe_guc_ct_snapshot_capture(&guc->ct, true);
@@ -263,7 +267,7 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
 
 	queue_work(system_unbound_wq, &ss->work);
 
-	xe_force_wake_put(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
+	xe_force_wake_put(gt_to_fw(q->gt), fw_ref);
 	dma_fence_end_signalling(cookie);
 }
 
-- 
2.34.1



Thread overview: 33+ messages
2024-09-17 12:21 [PATCH v3 00/23] Fix xe_force_wake_get() failure handling Himal Prasad Ghimiray
2024-09-17 12:21 ` [PATCH v3 01/23] drm/xe: Error handling in xe_force_wake_get() Himal Prasad Ghimiray
2024-09-17 12:21 ` [PATCH v3 02/23] drm/xe: Modify xe_force_wake_put to handle _get returned mask Himal Prasad Ghimiray
2024-09-17 12:21 ` [PATCH v3 03/23] drm/xe/device: Update handling of xe_force_wake_get return Himal Prasad Ghimiray
2024-09-17 12:21 ` [PATCH v3 04/23] drm/xe/hdcp: " Himal Prasad Ghimiray
2024-09-17 12:21 ` [PATCH v3 05/23] drm/xe/gsc: " Himal Prasad Ghimiray
2024-09-17 12:21 ` [PATCH v3 06/23] drm/xe/gt: " Himal Prasad Ghimiray
2024-09-17 12:21 ` [PATCH v3 07/23] drm/xe/xe_gt_idle: " Himal Prasad Ghimiray
2024-09-17 12:21 ` Himal Prasad Ghimiray [this message]
2024-09-17 12:21 ` [PATCH v3 09/23] drm/xe/tests/mocs: Update xe_force_wake_get() return handling Himal Prasad Ghimiray
2024-09-17 12:21 ` [PATCH v3 10/23] drm/xe/mocs: Update handling of xe_force_wake_get return Himal Prasad Ghimiray
2024-09-17 12:21 ` [PATCH v3 11/23] drm/xe/xe_drm_client: " Himal Prasad Ghimiray
2024-09-17 12:21 ` [PATCH v3 12/23] drm/xe/xe_gt_debugfs: " Himal Prasad Ghimiray
2024-09-17 12:21 ` [PATCH v3 13/23] drm/xe/guc: " Himal Prasad Ghimiray
2024-09-17 12:21 ` [PATCH v3 14/23] drm/xe/huc: " Himal Prasad Ghimiray
2024-09-17 12:21 ` [PATCH v3 15/23] drm/xe/oa: Handle force_wake_get failure in xe_oa_stream_init() Himal Prasad Ghimiray
2024-09-17 12:21 ` [PATCH v3 16/23] drm/xe/pat: Update handling of xe_force_wake_get return Himal Prasad Ghimiray
2024-09-17 12:21 ` [PATCH v3 17/23] drm/xe/gt_tlb_invalidation_ggtt: " Himal Prasad Ghimiray
2024-09-17 12:21 ` [PATCH v3 18/23] drm/xe/xe_reg_sr: " Himal Prasad Ghimiray
2024-09-17 12:21 ` [PATCH v3 19/23] drm/xe/query: " Himal Prasad Ghimiray
2024-09-17 12:21 ` [PATCH v3 20/23] drm/xe/vram: " Himal Prasad Ghimiray
2024-09-17 12:21 ` [PATCH v3 21/23] drm/xe: forcewake debugfs open fails on xe_forcewake_get failure Himal Prasad Ghimiray
2024-09-17 12:21 ` [PATCH v3 22/23] drm/xe: Ensure __must_check for xe_force_wake_get() return Himal Prasad Ghimiray
2024-09-17 12:21 ` [PATCH v3 23/23] drm/xe: Change return type to void for xe_force_wake_put Himal Prasad Ghimiray
2024-09-17 13:18 ` ✓ CI.Patch_applied: success for Fix xe_force_wake_get() failure handling (rev3) Patchwork
2024-09-17 13:18 ` ✗ CI.checkpatch: warning " Patchwork
2024-09-17 13:20 ` ✓ CI.KUnit: success " Patchwork
2024-09-17 13:35 ` ✓ CI.Build: " Patchwork
2024-09-17 13:37 ` ✓ CI.Hooks: " Patchwork
2024-09-17 13:39 ` ✓ CI.checksparse: " Patchwork
2024-09-17 14:05 ` ✗ CI.BAT: failure " Patchwork
2024-09-17 17:53   ` Ghimiray, Himal Prasad
2024-09-17 16:55 ` ✗ CI.FULL: " Patchwork
