dri-devel.lists.freedesktop.org archive mirror
* [PATCH-resent-to-correct-ml 0/8] drm/xe: Convert xe_force_wake calls to guard helpers.
@ 2025-02-04 13:22 Maarten Lankhorst
  2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 1/8] header/cleanup.h: Add _init_args to DEFINE_LOCK_GUARD_1(_COND) Maarten Lankhorst
                   ` (8 more replies)
  0 siblings, 9 replies; 17+ messages in thread
From: Maarten Lankhorst @ 2025-02-04 13:22 UTC (permalink / raw)
  To: intel-xe
  Cc: dri-devel, linux-kernel, Maarten Lankhorst, Ingo Molnar,
	David Lechner, Peter Zijlstra, Will Deacon, Waiman Long,
	Boqun Feng

Please ignore my previous series; it should have been sent to intel-xe but was sent to intel-gfx instead.

Instead of all this repetition of

{
	unsigned fw_ref;

	fw_ref = xe_force_wake_get(fw, domain);
	if (!xe_force_wake_ref_has_domain(..))
		return -ETIMEDOUT;

	...

out:
	xe_force_wake_put(fw, fw_ref);
	return ret;
}

I thought I would look at how to replace it with the guard helpers.
It is easy, but it required some minor fixes to make DEFINE_LOCK_GUARD_1
work with extra init arguments.

So I changed the macro signature slightly: the first optional argument remains
a struct member (the current behavior), and any additional arguments go to the
init call.

This replaces the previous code with
{
	scoped_cond_guard(xe_force_wake_get, return -ETIMEDOUT, fw, domain) {
		....

		return ret;
	}
}

I've also thought of playing with this:
{
	CLASS(xe_force_wake_get, fw_ref)(fw, domain);
	if (!fw_ref.lock)
		return -ETIMEDOUT;

	...
	return ret;
}

My feeling is that scoped_cond_guard makes it impossible to get this
wrong, while in the second example it's not clear that acquisition can fail,
and that you have to check.

Let me know what you think!
Feedback welcome on the header change as well; it should probably go through the locking tree.

Cc: Ingo Molnar <mingo@kernel.org>
Cc: David Lechner <dlechner@baylibre.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Boqun Feng <boqun.feng@gmail.com>

Maarten Lankhorst (8):
  header/cleanup.h: Add _init_args to DEFINE_LOCK_GUARD_1(_COND)
  drm/xe/gt: Unify xe_hw_fence_irq_finish() calls.
  drm/xe: Add scoped guards for xe_force_wake
  drm/xe: Add xe_force_wake_get_all
  drm/xe/coredump: Use guard helpers for xe_force_wake.
  drm/xe/gsc: Use guard helper for xe_gsc_print_info.
  drm/xe/vram: Use xe_force_wake guard helper
  drm/xe/gt: Convert to xe_force_wake guard helpers

 drivers/gpu/drm/xe/xe_devcoredump.c |  36 ++---
 drivers/gpu/drm/xe/xe_force_wake.c  | 161 ++++++++++++++----
 drivers/gpu/drm/xe/xe_force_wake.h  |  17 ++
 drivers/gpu/drm/xe/xe_gsc.c         |  22 +--
 drivers/gpu/drm/xe/xe_gt.c          | 243 ++++++++++------------------
 drivers/gpu/drm/xe/xe_vram.c        |  45 +++---
 include/linux/cleanup.h             |  30 ++--
 7 files changed, 293 insertions(+), 261 deletions(-)

-- 
2.47.1


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH-resent-to-correct-ml 1/8] header/cleanup.h: Add _init_args to DEFINE_LOCK_GUARD_1(_COND)
  2025-02-04 13:22 [PATCH-resent-to-correct-ml 0/8] drm/xe: Convert xe_force_wake calls to guard helpers Maarten Lankhorst
@ 2025-02-04 13:22 ` Maarten Lankhorst
  2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 2/8] drm/xe/gt: Unify xe_hw_fence_irq_finish() calls Maarten Lankhorst
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 17+ messages in thread
From: Maarten Lankhorst @ 2025-02-04 13:22 UTC (permalink / raw)
  To: intel-xe
  Cc: dri-devel, linux-kernel, Maarten Lankhorst, Ingo Molnar,
	David Lechner, Peter Zijlstra, Will Deacon, Waiman Long,
	Boqun Feng

This makes it possible to use the lock guard helpers for locks that need
extra arguments.

I've been attempting to add a guard to xe_force_wake handling, but that
required an extra argument specifying the domain. For nested spinlock
handling, it could also be beneficial to be able to do something like
this.

For example:
DEFINE_LOCK_GUARD_1_COND(spinlock_irqsave, _nested,
			 spin_lock_irqsave_nested(_T->lock, _T->flags, nest),
			 unsigned nest);

guard(spinlock_irqsave_nested, &lock, SINGLE_DEPTH_NESTING);

The first optional argument in DEFINE_LOCK_GUARD_1 is now used for the struct
members; the remainder goes to _init_args, keeping the same usage in the base case.

Abusing the preprocessor to add an extra meaning to the first optional
argument is done by creating a __DO_DEFINE_LOCK_GUARD_1 and passing it
__VA_ARGS__ rather than ##__VA_ARGS__, which ensures _struct_members is
empty when not passed explicitly.

Cc: Ingo Molnar <mingo@kernel.org>
Cc: David Lechner <dlechner@baylibre.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
---
 include/linux/cleanup.h | 30 +++++++++++++++++++-----------
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/include/linux/cleanup.h b/include/linux/cleanup.h
index ec00e3f7af2b3..dbaf02447f206 100644
--- a/include/linux/cleanup.h
+++ b/include/linux/cleanup.h
@@ -349,19 +349,23 @@ _label:									\
  * locks that don't have a native type (eg. RCU, preempt) or those that need a
  * 'fat' pointer (eg. spin_lock_irqsave).
  *
- * DEFINE_LOCK_GUARD_0(name, lock, unlock, ...)
- * DEFINE_LOCK_GUARD_1(name, type, lock, unlock, ...)
- * DEFINE_LOCK_GUARD_1_COND(name, ext, condlock)
+ * DEFINE_LOCK_GUARD_0(name, lock, unlock, _lock_members...)
+ * DEFINE_LOCK_GUARD_1(name, type, lock, unlock, (opt)_lock_members, _init_args...)
+ * DEFINE_LOCK_GUARD_1_COND(name, ext, condlock, _init_args...)
  *
  * will result in the following type:
  *
  *   typedef struct {
  *	type *lock;		// 'type := void' for the _0 variant
- *	__VA_ARGS__;
+ *	_lock_members;		// use ; as separator to add multiple members
  *   } class_##name##_t;
  *
  * As above, both _lock and _unlock are statements, except this time '_T' will
  * be a pointer to the above struct.
+ *
+ * For DEFINE_LOCK_GUARD_1 and DEFINE_LOCK_GUARD_1_COND, it adds all
+ * _init_args as local variables available to the lock statement.
+ * They need to be passed to all guard() functions as extra argument.
  */
 
 #define __DEFINE_UNLOCK_GUARD(_name, _type, _unlock, ...)		\
@@ -381,8 +385,8 @@ static inline void *class_##_name##_lock_ptr(class_##_name##_t *_T)	\
 }
 
 
-#define __DEFINE_LOCK_GUARD_1(_name, _type, _lock)			\
-static inline class_##_name##_t class_##_name##_constructor(_type *l)	\
+#define __DEFINE_LOCK_GUARD_1(_name, _type, _lock, ...)			\
+static inline class_##_name##_t class_##_name##_constructor(_type *l, ##__VA_ARGS__)	\
 {									\
 	class_##_name##_t _t = { .lock = l }, *_T = &_t;		\
 	_lock;								\
@@ -398,23 +402,27 @@ static inline class_##_name##_t class_##_name##_constructor(void)	\
 	return _t;							\
 }
 
-#define DEFINE_LOCK_GUARD_1(_name, _type, _lock, _unlock, ...)		\
+#define __DO_DEFINE_LOCK_GUARD_1(_name, _type, _lock, _unlock, _lock_members, _init_args...) \
 __DEFINE_CLASS_IS_CONDITIONAL(_name, false);				\
-__DEFINE_UNLOCK_GUARD(_name, _type, _unlock, __VA_ARGS__)		\
-__DEFINE_LOCK_GUARD_1(_name, _type, _lock)
+__DEFINE_UNLOCK_GUARD(_name, _type, _unlock, _lock_members)		\
+__DEFINE_LOCK_GUARD_1(_name, _type, _lock, ##_init_args)
+
+/* Call __DO_DEFINE_LOCK_GUARD_1 here because of the 2 optional arguments */
+#define DEFINE_LOCK_GUARD_1(_name, _type, _lock, _unlock, ...)		\
+	__DO_DEFINE_LOCK_GUARD_1(_name, _type, _lock, _unlock, __VA_ARGS__)
 
 #define DEFINE_LOCK_GUARD_0(_name, _lock, _unlock, ...)			\
 __DEFINE_CLASS_IS_CONDITIONAL(_name, false);				\
 __DEFINE_UNLOCK_GUARD(_name, void, _unlock, __VA_ARGS__)		\
 __DEFINE_LOCK_GUARD_0(_name, _lock)
 
-#define DEFINE_LOCK_GUARD_1_COND(_name, _ext, _condlock)		\
+#define DEFINE_LOCK_GUARD_1_COND(_name, _ext, _condlock, ...)		\
 	__DEFINE_CLASS_IS_CONDITIONAL(_name##_ext, true);		\
 	EXTEND_CLASS(_name, _ext,					\
 		     ({ class_##_name##_t _t = { .lock = l }, *_T = &_t;\
 		        if (_T->lock && !(_condlock)) _T->lock = NULL;	\
 			_t; }),						\
-		     typeof_member(class_##_name##_t, lock) l)		\
+		     typeof_member(class_##_name##_t, lock) l, ##__VA_ARGS__)		\
 	static inline void * class_##_name##_ext##_lock_ptr(class_##_name##_t *_T) \
 	{ return class_##_name##_lock_ptr(_T); }
 
-- 
2.47.1



* [PATCH-resent-to-correct-ml 2/8] drm/xe/gt: Unify xe_hw_fence_irq_finish() calls.
  2025-02-04 13:22 [PATCH-resent-to-correct-ml 0/8] drm/xe: Convert xe_force_wake calls to guard helpers Maarten Lankhorst
  2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 1/8] header/cleanup.h: Add _init_args to DEFINE_LOCK_GUARD_1(_COND) Maarten Lankhorst
@ 2025-02-04 13:22 ` Maarten Lankhorst
  2025-02-04 15:20   ` Lucas De Marchi
  2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 3/8] drm/xe: Add scoped guards for xe_force_wake Maarten Lankhorst
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 17+ messages in thread
From: Maarten Lankhorst @ 2025-02-04 13:22 UTC (permalink / raw)
  To: intel-xe
  Cc: dri-devel, linux-kernel, Maarten Lankhorst, Ingo Molnar,
	David Lechner, Peter Zijlstra, Will Deacon, Waiman Long,
	Boqun Feng

These calls should be made from xe_gt_init(), not from the various places
they are currently scattered across.

Signed-off-by: Maarten Lankhorst <dev@lankhorst.se>
---
 drivers/gpu/drm/xe/xe_gt.c | 31 ++++++++++++++-----------------
 1 file changed, 14 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
index 01a4a852b8f43..943bab94119fa 100644
--- a/drivers/gpu/drm/xe/xe_gt.c
+++ b/drivers/gpu/drm/xe/xe_gt.c
@@ -408,13 +408,11 @@ static void dump_pat_on_error(struct xe_gt *gt)
 static int gt_fw_domain_init(struct xe_gt *gt)
 {
 	unsigned int fw_ref;
-	int err, i;
+	int err;
 
 	fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
-	if (!fw_ref) {
-		err = -ETIMEDOUT;
-		goto err_hw_fence_irq;
-	}
+	if (!fw_ref)
+		return -ETIMEDOUT;
 
 	if (!xe_gt_is_media_type(gt)) {
 		err = xe_ggtt_init(gt_to_tile(gt)->mem.ggtt);
@@ -455,9 +453,6 @@ static int gt_fw_domain_init(struct xe_gt *gt)
 err_force_wake:
 	dump_pat_on_error(gt);
 	xe_force_wake_put(gt_to_fw(gt), fw_ref);
-err_hw_fence_irq:
-	for (i = 0; i < XE_ENGINE_CLASS_MAX; ++i)
-		xe_hw_fence_irq_finish(&gt->fence_irq[i]);
 
 	return err;
 }
@@ -465,7 +460,7 @@ static int gt_fw_domain_init(struct xe_gt *gt)
 static int all_fw_domain_init(struct xe_gt *gt)
 {
 	unsigned int fw_ref;
-	int err, i;
+	int err;
 
 	fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
 	if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
@@ -543,8 +538,6 @@ static int all_fw_domain_init(struct xe_gt *gt)
 
 err_force_wake:
 	xe_force_wake_put(gt_to_fw(gt), fw_ref);
-	for (i = 0; i < XE_ENGINE_CLASS_MAX; ++i)
-		xe_hw_fence_irq_finish(&gt->fence_irq[i]);
 
 	return err;
 }
@@ -596,35 +589,39 @@ int xe_gt_init(struct xe_gt *gt)
 
 	err = xe_gt_pagefault_init(gt);
 	if (err)
-		return err;
+		goto err;
 
 	xe_mocs_init_early(gt);
 
 	err = xe_gt_sysfs_init(gt);
 	if (err)
-		return err;
+		goto err;
 
 	err = gt_fw_domain_init(gt);
 	if (err)
-		return err;
+		goto err;
 
 	err = xe_gt_idle_init(&gt->gtidle);
 	if (err)
-		return err;
+		goto err;
 
 	err = xe_gt_freq_init(gt);
 	if (err)
-		return err;
+		goto err;
 
 	xe_force_wake_init_engines(gt, gt_to_fw(gt));
 
 	err = all_fw_domain_init(gt);
 	if (err)
-		return err;
+		goto err;
 
 	xe_gt_record_user_engines(gt);
 
 	return 0;
+err:
+	for (i = 0; i < XE_ENGINE_CLASS_MAX; ++i)
+		xe_hw_fence_irq_finish(&gt->fence_irq[i]);
+	return err;
 }
 
 /**
-- 
2.47.1



* [PATCH-resent-to-correct-ml 3/8] drm/xe: Add scoped guards for xe_force_wake
  2025-02-04 13:22 [PATCH-resent-to-correct-ml 0/8] drm/xe: Convert xe_force_wake calls to guard helpers Maarten Lankhorst
  2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 1/8] header/cleanup.h: Add _init_args to DEFINE_LOCK_GUARD_1(_COND) Maarten Lankhorst
  2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 2/8] drm/xe/gt: Unify xe_hw_fence_irq_finish() calls Maarten Lankhorst
@ 2025-02-04 13:22 ` Maarten Lankhorst
  2025-02-04 15:28   ` Lucas De Marchi
  2025-02-04 16:30   ` Michal Wajdeczko
  2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 4/8] drm/xe: Add xe_force_wake_get_all Maarten Lankhorst
                   ` (5 subsequent siblings)
  8 siblings, 2 replies; 17+ messages in thread
From: Maarten Lankhorst @ 2025-02-04 13:22 UTC (permalink / raw)
  To: intel-xe
  Cc: dri-devel, linux-kernel, Maarten Lankhorst, Ingo Molnar,
	David Lechner, Peter Zijlstra, Will Deacon, Waiman Long,
	Boqun Feng

Instead of chasing bugs where we may or may not release force_wake, I've
decided to take inspiration from the spinlock guards and use the same
mechanism for xe_force_wake handling.

Examples are added as documentation in xe_force_wake.c

Signed-off-by: Maarten Lankhorst <dev@lankhorst.se>
---
 drivers/gpu/drm/xe/xe_force_wake.c | 51 ++++++++++++++++++++++++++++++
 drivers/gpu/drm/xe/xe_force_wake.h | 15 +++++++++
 2 files changed, 66 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_force_wake.c b/drivers/gpu/drm/xe/xe_force_wake.c
index 4f6784e5abf88..805c19f6de9e7 100644
--- a/drivers/gpu/drm/xe/xe_force_wake.c
+++ b/drivers/gpu/drm/xe/xe_force_wake.c
@@ -16,6 +16,57 @@
 
 #define XE_FORCE_WAKE_ACK_TIMEOUT_MS	50
 
+/**
+ * DOC: Force wake handling
+ *
+ * Traditionally, the force wake handling has been done using the error prone
+ * set of calls:
+ *
+ * int func(struct xe_force_wake *fw)
+ * {
+ * 	unsigned int fw_ref = xe_force_wake_get(fw, XE_FORCEWAKE_ALL);
+ * 	if (!fw_ref)
+ * 		return -ETIMEDOUT;
+ *
+ * 	int err = do_something();
+ *
+ * 	xe_force_wake_put(fw, fw_ref);
+ * 	return err;
+ * }
+ *
+ * A new, failure-safe approach is by using the scoped helpers,
+ * which changes the function to this:
+ *
+ * int func(struct xe_force_wake *fw)
+ * {
+ * 	scoped_cond_guard(xe_force_wake_get, return -ETIMEDOUT, fw, XE_FORCEWAKE_ALL) {
+ * 		return do_something();
+ * 	}
+ * }
+ *
+ * For completeness, the following options also work:
+ * void func(struct xe_force_wake *fw)
+ * {
+ * 	scoped_guard(xe_force_wake_get, fw, XE_FORCEWAKE_ALL) {
+ * 		do_something_only_if_fw_acquired();
+ * 	}
+ * }
+ *
+ * You can use xe_force_wake instead of xe_force_wake_get if the code
+ * must always run, with errors during acquisition ignored:
+ * void func(struct xe_force_wake *fw)
+ * {
+ * 	scoped_guard(xe_force_wake, fw, XE_FORCEWAKE_ALL) {
+ * 		always_do_something_maybe_fw();
+ * 	}
+ *
+ * 	do_something_no_fw();
+ *
+ * 	guard(xe_force_wake)(fw, XE_FORCEWAKE_ALL);
+ * 	always_do_something_maybe_fw();
+ * }
+ */
+
 static const char *str_wake_sleep(bool wake)
 {
 	return wake ? "wake" : "sleep";
diff --git a/drivers/gpu/drm/xe/xe_force_wake.h b/drivers/gpu/drm/xe/xe_force_wake.h
index 0e3e84bfa51c3..0fb1baae0a3a3 100644
--- a/drivers/gpu/drm/xe/xe_force_wake.h
+++ b/drivers/gpu/drm/xe/xe_force_wake.h
@@ -9,6 +9,8 @@
 #include "xe_assert.h"
 #include "xe_force_wake_types.h"
 
+#include <linux/cleanup.h>
+
 struct xe_gt;
 
 void xe_force_wake_init_gt(struct xe_gt *gt,
@@ -61,4 +63,17 @@ xe_force_wake_ref_has_domain(unsigned int fw_ref, enum xe_force_wake_domains dom
 	return fw_ref & domain;
 }
 
+DEFINE_LOCK_GUARD_1(xe_force_wake, struct xe_force_wake,
+		    _T->fw_ref = xe_force_wake_get(_T->lock, domain),
+		    xe_force_wake_put(_T->lock, _T->fw_ref),
+		    unsigned int fw_ref, enum xe_force_wake_domains domain);
+
+DEFINE_LOCK_GUARD_1_COND(xe_force_wake, _get,
+			 _T->fw_ref = xe_force_wake_get_all(_T->lock, domain),
+			 enum xe_force_wake_domains domain);
+
+/* Only useful with guard(xe_force_wake); guard(xe_force_wake_get) gets all or nothing */
+#define xe_force_wake_scope_has_domain(domain) \
+	(xe_force_wake_ref_has_domain(scope.fw_ref, domain))
+
 #endif
-- 
2.47.1



* [PATCH-resent-to-correct-ml 4/8] drm/xe: Add xe_force_wake_get_all
  2025-02-04 13:22 [PATCH-resent-to-correct-ml 0/8] drm/xe: Convert xe_force_wake calls to guard helpers Maarten Lankhorst
                   ` (2 preceding siblings ...)
  2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 3/8] drm/xe: Add scoped guards for xe_force_wake Maarten Lankhorst
@ 2025-02-04 13:22 ` Maarten Lankhorst
  2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 5/8] drm/xe/coredump: Use guard helpers for xe_force_wake Maarten Lankhorst
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 17+ messages in thread
From: Maarten Lankhorst @ 2025-02-04 13:22 UTC (permalink / raw)
  To: intel-xe
  Cc: dri-devel, linux-kernel, Maarten Lankhorst, Ingo Molnar,
	David Lechner, Peter Zijlstra, Will Deacon, Waiman Long,
	Boqun Feng

For most use cases we want to grab all the requested forcewake domains, and
failing to grab any one of them should be treated the same as failing to
grab all of them.

This makes the next patch adding the cond guards a lot easier.

Signed-off-by: Maarten Lankhorst <dev@lankhorst.se>
---
 drivers/gpu/drm/xe/xe_force_wake.c | 110 ++++++++++++++++++++---------
 drivers/gpu/drm/xe/xe_force_wake.h |   2 +
 2 files changed, 77 insertions(+), 35 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_force_wake.c b/drivers/gpu/drm/xe/xe_force_wake.c
index 805c19f6de9e7..cc00d5de8f0ae 100644
--- a/drivers/gpu/drm/xe/xe_force_wake.c
+++ b/drivers/gpu/drm/xe/xe_force_wake.c
@@ -211,27 +211,36 @@ static int domain_sleep_wait(struct xe_gt *gt,
 					 (ffs(tmp__) - 1))) && \
 					 domain__->reg_ctl.addr)
 
-/**
- * xe_force_wake_get() : Increase the domain refcount
- * @fw: struct xe_force_wake
- * @domains: forcewake domains to get refcount on
- *
- * This function wakes up @domains if they are asleep and takes references.
- * If requested domain is XE_FORCEWAKE_ALL then only applicable/initialized
- * domains will be considered for refcount and it is a caller responsibility
- * to check returned ref if it includes any specific domain by using
- * xe_force_wake_ref_has_domain() function. Caller must call
- * xe_force_wake_put() function to decrease incremented refcounts.
- *
- * Return: opaque reference to woken domains or zero if none of requested
- * domains were awake.
- */
-unsigned int __must_check xe_force_wake_get(struct xe_force_wake *fw,
-					    enum xe_force_wake_domains domains)
+static void __xe_force_wake_put_inner(struct xe_force_wake *fw,
+				      unsigned int fw_ref, unsigned int *ack_fail)
+{
+	struct xe_gt *gt = fw->gt;
+	struct xe_force_wake_domain *domain;
+	unsigned int tmp, sleep = 0;
+
+	for_each_fw_domain_masked(domain, fw_ref, fw, tmp) {
+		xe_gt_assert(gt, domain->ref);
+
+		if (!--domain->ref) {
+			sleep |= BIT(domain->id);
+			domain_sleep(gt, domain);
+		}
+	}
+	for_each_fw_domain_masked(domain, sleep, fw, tmp) {
+		if (domain_sleep_wait(gt, domain) == 0)
+			fw->awake_domains &= ~BIT(domain->id);
+		else
+			*ack_fail |= BIT(domain->id);
+	}
+}
+
+static int __must_check __xe_force_wake_get(struct xe_force_wake *fw,
+					    enum xe_force_wake_domains domains,
+					    bool all_or_nothing)
 {
 	struct xe_gt *gt = fw->gt;
 	struct xe_force_wake_domain *domain;
-	unsigned int ref_incr = 0, awake_rqst = 0, awake_failed = 0;
+	unsigned int ref_incr = 0, awake_rqst = 0, awake_failed = 0, sleep_failed = 0;
 	unsigned int tmp, ref_rqst;
 	unsigned long flags;
 
@@ -257,6 +266,12 @@ unsigned int __must_check xe_force_wake_get(struct xe_force_wake *fw,
 		}
 	}
 	ref_incr &= ~awake_failed;
+
+	if (all_or_nothing && awake_failed && ref_incr) {
+		__xe_force_wake_put_inner(fw, ref_incr, &sleep_failed);
+		ref_incr = 0;
+	}
+
 	spin_unlock_irqrestore(&fw->lock, flags);
 
 	xe_gt_WARN(gt, awake_failed, "Forcewake domain%s %#x failed to acknowledge awake request\n",
@@ -268,6 +283,46 @@ unsigned int __must_check xe_force_wake_get(struct xe_force_wake *fw,
 	return ref_incr;
 }
 
+/**
+ * xe_force_wake_get() : Increase the domain refcount
+ * @fw: struct xe_force_wake
+ * @domains: forcewake domains to get refcount on
+ *
+ * This function wakes up @domains if they are asleep and takes references.
+ * If requested domain is XE_FORCEWAKE_ALL then only applicable/initialized
+ * domains will be considered for refcount and it is a caller responsibility
+ * to check returned ref if it includes any specific domain by using
+ * xe_force_wake_ref_has_domain() function. Caller must call
+ * xe_force_wake_put() function to decrease incremented refcounts.
+ *
+ * Return: opaque reference to woken domains or zero if none of requested
+ * domains were awake.
+ */
+unsigned int __must_check xe_force_wake_get(struct xe_force_wake *fw,
+					    enum xe_force_wake_domains domains)
+{
+	return __xe_force_wake_get(fw, domains, false);
+}
+
+/**
+ * xe_force_wake_get_all() : Increase the domain refcount
+ * @fw: struct xe_force_wake
+ * @domains: forcewake domains to get refcount on
+ *
+ * This function wakes up @domains if they are asleep and takes references.
+ * Unlike xe_force_wake_get(), this function fails if any of the domains
+ * could not be woken up. It's all or nothing. This makes it always safe
+ * to check for 0 only.
+ *
+ * Return: opaque reference to woken domains or zero if not all of the requested
+ * domains could be woken up.
+ */
+unsigned int __must_check xe_force_wake_get_all(struct xe_force_wake *fw,
+						enum xe_force_wake_domains domains)
+{
+	return __xe_force_wake_get(fw, domains, true);
+}
+
 /**
  * xe_force_wake_put - Decrement the refcount and put domain to sleep if refcount becomes 0
  * @fw: Pointer to the force wake structure
@@ -281,10 +336,8 @@ unsigned int __must_check xe_force_wake_get(struct xe_force_wake *fw,
 void xe_force_wake_put(struct xe_force_wake *fw, unsigned int fw_ref)
 {
 	struct xe_gt *gt = fw->gt;
-	struct xe_force_wake_domain *domain;
-	unsigned int tmp, sleep = 0;
 	unsigned long flags;
-	int ack_fail = 0;
+	unsigned int ack_fail = 0;
 
 	/*
 	 * Avoid unnecessary lock and unlock when the function is called
@@ -297,20 +350,7 @@ void xe_force_wake_put(struct xe_force_wake *fw, unsigned int fw_ref)
 		fw_ref = fw->initialized_domains;
 
 	spin_lock_irqsave(&fw->lock, flags);
-	for_each_fw_domain_masked(domain, fw_ref, fw, tmp) {
-		xe_gt_assert(gt, domain->ref);
-
-		if (!--domain->ref) {
-			sleep |= BIT(domain->id);
-			domain_sleep(gt, domain);
-		}
-	}
-	for_each_fw_domain_masked(domain, sleep, fw, tmp) {
-		if (domain_sleep_wait(gt, domain) == 0)
-			fw->awake_domains &= ~BIT(domain->id);
-		else
-			ack_fail |= BIT(domain->id);
-	}
+	__xe_force_wake_put_inner(fw, fw_ref, &ack_fail);
 	spin_unlock_irqrestore(&fw->lock, flags);
 
 	xe_gt_WARN(gt, ack_fail, "Forcewake domain%s %#x failed to acknowledge sleep request\n",
diff --git a/drivers/gpu/drm/xe/xe_force_wake.h b/drivers/gpu/drm/xe/xe_force_wake.h
index 0fb1baae0a3a3..7102547260f67 100644
--- a/drivers/gpu/drm/xe/xe_force_wake.h
+++ b/drivers/gpu/drm/xe/xe_force_wake.h
@@ -19,6 +19,8 @@ void xe_force_wake_init_engines(struct xe_gt *gt,
 				struct xe_force_wake *fw);
 unsigned int __must_check xe_force_wake_get(struct xe_force_wake *fw,
 					    enum xe_force_wake_domains domains);
+unsigned int __must_check xe_force_wake_get_all(struct xe_force_wake *fw,
+						enum xe_force_wake_domains domains);
 void xe_force_wake_put(struct xe_force_wake *fw, unsigned int fw_ref);
 
 static inline int
-- 
2.47.1



* [PATCH-resent-to-correct-ml 5/8] drm/xe/coredump: Use guard helpers for xe_force_wake.
  2025-02-04 13:22 [PATCH-resent-to-correct-ml 0/8] drm/xe: Convert xe_force_wake calls to guard helpers Maarten Lankhorst
                   ` (3 preceding siblings ...)
  2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 4/8] drm/xe: Add xe_force_wake_get_all Maarten Lankhorst
@ 2025-02-04 13:22 ` Maarten Lankhorst
  2025-02-04 15:40   ` Lucas De Marchi
  2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 6/8] drm/xe/gsc: Use guard helper for xe_gsc_print_info Maarten Lankhorst
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 17+ messages in thread
From: Maarten Lankhorst @ 2025-02-04 13:22 UTC (permalink / raw)
  To: intel-xe
  Cc: dri-devel, linux-kernel, Maarten Lankhorst, Ingo Molnar,
	David Lechner, Peter Zijlstra, Will Deacon, Waiman Long,
	Boqun Feng

---
 drivers/gpu/drm/xe/xe_devcoredump.c | 36 ++++++++++++++---------------
 1 file changed, 17 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_devcoredump.c b/drivers/gpu/drm/xe/xe_devcoredump.c
index 39fe485d20858..afe229fba8a9c 100644
--- a/drivers/gpu/drm/xe/xe_devcoredump.c
+++ b/drivers/gpu/drm/xe/xe_devcoredump.c
@@ -233,7 +233,6 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
 	struct xe_devcoredump_snapshot *ss = container_of(work, typeof(*ss), work);
 	struct xe_devcoredump *coredump = container_of(ss, typeof(*coredump), snapshot);
 	struct xe_device *xe = coredump_to_xe(coredump);
-	unsigned int fw_ref;
 
 	/*
 	 * NB: Despite passing a GFP_ flags parameter here, more allocations are done
@@ -247,12 +246,13 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
 	xe_pm_runtime_get(xe);
 
 	/* keep going if fw fails as we still want to save the memory and SW data */
-	fw_ref = xe_force_wake_get(gt_to_fw(ss->gt), XE_FORCEWAKE_ALL);
-	if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
-		xe_gt_info(ss->gt, "failed to get forcewake for coredump capture\n");
-	xe_vm_snapshot_capture_delayed(ss->vm);
-	xe_guc_exec_queue_snapshot_capture_delayed(ss->ge);
-	xe_force_wake_put(gt_to_fw(ss->gt), fw_ref);
+	scoped_guard(xe_force_wake, gt_to_fw(ss->gt), XE_FORCEWAKE_ALL) {
+		if (!xe_force_wake_scope_has_domain(XE_FORCEWAKE_ALL))
+			xe_gt_info(ss->gt, "failed to get forcewake for coredump capture\n");
+
+		xe_vm_snapshot_capture_delayed(ss->vm);
+		xe_guc_exec_queue_snapshot_capture_delayed(ss->ge);
+	}
 
 	xe_pm_runtime_put(xe);
 
@@ -277,7 +277,6 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
 	u32 width_mask = (0x1 << q->width) - 1;
 	const char *process_name = "no process";
 
-	unsigned int fw_ref;
 	bool cookie;
 	int i;
 
@@ -305,20 +304,19 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
 	}
 
 	/* keep going if fw fails as we still want to save the memory and SW data */
-	fw_ref = xe_force_wake_get(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
-
-	ss->guc.log = xe_guc_log_snapshot_capture(&guc->log, true);
-	ss->guc.ct = xe_guc_ct_snapshot_capture(&guc->ct);
-	ss->ge = xe_guc_exec_queue_snapshot_capture(q);
-	if (job)
-		ss->job = xe_sched_job_snapshot_capture(job);
-	ss->vm = xe_vm_snapshot_capture(q->vm);
-
-	xe_engine_snapshot_capture_for_queue(q);
+	scoped_guard(xe_force_wake, gt_to_fw(ss->gt), XE_FORCEWAKE_ALL) {
+		ss->guc.log = xe_guc_log_snapshot_capture(&guc->log, true);
+		ss->guc.ct = xe_guc_ct_snapshot_capture(&guc->ct);
+		ss->ge = xe_guc_exec_queue_snapshot_capture(q);
+		if (job)
+			ss->job = xe_sched_job_snapshot_capture(job);
+		ss->vm = xe_vm_snapshot_capture(q->vm);
+
+		xe_engine_snapshot_capture_for_queue(q);
+	}
 
 	queue_work(system_unbound_wq, &ss->work);
 
-	xe_force_wake_put(gt_to_fw(q->gt), fw_ref);
 	dma_fence_end_signalling(cookie);
 }
 
-- 
2.47.1



* [PATCH-resent-to-correct-ml 6/8] drm/xe/gsc: Use guard helper for xe_gsc_print_info.
  2025-02-04 13:22 [PATCH-resent-to-correct-ml 0/8] drm/xe: Convert xe_force_wake calls to guard helpers Maarten Lankhorst
                   ` (4 preceding siblings ...)
  2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 5/8] drm/xe/coredump: Use guard helpers for xe_force_wake Maarten Lankhorst
@ 2025-02-04 13:22 ` Maarten Lankhorst
  2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 7/8] drm/xe/vram: Use xe_force_wake guard helper Maarten Lankhorst
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 17+ messages in thread
From: Maarten Lankhorst @ 2025-02-04 13:22 UTC (permalink / raw)
  To: intel-xe
  Cc: dri-devel, linux-kernel, Maarten Lankhorst, Ingo Molnar,
	David Lechner, Peter Zijlstra, Will Deacon, Waiman Long,
	Boqun Feng

As an example of how it works.

Signed-off-by: Maarten Lankhorst <dev@lankhorst.se>
---
 drivers/gpu/drm/xe/xe_gsc.c | 22 ++++++++--------------
 1 file changed, 8 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_gsc.c b/drivers/gpu/drm/xe/xe_gsc.c
index 1eb791ddc375c..aee9f58b1c3c6 100644
--- a/drivers/gpu/drm/xe/xe_gsc.c
+++ b/drivers/gpu/drm/xe/xe_gsc.c
@@ -600,7 +600,6 @@ void xe_gsc_print_info(struct xe_gsc *gsc, struct drm_printer *p)
 {
 	struct xe_gt *gt = gsc_to_gt(gsc);
 	struct xe_mmio *mmio = &gt->mmio;
-	unsigned int fw_ref;
 
 	xe_uc_fw_print(&gsc->fw, p);
 
@@ -609,17 +608,12 @@ void xe_gsc_print_info(struct xe_gsc *gsc, struct drm_printer *p)
 	if (!xe_uc_fw_is_enabled(&gsc->fw))
 		return;
 
-	fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GSC);
-	if (!fw_ref)
-		return;
-
-	drm_printf(p, "\nHECI1 FWSTS: 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x\n",
-			xe_mmio_read32(mmio, HECI_FWSTS1(MTL_GSC_HECI1_BASE)),
-			xe_mmio_read32(mmio, HECI_FWSTS2(MTL_GSC_HECI1_BASE)),
-			xe_mmio_read32(mmio, HECI_FWSTS3(MTL_GSC_HECI1_BASE)),
-			xe_mmio_read32(mmio, HECI_FWSTS4(MTL_GSC_HECI1_BASE)),
-			xe_mmio_read32(mmio, HECI_FWSTS5(MTL_GSC_HECI1_BASE)),
-			xe_mmio_read32(mmio, HECI_FWSTS6(MTL_GSC_HECI1_BASE)));
-
-	xe_force_wake_put(gt_to_fw(gt), fw_ref);
+	scoped_guard(xe_force_wake_get, gt_to_fw(gt), XE_FW_GSC)
+		drm_printf(p, "\nHECI1 FWSTS: 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x\n",
+				xe_mmio_read32(mmio, HECI_FWSTS1(MTL_GSC_HECI1_BASE)),
+				xe_mmio_read32(mmio, HECI_FWSTS2(MTL_GSC_HECI1_BASE)),
+				xe_mmio_read32(mmio, HECI_FWSTS3(MTL_GSC_HECI1_BASE)),
+				xe_mmio_read32(mmio, HECI_FWSTS4(MTL_GSC_HECI1_BASE)),
+				xe_mmio_read32(mmio, HECI_FWSTS5(MTL_GSC_HECI1_BASE)),
+				xe_mmio_read32(mmio, HECI_FWSTS6(MTL_GSC_HECI1_BASE)));
 }
-- 
2.47.1



* [PATCH-resent-to-correct-ml 7/8] drm/xe/vram: Use xe_force_wake guard helper
  2025-02-04 13:22 [PATCH-resent-to-correct-ml 0/8] drm/xe: Convert xe_force_wake calls to guard helpers Maarten Lankhorst
                   ` (5 preceding siblings ...)
  2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 6/8] drm/xe/gsc: Use guard helper for xe_gsc_print_info Maarten Lankhorst
@ 2025-02-04 13:22 ` Maarten Lankhorst
  2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 8/8] drm/xe/gt: Convert to xe_force_wake guard helpers Maarten Lankhorst
  2025-02-04 17:40 ` [PATCH-resent-to-correct-ml 0/8] drm/xe: Convert xe_force_wake calls to " David Lechner
  8 siblings, 0 replies; 17+ messages in thread
From: Maarten Lankhorst @ 2025-02-04 13:22 UTC (permalink / raw)
  To: intel-xe
  Cc: dri-devel, linux-kernel, Maarten Lankhorst, Ingo Molnar,
	David Lechner, Peter Zijlstra, Will Deacon, Waiman Long,
	Boqun Feng

Signed-off-by: Maarten Lankhorst <dev@lankhorst.se>
---
 drivers/gpu/drm/xe/xe_vram.c | 45 ++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_vram.c b/drivers/gpu/drm/xe/xe_vram.c
index b1f81dca610dc..9180bb4d29971 100644
--- a/drivers/gpu/drm/xe/xe_vram.c
+++ b/drivers/gpu/drm/xe/xe_vram.c
@@ -220,7 +220,6 @@ static int tile_vram_size(struct xe_tile *tile, u64 *vram_size,
 {
 	struct xe_device *xe = tile_to_xe(tile);
 	struct xe_gt *gt = tile->primary_gt;
-	unsigned int fw_ref;
 	u64 offset;
 	u32 reg;
 
@@ -240,33 +239,29 @@ static int tile_vram_size(struct xe_tile *tile, u64 *vram_size,
 		return 0;
 	}
 
-	fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
-	if (!fw_ref)
-		return -ETIMEDOUT;
-
-	/* actual size */
-	if (unlikely(xe->info.platform == XE_DG1)) {
-		*tile_size = pci_resource_len(to_pci_dev(xe->drm.dev), LMEM_BAR);
-		*tile_offset = 0;
-	} else {
-		reg = xe_gt_mcr_unicast_read_any(gt, XEHP_TILE_ADDR_RANGE(gt->info.id));
-		*tile_size = (u64)REG_FIELD_GET(GENMASK(14, 8), reg) * SZ_1G;
-		*tile_offset = (u64)REG_FIELD_GET(GENMASK(7, 1), reg) * SZ_1G;
-	}
-
-	/* minus device usage */
-	if (xe->info.has_flat_ccs) {
-		offset = get_flat_ccs_offset(gt, *tile_size);
-	} else {
-		offset = xe_mmio_read64_2x32(&tile->mmio, GSMBASE);
-	}
+	scoped_cond_guard(xe_force_wake_get, return -ETIMEDOUT, gt_to_fw(gt), XE_FW_GT) {
+		/* actual size */
+		if (unlikely(xe->info.platform == XE_DG1)) {
+			*tile_size = pci_resource_len(to_pci_dev(xe->drm.dev), LMEM_BAR);
+			*tile_offset = 0;
+		} else {
+			reg = xe_gt_mcr_unicast_read_any(gt, XEHP_TILE_ADDR_RANGE(gt->info.id));
+			*tile_size = (u64)REG_FIELD_GET(GENMASK(14, 8), reg) * SZ_1G;
+			*tile_offset = (u64)REG_FIELD_GET(GENMASK(7, 1), reg) * SZ_1G;
+		}
 
-	/* remove the tile offset so we have just the available size */
-	*vram_size = offset - *tile_offset;
+		/* minus device usage */
+		if (xe->info.has_flat_ccs) {
+			offset = get_flat_ccs_offset(gt, *tile_size);
+		} else {
+			offset = xe_mmio_read64_2x32(&tile->mmio, GSMBASE);
+		}
 
-	xe_force_wake_put(gt_to_fw(gt), fw_ref);
+		/* remove the tile offset so we have just the available size */
+		*vram_size = offset - *tile_offset;
 
-	return 0;
+		return 0;
+	}
 }
 
 static void vram_fini(void *arg)
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH-resent-to-correct-ml 8/8] drm/xe/gt: Convert to xe_force_wake guard helpers
  2025-02-04 13:22 [PATCH-resent-to-correct-ml 0/8] drm/xe: Convert xe_force_wake calls to guard helpers Maarten Lankhorst
                   ` (6 preceding siblings ...)
  2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 7/8] drm/xe/vram: Use xe_force_wake guard helper Maarten Lankhorst
@ 2025-02-04 13:22 ` Maarten Lankhorst
  2025-02-04 17:40 ` [PATCH-resent-to-correct-ml 0/8] drm/xe: Convert xe_force_wake calls to " David Lechner
  8 siblings, 0 replies; 17+ messages in thread
From: Maarten Lankhorst @ 2025-02-04 13:22 UTC (permalink / raw)
  To: intel-xe
  Cc: dri-devel, linux-kernel, Maarten Lankhorst, Ingo Molnar,
	David Lechner, Peter Zijlstra, Will Deacon, Waiman Long,
	Boqun Feng

Signed-off-by: Maarten Lankhorst <dev@lankhorst.se>
---
 drivers/gpu/drm/xe/xe_gt.c | 216 +++++++++++++------------------------
 1 file changed, 74 insertions(+), 142 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
index 943bab94119fa..c71041087a735 100644
--- a/drivers/gpu/drm/xe/xe_gt.c
+++ b/drivers/gpu/drm/xe/xe_gt.c
@@ -98,29 +98,24 @@ void xe_gt_sanitize(struct xe_gt *gt)
 
 static void xe_gt_enable_host_l2_vram(struct xe_gt *gt)
 {
-	unsigned int fw_ref;
 	u32 reg;
 
 	if (!XE_WA(gt, 16023588340))
 		return;
 
-	fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
-	if (!fw_ref)
-		return;
+	scoped_guard(xe_force_wake_get, gt_to_fw(gt), XE_FW_GT) {
+		if (!xe_gt_is_media_type(gt)) {
+			reg = xe_gt_mcr_unicast_read_any(gt, XE2_GAMREQSTRM_CTRL);
+			reg |= CG_DIS_CNTLBUS;
+			xe_gt_mcr_multicast_write(gt, XE2_GAMREQSTRM_CTRL, reg);
+		}
 
-	if (!xe_gt_is_media_type(gt)) {
-		reg = xe_gt_mcr_unicast_read_any(gt, XE2_GAMREQSTRM_CTRL);
-		reg |= CG_DIS_CNTLBUS;
-		xe_gt_mcr_multicast_write(gt, XE2_GAMREQSTRM_CTRL, reg);
+		xe_gt_mcr_multicast_write(gt, XEHPC_L3CLOS_MASK(3), 0x3);
 	}
-
-	xe_gt_mcr_multicast_write(gt, XEHPC_L3CLOS_MASK(3), 0x3);
-	xe_force_wake_put(gt_to_fw(gt), fw_ref);
 }
 
 static void xe_gt_disable_host_l2_vram(struct xe_gt *gt)
 {
-	unsigned int fw_ref;
 	u32 reg;
 
 	if (!XE_WA(gt, 16023588340))
@@ -129,15 +124,11 @@ static void xe_gt_disable_host_l2_vram(struct xe_gt *gt)
 	if (xe_gt_is_media_type(gt))
 		return;
 
-	fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
-	if (!fw_ref)
-		return;
-
-	reg = xe_gt_mcr_unicast_read_any(gt, XE2_GAMREQSTRM_CTRL);
-	reg &= ~CG_DIS_CNTLBUS;
-	xe_gt_mcr_multicast_write(gt, XE2_GAMREQSTRM_CTRL, reg);
-
-	xe_force_wake_put(gt_to_fw(gt), fw_ref);
+	scoped_guard(xe_force_wake_get, gt_to_fw(gt), XE_FW_GT) {
+		reg = xe_gt_mcr_unicast_read_any(gt, XE2_GAMREQSTRM_CTRL);
+		reg &= ~CG_DIS_CNTLBUS;
+		xe_gt_mcr_multicast_write(gt, XE2_GAMREQSTRM_CTRL, reg);
+	}
 }
 
 /**
@@ -407,17 +398,12 @@ static void dump_pat_on_error(struct xe_gt *gt)
 
 static int gt_fw_domain_init(struct xe_gt *gt)
 {
-	unsigned int fw_ref;
 	int err;
 
-	fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
-	if (!fw_ref)
-		return -ETIMEDOUT;
-
 	if (!xe_gt_is_media_type(gt)) {
 		err = xe_ggtt_init(gt_to_tile(gt)->mem.ggtt);
 		if (err)
-			goto err_force_wake;
+			goto err_pat;
 		if (IS_SRIOV_PF(gt_to_xe(gt)))
 			xe_lmtt_init(&gt_to_tile(gt)->sriov.pf.lmtt);
 	}
@@ -430,63 +416,53 @@ static int gt_fw_domain_init(struct xe_gt *gt)
 
 	err = xe_hw_engines_init_early(gt);
 	if (err)
-		goto err_force_wake;
+		goto err_pat;
 
 	err = xe_hw_engine_class_sysfs_init(gt);
 	if (err)
-		goto err_force_wake;
+		goto err_pat;
 
 	/* Initialize CCS mode sysfs after early initialization of HW engines */
 	err = xe_gt_ccs_mode_sysfs_init(gt);
 	if (err)
-		goto err_force_wake;
+		goto err_pat;
 
 	/*
 	 * Stash hardware-reported version.  Since this register does not exist
 	 * on pre-MTL platforms, reading it there will (correctly) return 0.
 	 */
 	gt->info.gmdid = xe_mmio_read32(&gt->mmio, GMD_ID);
-
-	xe_force_wake_put(gt_to_fw(gt), fw_ref);
 	return 0;
 
-err_force_wake:
+err_pat:
 	dump_pat_on_error(gt);
-	xe_force_wake_put(gt_to_fw(gt), fw_ref);
 
 	return err;
 }
 
 static int all_fw_domain_init(struct xe_gt *gt)
 {
-	unsigned int fw_ref;
 	int err;
 
-	fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
-	if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
-		err = -ETIMEDOUT;
-		goto err_force_wake;
-	}
-
 	xe_gt_mcr_set_implicit_defaults(gt);
 	xe_reg_sr_apply_mmio(&gt->reg_sr, gt);
 
 	err = xe_gt_clock_init(gt);
 	if (err)
-		goto err_force_wake;
+		return err;
 
 	xe_mocs_init(gt);
 	err = xe_execlist_init(gt);
 	if (err)
-		goto err_force_wake;
+		return err;
 
 	err = xe_hw_engines_init(gt);
 	if (err)
-		goto err_force_wake;
+		return err;
 
 	err = xe_uc_init_post_hwconfig(&gt->uc);
 	if (err)
-		goto err_force_wake;
+		return err;
 
 	if (!xe_gt_is_media_type(gt)) {
 		/*
@@ -497,10 +473,8 @@ static int all_fw_domain_init(struct xe_gt *gt)
 
 			gt->usm.bb_pool = xe_sa_bo_manager_init(gt_to_tile(gt),
 								IS_DGFX(xe) ? SZ_1M : SZ_512K, 16);
-			if (IS_ERR(gt->usm.bb_pool)) {
-				err = PTR_ERR(gt->usm.bb_pool);
-				goto err_force_wake;
-			}
+			if (IS_ERR(gt->usm.bb_pool))
+				return PTR_ERR(gt->usm.bb_pool);
 		}
 	}
 
@@ -508,15 +482,13 @@ static int all_fw_domain_init(struct xe_gt *gt)
 		struct xe_tile *tile = gt_to_tile(gt);
 
 		tile->migrate = xe_migrate_init(tile);
-		if (IS_ERR(tile->migrate)) {
-			err = PTR_ERR(tile->migrate);
-			goto err_force_wake;
-		}
+		if (IS_ERR(tile->migrate))
+			return PTR_ERR(tile->migrate);
 	}
 
 	err = xe_uc_init_hw(&gt->uc);
 	if (err)
-		goto err_force_wake;
+		return err;
 
 	/* Configure default CCS mode of 1 engine with all resources */
 	if (xe_gt_ccs_mode_enabled(gt)) {
@@ -532,14 +504,7 @@ static int all_fw_domain_init(struct xe_gt *gt)
 		xe_gt_sriov_pf_init_hw(gt);
 	}
 
-	xe_force_wake_put(gt_to_fw(gt), fw_ref);
-
 	return 0;
-
-err_force_wake:
-	xe_force_wake_put(gt_to_fw(gt), fw_ref);
-
-	return err;
 }
 
 /*
@@ -548,31 +513,25 @@ static int all_fw_domain_init(struct xe_gt *gt)
  */
 int xe_gt_init_hwconfig(struct xe_gt *gt)
 {
-	unsigned int fw_ref;
 	int err;
 
-	fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
-	if (!fw_ref)
-		return -ETIMEDOUT;
-
-	xe_gt_mcr_init_early(gt);
-	xe_pat_init(gt);
-
-	err = xe_uc_init(&gt->uc);
-	if (err)
-		goto out_fw;
+	scoped_cond_guard(xe_force_wake_get, return -ETIMEDOUT, gt_to_fw(gt), XE_FW_GT) {
+		xe_gt_mcr_init_early(gt);
+		xe_pat_init(gt);
 
-	err = xe_uc_init_hwconfig(&gt->uc);
-	if (err)
-		goto out_fw;
+		err = xe_uc_init(&gt->uc);
+		if (err)
+			return err;
 
-	xe_gt_topology_init(gt);
-	xe_gt_mcr_init(gt);
-	xe_gt_enable_host_l2_vram(gt);
+		err = xe_uc_init_hwconfig(&gt->uc);
+		if (err)
+			return err;
 
-out_fw:
-	xe_force_wake_put(gt_to_fw(gt), fw_ref);
-	return err;
+		xe_gt_topology_init(gt);
+		xe_gt_mcr_init(gt);
+		xe_gt_enable_host_l2_vram(gt);
+	}
+	return 0;
 }
 
 int xe_gt_init(struct xe_gt *gt)
@@ -597,7 +556,8 @@ int xe_gt_init(struct xe_gt *gt)
 	if (err)
 		goto err;
 
-	err = gt_fw_domain_init(gt);
+	scoped_cond_guard(xe_force_wake_get, err = -ETIMEDOUT, gt_to_fw(gt), XE_FW_GT)
+		err = gt_fw_domain_init(gt);
 	if (err)
 		goto err;
 
@@ -611,7 +571,8 @@ int xe_gt_init(struct xe_gt *gt)
 
 	xe_force_wake_init_engines(gt, gt_to_fw(gt));
 
-	err = all_fw_domain_init(gt);
+	scoped_cond_guard(xe_force_wake_get, err = -ETIMEDOUT, gt_to_fw(gt), XE_FORCEWAKE_ALL)
+		err = all_fw_domain_init(gt);
 	if (err)
 		goto err;
 
@@ -767,7 +728,6 @@ static int do_gt_restart(struct xe_gt *gt)
 
 static int gt_reset(struct xe_gt *gt)
 {
-	unsigned int fw_ref;
 	int err;
 
 	if (xe_device_wedged(gt_to_xe(gt)))
@@ -788,29 +748,24 @@ static int gt_reset(struct xe_gt *gt)
 
 	xe_gt_sanitize(gt);
 
-	fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
-	if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
-		err = -ETIMEDOUT;
-		goto err_out;
-	}
-
-	xe_uc_gucrc_disable(&gt->uc);
-	xe_uc_stop_prepare(&gt->uc);
-	xe_gt_pagefault_reset(gt);
+	scoped_cond_guard(xe_force_wake_get, err = -ETIMEDOUT; goto err_out, gt_to_fw(gt), XE_FORCEWAKE_ALL) {
+		xe_uc_gucrc_disable(&gt->uc);
+		xe_uc_stop_prepare(&gt->uc);
+		xe_gt_pagefault_reset(gt);
 
-	xe_uc_stop(&gt->uc);
+		xe_uc_stop(&gt->uc);
 
-	xe_gt_tlb_invalidation_reset(gt);
+		xe_gt_tlb_invalidation_reset(gt);
 
-	err = do_gt_reset(gt);
-	if (err)
-		goto err_out;
+		err = do_gt_reset(gt);
+		if (err)
+			goto err_out;
 
-	err = do_gt_restart(gt);
-	if (err)
-		goto err_out;
+		err = do_gt_restart(gt);
+		if (err)
+			goto err_out;
+	}
 
-	xe_force_wake_put(gt_to_fw(gt), fw_ref);
 	xe_pm_runtime_put(gt_to_xe(gt));
 
 	xe_gt_info(gt, "reset done\n");
@@ -818,7 +773,6 @@ static int gt_reset(struct xe_gt *gt)
 	return 0;
 
 err_out:
-	xe_force_wake_put(gt_to_fw(gt), fw_ref);
 	XE_WARN_ON(xe_uc_start(&gt->uc));
 err_fail:
 	xe_gt_err(gt, "reset failed (%pe)\n", ERR_PTR(err));
@@ -850,44 +804,32 @@ void xe_gt_reset_async(struct xe_gt *gt)
 
 void xe_gt_suspend_prepare(struct xe_gt *gt)
 {
-	unsigned int fw_ref;
-
-	fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+	guard(xe_force_wake)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
 
 	xe_uc_stop_prepare(&gt->uc);
-
-	xe_force_wake_put(gt_to_fw(gt), fw_ref);
 }
 
 int xe_gt_suspend(struct xe_gt *gt)
 {
-	unsigned int fw_ref;
-	int err;
+	int err = -ETIMEDOUT;
 
 	xe_gt_dbg(gt, "suspending\n");
 	xe_gt_sanitize(gt);
 
-	fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
-	if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
-		goto err_msg;
-
-	err = xe_uc_suspend(&gt->uc);
-	if (err)
-		goto err_force_wake;
-
-	xe_gt_idle_disable_pg(gt);
+	scoped_cond_guard(xe_force_wake_get, goto err_msg, gt_to_fw(gt), XE_FORCEWAKE_ALL) {
+		err = xe_uc_suspend(&gt->uc);
+		if (err)
+			goto err_msg;
 
-	xe_gt_disable_host_l2_vram(gt);
+		xe_gt_idle_disable_pg(gt);
 
-	xe_force_wake_put(gt_to_fw(gt), fw_ref);
+		xe_gt_disable_host_l2_vram(gt);
+	}
 	xe_gt_dbg(gt, "suspended\n");
 
 	return 0;
 
 err_msg:
-	err = -ETIMEDOUT;
-err_force_wake:
-	xe_force_wake_put(gt_to_fw(gt), fw_ref);
 	xe_gt_err(gt, "suspend failed (%pe)\n", ERR_PTR(err));
 
 	return err;
@@ -895,11 +837,8 @@ int xe_gt_suspend(struct xe_gt *gt)
 
 void xe_gt_shutdown(struct xe_gt *gt)
 {
-	unsigned int fw_ref;
-
-	fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+	guard(xe_force_wake)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
 	do_gt_reset(gt);
-	xe_force_wake_put(gt_to_fw(gt), fw_ref);
 }
 
 /**
@@ -924,29 +863,22 @@ int xe_gt_sanitize_freq(struct xe_gt *gt)
 
 int xe_gt_resume(struct xe_gt *gt)
 {
-	unsigned int fw_ref;
-	int err;
+	int err = -ETIMEDOUT;
 
 	xe_gt_dbg(gt, "resuming\n");
-	fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
-	if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
-		goto err_msg;
-
-	err = do_gt_restart(gt);
-	if (err)
-		goto err_force_wake;
+	scoped_cond_guard(xe_force_wake_get, goto err_msg, gt_to_fw(gt), XE_FORCEWAKE_ALL) {
+		err = do_gt_restart(gt);
+		if (err)
+			goto err_msg;
 
-	xe_gt_idle_enable_pg(gt);
+		xe_gt_idle_enable_pg(gt);
+	}
 
-	xe_force_wake_put(gt_to_fw(gt), fw_ref);
 	xe_gt_dbg(gt, "resumed\n");
 
 	return 0;
 
 err_msg:
-	err = -ETIMEDOUT;
-err_force_wake:
-	xe_force_wake_put(gt_to_fw(gt), fw_ref);
 	xe_gt_err(gt, "resume failed (%pe)\n", ERR_PTR(err));
 
 	return err;
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH-resent-to-correct-ml 2/8] drm/xe/gt: Unify xe_hw_fence_irq_finish() calls.
  2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 2/8] drm/xe/gt: Unify xe_hw_fence_irq_finish() calls Maarten Lankhorst
@ 2025-02-04 15:20   ` Lucas De Marchi
  0 siblings, 0 replies; 17+ messages in thread
From: Lucas De Marchi @ 2025-02-04 15:20 UTC (permalink / raw)
  To: Maarten Lankhorst
  Cc: intel-xe, dri-devel, linux-kernel, Ingo Molnar, David Lechner,
	Peter Zijlstra, Will Deacon, Waiman Long, Boqun Feng

On Tue, Feb 04, 2025 at 02:22:31PM +0100, Maarten Lankhorst wrote:
>Those calls should be made from xe_gt_init, not the diverse set of
>places they are currently called from.
>
>Signed-off-by: Maarten Lankhorst <dev@lankhorst.se>
>---
> drivers/gpu/drm/xe/xe_gt.c | 31 ++++++++++++++-----------------
> 1 file changed, 14 insertions(+), 17 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
>index 01a4a852b8f43..943bab94119fa 100644
>--- a/drivers/gpu/drm/xe/xe_gt.c
>+++ b/drivers/gpu/drm/xe/xe_gt.c
>@@ -408,13 +408,11 @@ static void dump_pat_on_error(struct xe_gt *gt)
> static int gt_fw_domain_init(struct xe_gt *gt)
> {
> 	unsigned int fw_ref;
>-	int err, i;
>+	int err;
>
> 	fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>-	if (!fw_ref) {
>-		err = -ETIMEDOUT;
>-		goto err_hw_fence_irq;
>-	}
>+	if (!fw_ref)
>+		return -ETIMEDOUT;
>
> 	if (!xe_gt_is_media_type(gt)) {
> 		err = xe_ggtt_init(gt_to_tile(gt)->mem.ggtt);
>@@ -455,9 +453,6 @@ static int gt_fw_domain_init(struct xe_gt *gt)
> err_force_wake:
> 	dump_pat_on_error(gt);
> 	xe_force_wake_put(gt_to_fw(gt), fw_ref);
>-err_hw_fence_irq:
>-	for (i = 0; i < XE_ENGINE_CLASS_MAX; ++i)
>-		xe_hw_fence_irq_finish(&gt->fence_irq[i]);
>
> 	return err;
> }
>@@ -465,7 +460,7 @@ static int gt_fw_domain_init(struct xe_gt *gt)
> static int all_fw_domain_init(struct xe_gt *gt)
> {
> 	unsigned int fw_ref;
>-	int err, i;
>+	int err;
>
> 	fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
> 	if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
>@@ -543,8 +538,6 @@ static int all_fw_domain_init(struct xe_gt *gt)
>
> err_force_wake:
> 	xe_force_wake_put(gt_to_fw(gt), fw_ref);
>-	for (i = 0; i < XE_ENGINE_CLASS_MAX; ++i)
>-		xe_hw_fence_irq_finish(&gt->fence_irq[i]);
>
> 	return err;
> }

the two parts above should be fine and are similar to what I've sent in
my probe cleanup:

https://lore.kernel.org/intel-xe/20250131223140.4144292-7-lucas.demarchi@intel.com/


>@@ -596,35 +589,39 @@ int xe_gt_init(struct xe_gt *gt)
>
> 	err = xe_gt_pagefault_init(gt);
> 	if (err)
>-		return err;
>+		goto err;
>
> 	xe_mocs_init_early(gt);
>
> 	err = xe_gt_sysfs_init(gt);
> 	if (err)
>-		return err;
>+		goto err;
>
> 	err = gt_fw_domain_init(gt);
> 	if (err)
>-		return err;
>+		goto err;
>
> 	err = xe_gt_idle_init(&gt->gtidle);
> 	if (err)
>-		return err;
>+		goto err;
>
> 	err = xe_gt_freq_init(gt);
> 	if (err)
>-		return err;
>+		goto err;
>
> 	xe_force_wake_init_engines(gt, gt_to_fw(gt));
>
> 	err = all_fw_domain_init(gt);
> 	if (err)
>-		return err;
>+		goto err;
>
> 	xe_gt_record_user_engines(gt);
>
> 	return 0;
>+err:
>+	for (i = 0; i < XE_ENGINE_CLASS_MAX; ++i)
>+		xe_hw_fence_irq_finish(&gt->fence_irq[i]);
>+	return err;

this however is moving in the opposite direction from

https://lore.kernel.org/intel-xe/20250131223140.4144292-6-lucas.demarchi@intel.com/

Lucas De Marchi

> }
>
> /**
>-- 
>2.47.1
>

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH-resent-to-correct-ml 3/8] drm/xe: Add scoped guards for xe_force_wake
  2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 3/8] drm/xe: Add scoped guards for xe_force_wake Maarten Lankhorst
@ 2025-02-04 15:28   ` Lucas De Marchi
  2025-02-04 16:30   ` Michal Wajdeczko
  1 sibling, 0 replies; 17+ messages in thread
From: Lucas De Marchi @ 2025-02-04 15:28 UTC (permalink / raw)
  To: Maarten Lankhorst
  Cc: intel-xe, dri-devel, linux-kernel, Ingo Molnar, David Lechner,
	Peter Zijlstra, Will Deacon, Waiman Long, Boqun Feng

On Tue, Feb 04, 2025 at 02:22:32PM +0100, Maarten Lankhorst wrote:
>Instead of finding bugs where we may or may not release force_wake, I've
>decided to be inspired by the spinlock guards, and use the same ones to
>do xe_force_wake handling.
>
>Examples are added as documentation in xe_force_wake.c
>
>Signed-off-by: Maarten Lankhorst <dev@lankhorst.se>
>---
> drivers/gpu/drm/xe/xe_force_wake.c | 51 ++++++++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_force_wake.h | 15 +++++++++
> 2 files changed, 66 insertions(+)
>
>diff --git a/drivers/gpu/drm/xe/xe_force_wake.c b/drivers/gpu/drm/xe/xe_force_wake.c
>index 4f6784e5abf88..805c19f6de9e7 100644
>--- a/drivers/gpu/drm/xe/xe_force_wake.c
>+++ b/drivers/gpu/drm/xe/xe_force_wake.c
>@@ -16,6 +16,57 @@
>
> #define XE_FORCE_WAKE_ACK_TIMEOUT_MS	50
>
>+/**
>+ * DOC: Force wake handling
>+ *

doc here should start by explaining what force wake is, what it does,
and the flags to wake specific parts of the GPU.

>+ * Traditionally, the force wake handling has been done using the error prone
>+ * set of calls:

I'd start with the new/recommended way rather than digress on
non-recommended ways - this style doesn't age well over the years.

>+ *
>+ * int func(struct xe_force_wake *fw)
>+ * {
>+ * 	unsigned int fw_ref = xe_force_wake_get(fw, XE_FORCEWAKE_ALL);
>+ * 	if (!fw_ref)
>+ * 		return -ETIMEDOUT;
>+ *
>+ * 	err = do_something();
>+ *
>+ * 	xe_force_wake_put(fw, fw_ref);
>+ * 	return err;
>+ * }
>+ *
>+ * A new, failure-safe approach is by using the scoped helpers,
>+ * which changes the function to this:
>+ *
>+ * int func(struct xe_force_wake *fw)
>+ * {
>+ * 	scoped_cond_guard(xe_force_wake_get, return -ETIMEDOUT, fw, XE_FORCEWAKE_ALL) {
>+ * 		return do_something();
>+ * 	}
>+ * }
>+ *
>+ * For completeness, the following options also work:
>+ * void func(struct xe_force_wake *fw)
>+ * {
>+ * 	scoped_guard(xe_force_wake_get, fw, XE_FORCEWAKE_ALL) {
>+ * 		do_something_only_if_fw_acquired();
>+ * 	}
>+ * }
>+ *
>+ * You can use xe_force_wake instead of xe_force_wake_get if the code
>+ * must run even when acquiring the forcewake fails:
>+ * void func(struct xe_force_wake *fw)
>+ * {
>+ * 	scoped_guard(xe_force_wake, fw, XE_FORCEWAKE_ALL) {
>+ * 		always_do_something_maybe_fw();
>+ * 	}
>+ *
>+ * 	do_something_no_fw();
>+ *
>+ * 	guard(xe_force_wake)(fw, XE_FORCEWAKE_ALL);
>+ * 	always_do_something_maybe_fw();
>+ * }
>+ */
>+
> static const char *str_wake_sleep(bool wake)
> {
> 	return wake ? "wake" : "sleep";
>diff --git a/drivers/gpu/drm/xe/xe_force_wake.h b/drivers/gpu/drm/xe/xe_force_wake.h
>index 0e3e84bfa51c3..0fb1baae0a3a3 100644
>--- a/drivers/gpu/drm/xe/xe_force_wake.h
>+++ b/drivers/gpu/drm/xe/xe_force_wake.h
>@@ -9,6 +9,8 @@
> #include "xe_assert.h"
> #include "xe_force_wake_types.h"
>
>+#include <linux/cleanup.h>
>+
> struct xe_gt;
>
> void xe_force_wake_init_gt(struct xe_gt *gt,
>@@ -61,4 +63,17 @@ xe_force_wake_ref_has_domain(unsigned int fw_ref, enum xe_force_wake_domains dom
> 	return fw_ref & domain;
> }
>
>+DEFINE_LOCK_GUARD_1(xe_force_wake, struct xe_force_wake,
>+		    _T->fw_ref = xe_force_wake_get(_T->lock, domain),
>+		    xe_force_wake_put(_T->lock, _T->fw_ref),
>+		    unsigned int fw_ref, enum xe_force_wake_domains domain);
>+
>+DEFINE_LOCK_GUARD_1_COND(xe_force_wake, _get,
>+			 _T->fw_ref = xe_force_wake_get_all(_T->lock, domain),
>+			 enum xe_force_wake_domains domain);
>+
>+/* Only useful for guard xe_force_wake, guard xe_force_wake_get gets all or nothing */

please add a usage example here as it's where people trying to use the
interface will look, rather than the kernel-doc. But this seems not to
be used.

>+#define xe_force_wake_scope_has_domain(domain) \
>+	(xe_force_wake_ref_has_domain(scope.fw_ref, domain))

extra parentheses here?

Lucas De Marchi

>+
> #endif
>-- 
>2.47.1
>

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH-resent-to-correct-ml 5/8] drm/xe/coredump: Use guard helpers for xe_force_wake.
  2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 5/8] drm/xe/coredump: Use guard helpers for xe_force_wake Maarten Lankhorst
@ 2025-02-04 15:40   ` Lucas De Marchi
  0 siblings, 0 replies; 17+ messages in thread
From: Lucas De Marchi @ 2025-02-04 15:40 UTC (permalink / raw)
  To: Maarten Lankhorst
  Cc: intel-xe, dri-devel, linux-kernel, Ingo Molnar, David Lechner,
	Peter Zijlstra, Will Deacon, Waiman Long, Boqun Feng

On Tue, Feb 04, 2025 at 02:22:34PM +0100, Maarten Lankhorst wrote:
>---
> drivers/gpu/drm/xe/xe_devcoredump.c | 36 ++++++++++++++---------------
> 1 file changed, 17 insertions(+), 19 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/xe_devcoredump.c b/drivers/gpu/drm/xe/xe_devcoredump.c
>index 39fe485d20858..afe229fba8a9c 100644
>--- a/drivers/gpu/drm/xe/xe_devcoredump.c
>+++ b/drivers/gpu/drm/xe/xe_devcoredump.c
>@@ -233,7 +233,6 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
> 	struct xe_devcoredump_snapshot *ss = container_of(work, typeof(*ss), work);
> 	struct xe_devcoredump *coredump = container_of(ss, typeof(*coredump), snapshot);
> 	struct xe_device *xe = coredump_to_xe(coredump);
>-	unsigned int fw_ref;
>
> 	/*
> 	 * NB: Despite passing a GFP_ flags parameter here, more allocations are done
>@@ -247,12 +246,13 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
> 	xe_pm_runtime_get(xe);
>
> 	/* keep going if fw fails as we still want to save the memory and SW data */
>-	fw_ref = xe_force_wake_get(gt_to_fw(ss->gt), XE_FORCEWAKE_ALL);
>-	if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
>-		xe_gt_info(ss->gt, "failed to get forcewake for coredump capture\n");
>-	xe_vm_snapshot_capture_delayed(ss->vm);
>-	xe_guc_exec_queue_snapshot_capture_delayed(ss->ge);
>-	xe_force_wake_put(gt_to_fw(ss->gt), fw_ref);
>+	scoped_guard(xe_force_wake, gt_to_fw(ss->gt), XE_FORCEWAKE_ALL) {
>+		if (!xe_force_wake_scope_has_domain(XE_FORCEWAKE_ALL))
>+			xe_gt_info(ss->gt, "failed to get forcewake for coredump capture\n");

not sure why we are emitting an xe_gt_info() to the kernel log and just
letting the 2 calls above add garbage to the devcoredump - whoever is
processing the devcoredump later may have no clue about that log
message... but I'm also not seeing why we need XE_FORCEWAKE_ALL in those
calls. Aren't they just reading the memory?

Lucas De Marchi

>+
>+		xe_vm_snapshot_capture_delayed(ss->vm);
>+		xe_guc_exec_queue_snapshot_capture_delayed(ss->ge);
>+	}
>
> 	xe_pm_runtime_put(xe);
>
>@@ -277,7 +277,6 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
> 	u32 width_mask = (0x1 << q->width) - 1;
> 	const char *process_name = "no process";
>
>-	unsigned int fw_ref;
> 	bool cookie;
> 	int i;
>
>@@ -305,20 +304,19 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
> 	}
>
> 	/* keep going if fw fails as we still want to save the memory and SW data */
>-	fw_ref = xe_force_wake_get(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
>-
>-	ss->guc.log = xe_guc_log_snapshot_capture(&guc->log, true);
>-	ss->guc.ct = xe_guc_ct_snapshot_capture(&guc->ct);
>-	ss->ge = xe_guc_exec_queue_snapshot_capture(q);
>-	if (job)
>-		ss->job = xe_sched_job_snapshot_capture(job);
>-	ss->vm = xe_vm_snapshot_capture(q->vm);
>-
>-	xe_engine_snapshot_capture_for_queue(q);
>+	scoped_guard(xe_force_wake, gt_to_fw(ss->gt), XE_FORCEWAKE_ALL) {
>+		ss->guc.log = xe_guc_log_snapshot_capture(&guc->log, true);
>+		ss->guc.ct = xe_guc_ct_snapshot_capture(&guc->ct);
>+		ss->ge = xe_guc_exec_queue_snapshot_capture(q);
>+		if (job)
>+			ss->job = xe_sched_job_snapshot_capture(job);
>+		ss->vm = xe_vm_snapshot_capture(q->vm);
>+
>+		xe_engine_snapshot_capture_for_queue(q);
>+	}
>
> 	queue_work(system_unbound_wq, &ss->work);
>
>-	xe_force_wake_put(gt_to_fw(q->gt), fw_ref);
> 	dma_fence_end_signalling(cookie);
> }
>
>-- 
>2.47.1
>

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH-resent-to-correct-ml 3/8] drm/xe: Add scoped guards for xe_force_wake
  2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 3/8] drm/xe: Add scoped guards for xe_force_wake Maarten Lankhorst
  2025-02-04 15:28   ` Lucas De Marchi
@ 2025-02-04 16:30   ` Michal Wajdeczko
  2025-02-04 22:28     ` Maarten Lankhorst
  1 sibling, 1 reply; 17+ messages in thread
From: Michal Wajdeczko @ 2025-02-04 16:30 UTC (permalink / raw)
  To: Maarten Lankhorst, intel-xe
  Cc: dri-devel, linux-kernel, Ingo Molnar, David Lechner,
	Peter Zijlstra, Will Deacon, Waiman Long, Boqun Feng

Hi Maarten,

On 04.02.2025 14:22, Maarten Lankhorst wrote:
> Instead of finding bugs where we may or may not release force_wake, I've
> decided to be inspired by the spinlock guards, and use the same ones to
> do xe_force_wake handling.

You may want to take a look at [1], which was based on [2], that
introduce fw guard class (and it was already acked and reviewed).
Merging was postponed only due to a request to prepare larger series
that would convert all existing usages to the new model.

And similar guard approach for our RPM was proposed in [3]

Michal

[1] https://patchwork.freedesktop.org/series/141516/
[2] https://patchwork.freedesktop.org/series/134958/
[3] https://patchwork.freedesktop.org/series/134955/

> 
> Examples are added as documentation in xe_force_wake.c
> 
> Signed-off-by: Maarten Lankhorst <dev@lankhorst.se>
> ---
>  drivers/gpu/drm/xe/xe_force_wake.c | 51 ++++++++++++++++++++++++++++++
>  drivers/gpu/drm/xe/xe_force_wake.h | 15 +++++++++
>  2 files changed, 66 insertions(+)
> 
> diff --git a/drivers/gpu/drm/xe/xe_force_wake.c b/drivers/gpu/drm/xe/xe_force_wake.c
> index 4f6784e5abf88..805c19f6de9e7 100644
> --- a/drivers/gpu/drm/xe/xe_force_wake.c
> +++ b/drivers/gpu/drm/xe/xe_force_wake.c
> @@ -16,6 +16,57 @@
>  
>  #define XE_FORCE_WAKE_ACK_TIMEOUT_MS	50
>  
> +/**
> + * DOC: Force wake handling
> + *
> + * Traditionally, the force wake handling has been done using the error prone
> + * set of calls:
> + *
> + * int func(struct xe_force_wake *fw)
> + * {
> + * 	unsigned int fw_ref = xe_force_wake_get(fw, XE_FORCEWAKE_ALL);
> + * 	if (!fw_ref)
> + * 		return -ETIMEDOUT;
> + *
> + * 	err = do_something();
> + *
> + * 	xe_force_wake_put(fw, fw_ref);
> + * 	return err;
> + * }
> + *
> + * A new, failure-safe approach is by using the scoped helpers,
> + * which changes the function to this:
> + *
> + * int func(struct xe_force_wake *fw)
> + * {
> + * 	scoped_cond_guard(xe_force_wake_get, return -ETIMEDOUT, fw, XE_FORCEWAKE_ALL) {
> + * 		return do_something();
> + * 	}
> + * }
> + *
> + * For completeness, the following options also work:
> + * void func(struct xe_force_wake *fw)
> + * {
> + * 	scoped_guard(xe_force_wake_get, fw, XE_FORCEWAKE_ALL) {
> + * 		do_something_only_if_fw_acquired();
> + * 	}
> + * }
> + *
> + * You can use xe_force_wake instead of xe_force_wake_get if the code
> + * must run even when acquiring the forcewake fails:
> + * void func(struct xe_force_wake *fw)
> + * {
> + * 	scoped_guard(xe_force_wake, fw, XE_FORCEWAKE_ALL) {
> + * 		always_do_something_maybe_fw();
> + * 	}
> + *
> + * 	do_something_no_fw();
> + *
> + * 	guard(xe_force_wake)(fw, XE_FORCEWAKE_ALL);
> + * 	always_do_something_maybe_fw();
> + * }
> + */
> +
>  static const char *str_wake_sleep(bool wake)
>  {
>  	return wake ? "wake" : "sleep";
> diff --git a/drivers/gpu/drm/xe/xe_force_wake.h b/drivers/gpu/drm/xe/xe_force_wake.h
> index 0e3e84bfa51c3..0fb1baae0a3a3 100644
> --- a/drivers/gpu/drm/xe/xe_force_wake.h
> +++ b/drivers/gpu/drm/xe/xe_force_wake.h
> @@ -9,6 +9,8 @@
>  #include "xe_assert.h"
>  #include "xe_force_wake_types.h"
>  
> +#include <linux/cleanup.h>
> +
>  struct xe_gt;
>  
>  void xe_force_wake_init_gt(struct xe_gt *gt,
> @@ -61,4 +63,17 @@ xe_force_wake_ref_has_domain(unsigned int fw_ref, enum xe_force_wake_domains dom
>  	return fw_ref & domain;
>  }
>  
> +DEFINE_LOCK_GUARD_1(xe_force_wake, struct xe_force_wake,
> +		    _T->fw_ref = xe_force_wake_get(_T->lock, domain),
> +		    xe_force_wake_put(_T->lock, _T->fw_ref),
> +		    unsigned int fw_ref, enum xe_force_wake_domains domain);
> +
> +DEFINE_LOCK_GUARD_1_COND(xe_force_wake, _get,
> +			 _T->fw_ref = xe_force_wake_get_all(_T->lock, domain),
> +			 enum xe_force_wake_domains domain);
> +
> +/* Only useful for guard xe_force_wake, guard xe_force_wake_get gets all or nothing */
> +#define xe_force_wake_scope_has_domain(domain) \
> +	(xe_force_wake_ref_has_domain(scope.fw_ref, domain))
> +
>  #endif


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH-resent-to-correct-ml 0/8] drm/xe: Convert xe_force_wake calls to guard helpers.
  2025-02-04 13:22 [PATCH-resent-to-correct-ml 0/8] drm/xe: Convert xe_force_wake calls to guard helpers Maarten Lankhorst
                   ` (7 preceding siblings ...)
  2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 8/8] drm/xe/gt: Convert to xe_force_wake guard helpers Maarten Lankhorst
@ 2025-02-04 17:40 ` David Lechner
  2025-02-05 20:11   ` Maarten Lankhorst
  8 siblings, 1 reply; 17+ messages in thread
From: David Lechner @ 2025-02-04 17:40 UTC (permalink / raw)
  To: Maarten Lankhorst, intel-xe
  Cc: dri-devel, linux-kernel, Ingo Molnar, Peter Zijlstra, Will Deacon,
	Waiman Long, Boqun Feng

On 2/4/25 7:22 AM, Maarten Lankhorst wrote:
> Ignore my previous series please, it should have been sent to intel-xe, was sent to intel-gfx.
> 
> Instead of all this repetition of
> 
> {
> 	unsigned fw_ref;
> 
> 	fw_ref = xe_force_wake_get(fw, domain);
> 	if (!xe_force_wake_ref_has_domain(..))
> 		return -ETIMEDOUT;
> 
> 	...
> 
> out:
> 	xe_force_wake_put(fw_ref);
> 	return ret;
> }
> 
> I thought I would look at how to replace it with the guard helpers.
> It is easy, but it required some minor fixes to make DEFINE_LOCK_GUARD_1
> work with extra init arguments.
> 
> So I changed the function signature slightly to make the first optional argument
> a struct member (current behavior), and any additional argument goes to the init
> call.
> 
> This replaces the previous code with
> {
> 	scoped_cond_guard(xe_force_wake_get, return -ETIMEDOUT, fw, domain) {
> 		....
> 
> 		return ret;
> 	}
> }
> 
> I've also thought of playing with this:
> {
> 	CLASS(xe_force_wake_get, fw_ref)(fw, domain);
> 	if (!fw_ref.lock)
> 		return -ETIMEDOUT;
> 
> 	...
> 	return ret;
> }
> 
> I'm just fearing that the scoped_cond_guard makes it impossible to get this
> wrong, while in the second example it's not clear that it can fail, and that
> you have to check.
> 
> Let me know what you think!
> Feedback welcome for the header change as well, should probably go into the locking tree..
> 
When I tried to make a similar improvement, Linus said to please stop trying
with the conditional guard stuff [1]. So my advice is don't do it.

Just replace the:

> 	...
> 
> out:

with a helper function if you want to get rid of the gotos.

That is what we are doing in the iio subsystem [2][3].

[1]: https://lore.kernel.org/all/CAHk-=whn07tnDosPfn+UcAtWHBcLg=KqA16SHVv0GV4t8P1fHw@mail.gmail.com/
[2]: https://lore.kernel.org/all/20250105172613.1204781-1-jic23@kernel.org/
[3]: https://lore.kernel.org/all/20250202210045.1a9e85d7@jic23-huawei/

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH-resent-to-correct-ml 3/8] drm/xe: Add scoped guards for xe_force_wake
  2025-02-04 16:30   ` Michal Wajdeczko
@ 2025-02-04 22:28     ` Maarten Lankhorst
  2025-02-04 22:49       ` Rodrigo Vivi
  0 siblings, 1 reply; 17+ messages in thread
From: Maarten Lankhorst @ 2025-02-04 22:28 UTC (permalink / raw)
  To: Michal Wajdeczko, intel-xe
  Cc: dri-devel, linux-kernel, Ingo Molnar, David Lechner,
	Peter Zijlstra, Will Deacon, Waiman Long, Boqun Feng

Hey,


On 2025-02-04 17:30, Michal Wajdeczko wrote:
> Hi Maarten,
> 
> On 04.02.2025 14:22, Maarten Lankhorst wrote:
>> Instead of finding bugs where we may or may not release force_wake, I've
>> decided to be inspired by the spinlock guards, and use the same ones to
>> do xe_force_wake handling.
> 
> You may want to take a look at [1], which was based on [2], that
> introduce fw guard class (and it was already acked and reviewed).
> Merging was postponed only due to a request to prepare larger series
> that would convert all existing usages to the new model.
> 
> And similar guard approach for our RPM was proposed in [3]
> 
> Michal
> 
> [1] https://patchwork.freedesktop.org/series/141516/
> [2] https://patchwork.freedesktop.org/series/134958/
> [3] https://patchwork.freedesktop.org/series/134955/

Excellent. I'm glad we're in agreement that doing forcewake handling in 
guard handlers is a good thing. :-)

I have taken a look at the patch series. I think the approach I've taken 
is a refinement of your series. Yours is already nearly there, but it 
still keeps the rough edges of the original API.

To smooth them, I have created two constructors, xe_force_wake and 
xe_force_wake_get. The former is used when you want to run the code 
regardless of whether the acquire succeeds; the latter when you only 
want to run it on success.

This allows code like:
scoped_cond_guard(xe_force_wake_get, return -ETIMEDOUT, fw, XE_FORCE_WAKE_ALL) {}
to work flawlessly as intended, without having to check 
xe_force_wake_ref_has_domain(XE_FORCE_WAKE_ALL);

I think this cleanup removes a nasty source of errors.

When you don't care about failure:
scoped_guard(xe_force_wake, fw, XE_FORCE_WAKE_ALL) {
	if (!xe_force_wake_scope_has_domain(XE_FORCE_WAKE_ALL))
		printk("Oh noez, anyway..\n");

	/* Continue and pretend nothing happened */
}

And for optional code, same as scoped_cond_guard, but as scoped_guard:

scoped_guard(xe_force_wake_get, fw, XE_FORCE_WAKE_ALL) {
	/* Only runs this block if the acquire completely succeeded; otherwise use xe_force_wake */
}

All in all, I'm open for bikesheds, but I think this has the potential 
to improve xe_force_wake handling even further!

I wasn't aware of your previous attempt when I wrote this and fought 
with linux/cleanup.h; otherwise I would have taken it as a base and 
credited your work.

Cheers,
~Maarten



* Re: [PATCH-resent-to-correct-ml 3/8] drm/xe: Add scoped guards for xe_force_wake
  2025-02-04 22:28     ` Maarten Lankhorst
@ 2025-02-04 22:49       ` Rodrigo Vivi
  0 siblings, 0 replies; 17+ messages in thread
From: Rodrigo Vivi @ 2025-02-04 22:49 UTC (permalink / raw)
  To: Maarten Lankhorst
  Cc: Michal Wajdeczko, intel-xe, dri-devel, linux-kernel, Ingo Molnar,
	David Lechner, Peter Zijlstra, Will Deacon, Waiman Long,
	Boqun Feng

On Tue, Feb 04, 2025 at 11:28:03PM +0100, Maarten Lankhorst wrote:
> Hey,
> 
> 
> On 2025-02-04 17:30, Michal Wajdeczko wrote:
> > Hi Maarten,
> > 
> > On 04.02.2025 14:22, Maarten Lankhorst wrote:
> > > Instead of finding bugs where we may or may not release force_wake, I've
> > > decided to be inspired by the spinlock guards, and use the same ones to
> > > do xe_force_wake handling.
> > 
> > You may want to take a look at [1], which was based on [2], that
> > introduce fw guard class (and it was already acked and reviewed).
> > Merging was postponed only due to a request to prepare larger series
> > that would convert all existing usages to the new model.
> > 
> > And similar guard approach for our RPM was proposed in [3]
> > 
> > Michal
> > 
> > [1] https://patchwork.freedesktop.org/series/141516/
> > [2] https://patchwork.freedesktop.org/series/134958/
> > [3] https://patchwork.freedesktop.org/series/134955/
> 
> Excellent. I'm glad we're in agreement that doing forcewake handling in
> guard handlers is a good thing. :-)

Just for the record, I had a similar feeling back then, and also now
with the new series: I believe the code itself becomes harder to read
and follow.

But if it really is as big an advantage for protection as you are all
advocating, let's go ahead.

> 
> I have taken a look at the patch series. I think the approach I've taken is
> a refinement of your series. Yours is already nearly there, but it still
> keeps the rough edges of the original API.
> 
> To smooth them, I have created two constructors, xe_force_wake and
> xe_force_wake_get. The former is used when you want to run the code
> regardless of whether the acquire succeeds; the latter when you only
> want to run it on success.
> 
> This allows code like:
> scoped_cond_guard(xe_force_wake_get, return -ETIMEDOUT, fw, XE_FORCE_WAKE_ALL) {}
> to work flawlessly as intended, without having to check
> xe_force_wake_ref_has_domain(XE_FORCE_WAKE_ALL);
> 
> I think this cleanup removes a nasty source of errors.
> 
> When you don't care about failure:
> scoped_guard(xe_force_wake, fw, XE_FORCE_WAKE_ALL) {
> 	if (!xe_force_wake_scope_has_domain(XE_FORCE_WAKE_ALL))
> 		printk("Oh noez, anyway..\n");
> 
> 	/* Continue and pretend nothing happened */
> }
> 
> And for optional code, same as scoped_cond_guard, but as scoped_guard:
> 
> scoped_guard(xe_force_wake_get, fw, XE_FORCE_WAKE_ALL) {
> 	/* Only runs this block if the acquire completely succeeded; otherwise use xe_force_wake */
> }
> 
> All in all, I'm open for bikesheds, but I think this has the potential to
> improve xe_force_wake handling even further!
> 
> I wasn't aware of your previous attempt when I wrote this and fought
> with linux/cleanup.h; otherwise I would have taken it as a base and
> credited your work.
> 
> Cheers,
> ~Maarten
> 


* Re: [PATCH-resent-to-correct-ml 0/8] drm/xe: Convert xe_force_wake calls to guard helpers.
  2025-02-04 17:40 ` [PATCH-resent-to-correct-ml 0/8] drm/xe: Convert xe_force_wake calls to " David Lechner
@ 2025-02-05 20:11   ` Maarten Lankhorst
  0 siblings, 0 replies; 17+ messages in thread
From: Maarten Lankhorst @ 2025-02-05 20:11 UTC (permalink / raw)
  To: David Lechner, intel-xe
  Cc: dri-devel, linux-kernel, Ingo Molnar, Peter Zijlstra, Will Deacon,
	Waiman Long, Boqun Feng

Hey,

Thanks for your feedback. I think in this case you are right, and it 
will become unreadable.

I'll try to make sparse annotations work, and then see if we can enable 
it in CI for all.

Cheers,
~Maarten

On 2025-02-04 18:40, David Lechner wrote:
> On 2/4/25 7:22 AM, Maarten Lankhorst wrote:
>> Ignore my previous series please, it should have been sent to intel-xe, was sent to intel-gfx.
>>
>> Instead of all this repetition of
>>
>> {
>> 	unsigned fw_ref;
>>
>> 	fw_ref = xe_force_wake_get(fw, domain);
>> 	if (!xe_force_wake_ref_has_domain(..))
>> 		return -ETIMEDOUT;
>>
>> 	...
>>
>> out:
>> 	xe_force_wake_put(fw_ref);
>> 	return ret;
>> }
>>
>> I thought I would look at how to replace it with the guard helpers.
>> It is easy, but it required some minor fixes to make DEFINE_LOCK_GUARD_1
>> work with extra init arguments.
>>
>> So I changed the function signature slightly to make the first optional argument
>> a struct member (current behavior), and any additional argument goes to the init
>> call.
>>
>> This replaces the previous code with
>> {
>> 	scoped_cond_guard(xe_force_wake_get, return -ETIMEDOUT, fw, domain) {
>> 		....
>>
>> 		return ret;
>> 	}
>> }
>>
>> I've also thought of playing with this:
>> {
>> 	CLASS(xe_force_wake_get, fw_ref)(fw, domain);
>> 	if (!fw_ref.lock)
>> 		return -ETIMEDOUT;
>>
>> 	...
>> 	return ret;
>> }
>>
>> My feeling is that scoped_cond_guard makes it impossible to get this
>> wrong, while in the second example it's not clear that the acquire can
>> fail and that you have to check.
>>
>> Let me know what you think!
>> Feedback welcome for the header change as well; it should probably go into the locking tree.
>>
> When I tried to make a similar improvement, Linus said to please stop trying
> with the conditional guard stuff [1]. So my advice is don't do it.
> 
> Just replace the:
> 
>> 	...
>>
>> out:
> 
> with a helper function if you want to get rid of the gotos.
> 
> That is what we are doing in the iio subsystem [2][3].
> 
> [1]: https://lore.kernel.org/all/CAHk-=whn07tnDosPfn+UcAtWHBcLg=KqA16SHVv0GV4t8P1fHw@mail.gmail.com/
> [2]: https://lore.kernel.org/all/20250105172613.1204781-1-jic23@kernel.org/
> [3]: https://lore.kernel.org/all/20250202210045.1a9e85d7@jic23-huawei/



end of thread, other threads:[~2025-02-05 20:11 UTC | newest]

Thread overview: 17+ messages
2025-02-04 13:22 [PATCH-resent-to-correct-ml 0/8] drm/xe: Convert xe_force_wake calls to guard helpers Maarten Lankhorst
2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 1/8] header/cleanup.h: Add _init_args to DEFINE_LOCK_GUARD_1(_COND) Maarten Lankhorst
2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 2/8] drm/xe/gt: Unify xe_hw_fence_irq_finish() calls Maarten Lankhorst
2025-02-04 15:20   ` Lucas De Marchi
2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 3/8] drm/xe: Add scoped guards for xe_force_wake Maarten Lankhorst
2025-02-04 15:28   ` Lucas De Marchi
2025-02-04 16:30   ` Michal Wajdeczko
2025-02-04 22:28     ` Maarten Lankhorst
2025-02-04 22:49       ` Rodrigo Vivi
2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 4/8] drm/xe: Add xe_force_wake_get_all Maarten Lankhorst
2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 5/8] drm/xe/coredump: Use guard helpers for xe_force_wake Maarten Lankhorst
2025-02-04 15:40   ` Lucas De Marchi
2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 6/8] drm/xe/gsc: Use guard helper for xe_gsc_print_info Maarten Lankhorst
2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 7/8] drm/xe/vram: Use xe_force_wake guard helper Maarten Lankhorst
2025-02-04 13:22 ` [PATCH-resent-to-correct-ml 8/8] drm/xe/gt: Convert to xe_force_wake guard helpers Maarten Lankhorst
2025-02-04 17:40 ` [PATCH-resent-to-correct-ml 0/8] drm/xe: Convert xe_force_wake calls to " David Lechner
2025-02-05 20:11   ` Maarten Lankhorst
