* [PATCH v2 01/30] drm/xe/forcewake: Improve kerneldoc
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-12 14:04 ` Gustavo Sousa
2025-11-10 23:20 ` [PATCH v2 02/30] drm/xe/eustall: Store forcewake reference in stream structure Matt Roper
` (33 subsequent siblings)
34 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Improve the kerneldoc for forcewake a bit to give more detail about what
the structures represent.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_force_wake_types.h | 26 ++++++++++++++++++++++--
1 file changed, 24 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_force_wake_types.h b/drivers/gpu/drm/xe/xe_force_wake_types.h
index 12d6e2367455..9cfa28faf7bc 100644
--- a/drivers/gpu/drm/xe/xe_force_wake_types.h
+++ b/drivers/gpu/drm/xe/xe_force_wake_types.h
@@ -52,7 +52,22 @@ enum xe_force_wake_domains {
};
/**
- * struct xe_force_wake_domain - Xe force wake domains
+ * struct xe_force_wake_domain - Xe force wake power domain
+ *
+ * Represents a individual device-internal power domain. The driver must
+ * ensure the power domain is awake before accessing registers or other
+ * hardware functionality that is part of the power domain. Since different
+ * driver threads may access hardware units simultaneously, a reference count
+ * is used to ensure that the domain remains awake as long as any software
+ * is using the part of the hardware covered by the power domain.
+ *
+ * Hardware provides a register interface to allow the driver to request
+ * wake/sleep of power domains, although in most cases the actual action of
+ * powering the hardware up/down is handled by firmware (and may be subject to
+ * requirements and constraints outside of the driver's visibility) so the
+ * driver needs to wait for an acknowledgment that a wake request has been
+ * acted upon before accessing the parts of the hardware that reside within the
+ * power domain.
*/
struct xe_force_wake_domain {
/** @id: domain force wake id */
@@ -70,7 +85,14 @@ struct xe_force_wake_domain {
};
/**
- * struct xe_force_wake - Xe force wake
+ * struct xe_force_wake - Xe force wake collection
+ *
+ * Represents a collection of related power domains (struct
+ * xe_force_wake_domain) associated with a subunit of the device.
+ *
+ * Currently only used for GT power domains (where the term "forcewake" is used
+ * in the hardware documentation), although the interface could be extended to
+ * power wells in other parts of the hardware in the future.
*/
struct xe_force_wake {
/** @gt: back pointers to GT */
--
2.51.1
^ permalink raw reply related [flat|nested] 74+ messages in thread

* Re: [PATCH v2 01/30] drm/xe/forcewake: Improve kerneldoc
2025-11-10 23:20 ` [PATCH v2 01/30] drm/xe/forcewake: Improve kerneldoc Matt Roper
@ 2025-11-12 14:04 ` Gustavo Sousa
0 siblings, 0 replies; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-12 14:04 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper
Quoting Matt Roper (2025-11-10 20:20:19-03:00)
>Improve the kerneldoc for forcewake a bit to give more detail about what
>the structures represent.
>
>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
>---
> drivers/gpu/drm/xe/xe_force_wake_types.h | 26 ++++++++++++++++++++++--
> 1 file changed, 24 insertions(+), 2 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/xe_force_wake_types.h b/drivers/gpu/drm/xe/xe_force_wake_types.h
>index 12d6e2367455..9cfa28faf7bc 100644
>--- a/drivers/gpu/drm/xe/xe_force_wake_types.h
>+++ b/drivers/gpu/drm/xe/xe_force_wake_types.h
>@@ -52,7 +52,22 @@ enum xe_force_wake_domains {
> };
>
> /**
>- * struct xe_force_wake_domain - Xe force wake domains
>+ * struct xe_force_wake_domain - Xe force wake power domain
>+ *
>+ * Represents a individual device-internal power domain. The driver must
s/a individual/an individual/
Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
>+ * ensure the power domain is awake before accessing registers or other
>+ * hardware functionality that is part of the power domain. Since different
>+ * driver threads may access hardware units simultaneously, a reference count
>+ * is used to ensure that the domain remains awake as long as any software
>+ * is using the part of the hardware covered by the power domain.
>+ *
>+ * Hardware provides a register interface to allow the driver to request
>+ * wake/sleep of power domains, although in most cases the actual action of
>+ * powering the hardware up/down is handled by firmware (and may be subject to
>+ * requirements and constraints outside of the driver's visibility) so the
>+ * driver needs to wait for an acknowledgment that a wake request has been
>+ * acted upon before accessing the parts of the hardware that reside within the
>+ * power domain.
> */
> struct xe_force_wake_domain {
> /** @id: domain force wake id */
>@@ -70,7 +85,14 @@ struct xe_force_wake_domain {
> };
>
> /**
>- * struct xe_force_wake - Xe force wake
>+ * struct xe_force_wake - Xe force wake collection
>+ *
>+ * Represents a collection of related power domains (struct
>+ * xe_force_wake_domain) associated with a subunit of the device.
>+ *
>+ * Currently only used for GT power domains (where the term "forcewake" is used
>+ * in the hardware documentation), although the interface could be extended to
>+ * power wells in other parts of the hardware in the future.
> */
> struct xe_force_wake {
> /** @gt: back pointers to GT */
>--
>2.51.1
>
^ permalink raw reply [flat|nested] 74+ messages in thread
* [PATCH v2 02/30] drm/xe/eustall: Store forcewake reference in stream structure
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
2025-11-10 23:20 ` [PATCH v2 01/30] drm/xe/forcewake: Improve kerneldoc Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-12 15:36 ` Gustavo Sousa
2025-11-10 23:20 ` [PATCH v2 03/30] drm/xe/oa: " Matt Roper
` (32 subsequent siblings)
34 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper, Harish Chegondi
Calls to xe_force_wake_put() should generally pass the exact reference
returned by xe_force_wake_get(). Since EU stall grabs and releases
forcewake in different functions, xe_eu_stall_disable_locked() is
currently calling put with a hardcoded RENDER domain. Although this
works for now, it's somewhat fragile in case the power domain(s)
required by stall sampling change in the future, or if workarounds show
up that require us to obtain additional domains.
Stash the original reference obtained during stream enable inside the
stream structure so that we can use it directly when the stream is
disabled.
Cc: Harish Chegondi <harish.chegondi@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_eu_stall.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_eu_stall.c b/drivers/gpu/drm/xe/xe_eu_stall.c
index 650e45f6a7c7..97dfb7945b7a 100644
--- a/drivers/gpu/drm/xe/xe_eu_stall.c
+++ b/drivers/gpu/drm/xe/xe_eu_stall.c
@@ -49,6 +49,7 @@ struct xe_eu_stall_data_stream {
wait_queue_head_t poll_wq;
size_t data_record_size;
size_t per_xecore_buf_size;
+ unsigned int fw_ref;
struct xe_gt *gt;
struct xe_bo *bo;
@@ -660,13 +661,12 @@ static int xe_eu_stall_stream_enable(struct xe_eu_stall_data_stream *stream)
struct per_xecore_buf *xecore_buf;
struct xe_gt *gt = stream->gt;
u16 group, instance;
- unsigned int fw_ref;
int xecore;
/* Take runtime pm ref and forcewake to disable RC6 */
xe_pm_runtime_get(gt_to_xe(gt));
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_RENDER);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FW_RENDER)) {
+ stream->fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_RENDER);
+ if (!xe_force_wake_ref_has_domain(stream->fw_ref, XE_FW_RENDER)) {
xe_gt_err(gt, "Failed to get RENDER forcewake\n");
xe_pm_runtime_put(gt_to_xe(gt));
return -ETIMEDOUT;
@@ -832,7 +832,7 @@ static int xe_eu_stall_disable_locked(struct xe_eu_stall_data_stream *stream)
xe_gt_mcr_multicast_write(gt, ROW_CHICKEN2,
_MASKED_BIT_DISABLE(DISABLE_DOP_GATING));
- xe_force_wake_put(gt_to_fw(gt), XE_FW_RENDER);
+ xe_force_wake_put(gt_to_fw(gt), stream->fw_ref);
xe_pm_runtime_put(gt_to_xe(gt));
return 0;
--
2.51.1
^ permalink raw reply related [flat|nested] 74+ messages in thread

* Re: [PATCH v2 02/30] drm/xe/eustall: Store forcewake reference in stream structure
2025-11-10 23:20 ` [PATCH v2 02/30] drm/xe/eustall: Store forcewake reference in stream structure Matt Roper
@ 2025-11-12 15:36 ` Gustavo Sousa
0 siblings, 0 replies; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-12 15:36 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper, Harish Chegondi
Quoting Matt Roper (2025-11-10 20:20:20-03:00)
>Calls to xe_force_wake_put() should generally pass the exact reference
>returned by xe_force_wake_get(). Since EU stall grabs and releases
>forcewake in different functions, xe_eu_stall_disable_locked() is
>currently calling put with a hardcoded RENDER domain. Although this
>works for now, it's somewhat fragile in case the power domain(s)
>required by stall sampling change in the future, or if workarounds show
>up that require us to obtain additional domains.
>
>Stash the original reference obtained during stream enable inside the
>stream structure so that we can use it directly when the stream is
>disabled.
>
>Cc: Harish Chegondi <harish.chegondi@intel.com>
>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
>---
> drivers/gpu/drm/xe/xe_eu_stall.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/xe_eu_stall.c b/drivers/gpu/drm/xe/xe_eu_stall.c
>index 650e45f6a7c7..97dfb7945b7a 100644
>--- a/drivers/gpu/drm/xe/xe_eu_stall.c
>+++ b/drivers/gpu/drm/xe/xe_eu_stall.c
>@@ -49,6 +49,7 @@ struct xe_eu_stall_data_stream {
> wait_queue_head_t poll_wq;
> size_t data_record_size;
> size_t per_xecore_buf_size;
>+ unsigned int fw_ref;
>
> struct xe_gt *gt;
> struct xe_bo *bo;
>@@ -660,13 +661,12 @@ static int xe_eu_stall_stream_enable(struct xe_eu_stall_data_stream *stream)
> struct per_xecore_buf *xecore_buf;
> struct xe_gt *gt = stream->gt;
> u16 group, instance;
>- unsigned int fw_ref;
> int xecore;
>
> /* Take runtime pm ref and forcewake to disable RC6 */
> xe_pm_runtime_get(gt_to_xe(gt));
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_RENDER);
>- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FW_RENDER)) {
>+ stream->fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_RENDER);
>+ if (!xe_force_wake_ref_has_domain(stream->fw_ref, XE_FW_RENDER)) {
> xe_gt_err(gt, "Failed to get RENDER forcewake\n");
> xe_pm_runtime_put(gt_to_xe(gt));
> return -ETIMEDOUT;
>@@ -832,7 +832,7 @@ static int xe_eu_stall_disable_locked(struct xe_eu_stall_data_stream *stream)
> xe_gt_mcr_multicast_write(gt, ROW_CHICKEN2,
> _MASKED_BIT_DISABLE(DISABLE_DOP_GATING));
>
>- xe_force_wake_put(gt_to_fw(gt), XE_FW_RENDER);
>+ xe_force_wake_put(gt_to_fw(gt), stream->fw_ref);
I took a quick look, and I would even say that this change is also
good because it avoids decrementing the refcount if the corresponding
get failed and we somehow ended up making this put call.
Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
> xe_pm_runtime_put(gt_to_xe(gt));
>
> return 0;
>--
>2.51.1
>
^ permalink raw reply [flat|nested] 74+ messages in thread
* [PATCH v2 03/30] drm/xe/oa: Store forcewake reference in stream structure
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
2025-11-10 23:20 ` [PATCH v2 01/30] drm/xe/forcewake: Improve kerneldoc Matt Roper
2025-11-10 23:20 ` [PATCH v2 02/30] drm/xe/eustall: Store forcewake reference in stream structure Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-12 16:11 ` Gustavo Sousa
2025-11-13 17:10 ` Dixit, Ashutosh
2025-11-10 23:20 ` [PATCH v2 04/30] drm/xe/forcewake: Add scope-based cleanup for forcewake Matt Roper
` (31 subsequent siblings)
34 siblings, 2 replies; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper, Ashutosh Dixit
Calls to xe_force_wake_put() should generally pass the exact reference
returned by xe_force_wake_get(). Since OA grabs and releases forcewake
in different functions, xe_oa_stream_destroy() is currently calling put
with a hardcoded ALL mask. Although this works for now, it's somewhat
fragile in case OA moves to more precise power domain management in the
future.
Stash the original reference obtained during stream initialization
inside the stream structure so that we can use it directly when the
stream is destroyed.
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_oa.c | 9 ++++-----
drivers/gpu/drm/xe/xe_oa_types.h | 3 +++
2 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_oa.c b/drivers/gpu/drm/xe/xe_oa.c
index 7a13a7bd99a6..87a2bf53d661 100644
--- a/drivers/gpu/drm/xe/xe_oa.c
+++ b/drivers/gpu/drm/xe/xe_oa.c
@@ -870,7 +870,7 @@ static void xe_oa_stream_destroy(struct xe_oa_stream *stream)
xe_oa_free_oa_buffer(stream);
- xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+ xe_force_wake_put(gt_to_fw(gt), stream->fw_ref);
xe_pm_runtime_put(stream->oa->xe);
/* Wa_1509372804:pvc: Unset the override of GUCRC mode to enable rc6 */
@@ -1717,7 +1717,6 @@ static int xe_oa_stream_init(struct xe_oa_stream *stream,
struct xe_oa_open_param *param)
{
struct xe_gt *gt = param->hwe->gt;
- unsigned int fw_ref;
int ret;
stream->exec_q = param->exec_q;
@@ -1772,8 +1771,8 @@ static int xe_oa_stream_init(struct xe_oa_stream *stream,
/* Take runtime pm ref and forcewake to disable RC6 */
xe_pm_runtime_get(stream->oa->xe);
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
+ stream->fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+ if (!xe_force_wake_ref_has_domain(stream->fw_ref, XE_FORCEWAKE_ALL)) {
ret = -ETIMEDOUT;
goto err_fw_put;
}
@@ -1818,7 +1817,7 @@ static int xe_oa_stream_init(struct xe_oa_stream *stream,
err_free_oa_buf:
xe_oa_free_oa_buffer(stream);
err_fw_put:
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(gt_to_fw(gt), stream->fw_ref);
xe_pm_runtime_put(stream->oa->xe);
if (stream->override_gucrc)
xe_gt_WARN_ON(gt, xe_guc_pc_unset_gucrc_mode(&gt->uc.guc.pc));
diff --git a/drivers/gpu/drm/xe/xe_oa_types.h b/drivers/gpu/drm/xe/xe_oa_types.h
index daf701b5d48b..cf080f412189 100644
--- a/drivers/gpu/drm/xe/xe_oa_types.h
+++ b/drivers/gpu/drm/xe/xe_oa_types.h
@@ -264,5 +264,8 @@ struct xe_oa_stream {
/** @syncs: syncs to wait on and to signal */
struct xe_sync_entry *syncs;
+
+ /** @fw_ref: Forcewake reference */
+ unsigned int fw_ref;
};
#endif
--
2.51.1
^ permalink raw reply related [flat|nested] 74+ messages in thread

* Re: [PATCH v2 03/30] drm/xe/oa: Store forcewake reference in stream structure
2025-11-10 23:20 ` [PATCH v2 03/30] drm/xe/oa: " Matt Roper
@ 2025-11-12 16:11 ` Gustavo Sousa
2025-11-13 17:10 ` Dixit, Ashutosh
1 sibling, 0 replies; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-12 16:11 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper, Ashutosh Dixit
Quoting Matt Roper (2025-11-10 20:20:21-03:00)
>Calls to xe_force_wake_put() should generally pass the exact reference
>returned by xe_force_wake_get(). Since OA grabs and releases forcewake
>in different functions, xe_oa_stream_destroy() is currently calling put
>with a hardcoded ALL mask. Although this works for now, it's somewhat
>fragile in case OA moves to more precise power domain management in the
>future.
>
>Stash the original reference obtained during stream initialization
>inside the stream structure so that we can use it directly when the
>stream is destroyed.
>
>Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
>---
> drivers/gpu/drm/xe/xe_oa.c | 9 ++++-----
> drivers/gpu/drm/xe/xe_oa_types.h | 3 +++
> 2 files changed, 7 insertions(+), 5 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/xe_oa.c b/drivers/gpu/drm/xe/xe_oa.c
>index 7a13a7bd99a6..87a2bf53d661 100644
>--- a/drivers/gpu/drm/xe/xe_oa.c
>+++ b/drivers/gpu/drm/xe/xe_oa.c
>@@ -870,7 +870,7 @@ static void xe_oa_stream_destroy(struct xe_oa_stream *stream)
>
> xe_oa_free_oa_buffer(stream);
>
>- xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL);
>+ xe_force_wake_put(gt_to_fw(gt), stream->fw_ref);
> xe_pm_runtime_put(stream->oa->xe);
>
> /* Wa_1509372804:pvc: Unset the override of GUCRC mode to enable rc6 */
>@@ -1717,7 +1717,6 @@ static int xe_oa_stream_init(struct xe_oa_stream *stream,
> struct xe_oa_open_param *param)
> {
> struct xe_gt *gt = param->hwe->gt;
>- unsigned int fw_ref;
> int ret;
>
> stream->exec_q = param->exec_q;
>@@ -1772,8 +1771,8 @@ static int xe_oa_stream_init(struct xe_oa_stream *stream,
>
> /* Take runtime pm ref and forcewake to disable RC6 */
> xe_pm_runtime_get(stream->oa->xe);
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
>- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
>+ stream->fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
>+ if (!xe_force_wake_ref_has_domain(stream->fw_ref, XE_FORCEWAKE_ALL)) {
> ret = -ETIMEDOUT;
> goto err_fw_put;
> }
>@@ -1818,7 +1817,7 @@ static int xe_oa_stream_init(struct xe_oa_stream *stream,
> err_free_oa_buf:
> xe_oa_free_oa_buffer(stream);
> err_fw_put:
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
>+ xe_force_wake_put(gt_to_fw(gt), stream->fw_ref);
> xe_pm_runtime_put(stream->oa->xe);
> if (stream->override_gucrc)
>		xe_gt_WARN_ON(gt, xe_guc_pc_unset_gucrc_mode(&gt->uc.guc.pc));
>diff --git a/drivers/gpu/drm/xe/xe_oa_types.h b/drivers/gpu/drm/xe/xe_oa_types.h
>index daf701b5d48b..cf080f412189 100644
>--- a/drivers/gpu/drm/xe/xe_oa_types.h
>+++ b/drivers/gpu/drm/xe/xe_oa_types.h
>@@ -264,5 +264,8 @@ struct xe_oa_stream {
>
> /** @syncs: syncs to wait on and to signal */
> struct xe_sync_entry *syncs;
>+
>+ /** @fw_ref: Forcewake reference */
>+ unsigned int fw_ref;
> };
> #endif
>--
>2.51.1
>
^ permalink raw reply [flat|nested] 74+ messages in thread

* Re: [PATCH v2 03/30] drm/xe/oa: Store forcewake reference in stream structure
2025-11-10 23:20 ` [PATCH v2 03/30] drm/xe/oa: " Matt Roper
2025-11-12 16:11 ` Gustavo Sousa
@ 2025-11-13 17:10 ` Dixit, Ashutosh
1 sibling, 0 replies; 74+ messages in thread
From: Dixit, Ashutosh @ 2025-11-13 17:10 UTC (permalink / raw)
To: Matt Roper; +Cc: intel-xe
On Mon, 10 Nov 2025 15:20:21 -0800, Matt Roper wrote:
>
> Calls to xe_force_wake_put() should generally pass the exact reference
> returned by xe_force_wake_get(). Since OA grabs and releases forcewake
> in different functions, xe_oa_stream_destroy() is currently calling put
> with a hardcoded ALL mask. Although this works for now, it's somewhat
> fragile in case OA moves to more precise power domain management in the
> future.
>
> Stash the original reference obtained during stream initialization
> inside the stream structure so that we can use it directly when the
> stream is destroyed.
Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
^ permalink raw reply [flat|nested] 74+ messages in thread
* [PATCH v2 04/30] drm/xe/forcewake: Add scope-based cleanup for forcewake
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (2 preceding siblings ...)
2025-11-10 23:20 ` [PATCH v2 03/30] drm/xe/oa: " Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-12 20:00 ` Gustavo Sousa
2025-11-10 23:20 ` [PATCH v2 05/30] drm/xe/pm: Add scope-based cleanup helper for runtime PM Matt Roper
` (30 subsequent siblings)
34 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper, Michal Wajdeczko
Since forcewake uses a reference counting get/put model, there are many
places where we need to be careful to drop the forcewake reference when
bailing out of a function early on an error path. Add scope-based
cleanup options that can be used in place of explicit get/put to help
prevent mistakes in this area.
Examples:
CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
Obtain forcewake on the XE_FW_GT domain and hold it until the
end of the current block. The wakeref will be dropped
automatically when the current scope is exited by any means
(return, break, reaching the end of the block, etc.).
xe_with_force_wake(fw_ref, gt_to_fw(ss->gt), XE_FORCEWAKE_ALL) {
...
}
Hold all forcewake domains for the following block. As with the
CLASS usage, forcewake will be dropped automatically when the
block is exited by any means.
Use of these cleanup helpers should allow us to remove some ugly
goto-based error handling and help avoid mistakes in functions with lots
of early error exits.
v2:
- Create a separate constructor that just wraps xe_force_wake_get for
use in the class. This eliminates the need to update the signature
of xe_force_wake_get(). (Michal)
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_force_wake.h | 28 ++++++++++++++++++++++++++++
1 file changed, 28 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_force_wake.h b/drivers/gpu/drm/xe/xe_force_wake.h
index 0e3e84bfa51c..0204503b53a0 100644
--- a/drivers/gpu/drm/xe/xe_force_wake.h
+++ b/drivers/gpu/drm/xe/xe_force_wake.h
@@ -61,4 +61,32 @@ xe_force_wake_ref_has_domain(unsigned int fw_ref, enum xe_force_wake_domains dom
return fw_ref & domain;
}
+struct xe_force_wake_ref {
+ struct xe_force_wake *fw;
+ unsigned int domains;
+};
+
+static struct xe_force_wake_ref
+xe_force_wake_constructor(struct xe_force_wake *fw, unsigned int domains)
+{
+ struct xe_force_wake_ref fw_ref = { .fw = fw };
+
+ fw_ref.domains = xe_force_wake_get(fw, domains);
+
+ return fw_ref;
+}
+
+DEFINE_CLASS(xe_force_wake, struct xe_force_wake_ref,
+ xe_force_wake_put(_T.fw, _T.domains),
+ xe_force_wake_constructor(fw, domains),
+ struct xe_force_wake *fw, unsigned int domains);
+
+/*
+ * Scoped helper for the forcewake class, using the same trick as scoped_guard()
+ * to bind the lifetime to the next statement/block.
+ */
+#define xe_with_force_wake(ref, fw, domains) \
+ for (CLASS(xe_force_wake, ref)(fw, domains), *done = NULL; \
+ !done; done = (void *)1)
+
#endif
--
2.51.1
^ permalink raw reply related [flat|nested] 74+ messages in thread

* Re: [PATCH v2 04/30] drm/xe/forcewake: Add scope-based cleanup for forcewake
2025-11-10 23:20 ` [PATCH v2 04/30] drm/xe/forcewake: Add scope-based cleanup for forcewake Matt Roper
@ 2025-11-12 20:00 ` Gustavo Sousa
2025-11-12 21:01 ` Matt Roper
2025-11-12 21:16 ` Gustavo Sousa
0 siblings, 2 replies; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-12 20:00 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper, Michal Wajdeczko
Quoting Matt Roper (2025-11-10 20:20:22-03:00)
>Since forcewake uses a reference counting get/put model, there are many
>places where we need to be careful to drop the forcewake reference when
>bailing out of a function early on an error path. Add scope-based
>cleanup options that can be used in place of explicit get/put to help
>prevent mistakes in this area.
>
>Examples:
>
> CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
>
> Obtain forcewake on the XE_FW_GT domain and hold it until the
> end of the current block. The wakeref will be dropped
> automatically when the current scope is exited by any means
> (return, break, reaching the end of the block, etc.).
Looking further down in the patches, I see that fw_ref is only being
used to check for success. I wish there was a way for us to do
something like:
if (xe_force_wake_require(gt_to_fw(gt), XE_FW_GT))
return; /* ... or whatever that handles the force wake error. */
, but I can't think of a way to define such a macro, since we need a
variable in the scope where xe_force_wake_require() would be called for
the cleanup to work properly.
>
> xe_with_force_wake(fw_ref, gt_to_fw(ss->gt), XE_FORCEWAKE_ALL) {
In display code, we usually have with_intel_something(). I wonder if we
should follow suit and use with_xe_force_wake().
> ...
> }
>
> Hold all forcewake domains for the following block. As with the
> CLASS usage, forcewake will be dropped automatically when the
> block is exited by any means.
>
>Use of these cleanup helpers should allow us to remove some ugly
>goto-based error handling and help avoid mistakes in functions with lots
>of early error exits.
>
>v2:
> - Create a separate constructor that just wraps xe_force_wake_get for
> use in the class. This eliminates the need to update the signature
> of xe_force_wake_get(). (Michal)
>
>Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
>---
> drivers/gpu/drm/xe/xe_force_wake.h | 28 ++++++++++++++++++++++++++++
> 1 file changed, 28 insertions(+)
>
>diff --git a/drivers/gpu/drm/xe/xe_force_wake.h b/drivers/gpu/drm/xe/xe_force_wake.h
>index 0e3e84bfa51c..0204503b53a0 100644
>--- a/drivers/gpu/drm/xe/xe_force_wake.h
>+++ b/drivers/gpu/drm/xe/xe_force_wake.h
>@@ -61,4 +61,32 @@ xe_force_wake_ref_has_domain(unsigned int fw_ref, enum xe_force_wake_domains dom
> return fw_ref & domain;
> }
>
>+struct xe_force_wake_ref {
>+ struct xe_force_wake *fw;
>+ unsigned int domains;
>+};
>+
>+static struct xe_force_wake_ref
>+xe_force_wake_constructor(struct xe_force_wake *fw, unsigned int domains)
>+{
>+ struct xe_force_wake_ref fw_ref = { .fw = fw };
>+
>+ fw_ref.domains = xe_force_wake_get(fw, domains);
>+
>+ return fw_ref;
>+}
>+
>+DEFINE_CLASS(xe_force_wake, struct xe_force_wake_ref,
>+ xe_force_wake_put(_T.fw, _T.domains),
>+ xe_force_wake_constructor(fw, domains),
>+ struct xe_force_wake *fw, unsigned int domains);
>+
>+/*
>+ * Scoped helper for the forcewake class, using the same trick as scoped_guard()
>+ * to bind the lifetime to the next statement/block.
>+ */
>+#define xe_with_force_wake(ref, fw, domains) \
>+ for (CLASS(xe_force_wake, ref)(fw, domains), *done = NULL; \
>+ !done; done = (void *)1)
I think it would be good to use __UNIQUE_ID() for "done" here, to avoid
shadowing any existing variable from the outer scope. We could do it
with something like:
#define __xe_with_force_wake(ref, fw, domains, done) \
for (CLASS(xe_force_wake, ref)(fw, domains), *done = NULL; \
!done; done = (void *)1)
#define xe_with_force_wake(ref, fw, domains) \
__xe_with_force_wake(ref, fw, domains, __UNIQUE_ID(done))
--
Gustavo Sousa
>+
> #endif
>--
>2.51.1
>
^ permalink raw reply [flat|nested] 74+ messages in thread

* Re: [PATCH v2 04/30] drm/xe/forcewake: Add scope-based cleanup for forcewake
2025-11-12 20:00 ` Gustavo Sousa
@ 2025-11-12 21:01 ` Matt Roper
2025-11-12 21:16 ` Gustavo Sousa
1 sibling, 0 replies; 74+ messages in thread
From: Matt Roper @ 2025-11-12 21:01 UTC (permalink / raw)
To: Gustavo Sousa; +Cc: intel-xe, Michal Wajdeczko
On Wed, Nov 12, 2025 at 05:00:15PM -0300, Gustavo Sousa wrote:
> Quoting Matt Roper (2025-11-10 20:20:22-03:00)
> >Since forcewake uses a reference counting get/put model, there are many
> >places where we need to be careful to drop the forcewake reference when
> >bailing out of a function early on an error path. Add scope-based
> >cleanup options that can be used in place of explicit get/put to help
> >prevent mistakes in this area.
> >
> >Examples:
> >
> > CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
> >
> > Obtain forcewake on the XE_FW_GT domain and hold it until the
> > end of the current block. The wakeref will be dropped
> > automatically when the current scope is exited by any means
> > (return, break, reaching the end of the block, etc.).
>
> Looking further down in the patches, I see that fw_ref is only being
> used to check for success. I wish there was a way for us to do
> something like:
>
> if (xe_force_wake_require(gt_to_fw(gt), XE_FW_GT))
> return; /* ... or whatever that handles the force wake error. */
>
> , but I can't think of a way to define such a macro, since we need a
> variable in the scope where xe_force_wake_require() would be called for
> the cleanup to work properly.
There's also the complication that forcewake doesn't have a strict
success/failure result, but can also be partially successful. If you
request XE_FORCEWAKE_ALL, then it's possible that a subset of the
available domains will fail to wake, while others will not. In theory
the calling code can do something intelligent in this "partial success"
case based on which specific domain(s) did/didn't wake up (although in
reality I don't know if that ever happens today).
Personally I think it might be better to convert forcewake to a strict
success/fail model and eliminate partial successes as an option. But
that's a bit orthogonal to the scope-based cleanup, and one of the
changes we made in v2 here was to try to avoid adjusting the existing
forcewake interface like I was in v1.
Matt
>
> >
> > xe_with_force_wake(fw_ref, gt_to_fw(ss->gt), XE_FORCEWAKE_ALL) {
>
> In display code, we usually have with_intel_something(). I wonder if we
> should follow suit and use with_xe_force_wake().
>
> > ...
> > }
> >
> > Hold all forcewake domains for the following block. As with the
> > CLASS usage, forcewake will be dropped automatically when the
> > block is exited by any means.
> >
> >Use of these cleanup helpers should allow us to remove some ugly
> >goto-based error handling and help avoid mistakes in functions with lots
> >of early error exits.
> >
> >v2:
> > - Create a separate constructor that just wraps xe_force_wake_get for
> > use in the class. This eliminates the need to update the signature
> > of xe_force_wake_get(). (Michal)
> >
> >Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
> >Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
> >---
> > drivers/gpu/drm/xe/xe_force_wake.h | 28 ++++++++++++++++++++++++++++
> > 1 file changed, 28 insertions(+)
> >
> >diff --git a/drivers/gpu/drm/xe/xe_force_wake.h b/drivers/gpu/drm/xe/xe_force_wake.h
> >index 0e3e84bfa51c..0204503b53a0 100644
> >--- a/drivers/gpu/drm/xe/xe_force_wake.h
> >+++ b/drivers/gpu/drm/xe/xe_force_wake.h
> >@@ -61,4 +61,32 @@ xe_force_wake_ref_has_domain(unsigned int fw_ref, enum xe_force_wake_domains dom
> > return fw_ref & domain;
> > }
> >
> >+struct xe_force_wake_ref {
> >+ struct xe_force_wake *fw;
> >+ unsigned int domains;
> >+};
> >+
> >+static struct xe_force_wake_ref
> >+xe_force_wake_constructor(struct xe_force_wake *fw, unsigned int domains)
> >+{
> >+ struct xe_force_wake_ref fw_ref = { .fw = fw };
> >+
> >+ fw_ref.domains = xe_force_wake_get(fw, domains);
> >+
> >+ return fw_ref;
> >+}
> >+
> >+DEFINE_CLASS(xe_force_wake, struct xe_force_wake_ref,
> >+ xe_force_wake_put(_T.fw, _T.domains),
> >+ xe_force_wake_constructor(fw, domains),
> >+ struct xe_force_wake *fw, unsigned int domains);
> >+
> >+/*
> >+ * Scoped helper for the forcewake class, using the same trick as scoped_guard()
> >+ * to bind the lifetime to the next statement/block.
> >+ */
> >+#define xe_with_force_wake(ref, fw, domains) \
> >+ for (CLASS(xe_force_wake, ref)(fw, domains), *done = NULL; \
> >+ !done; done = (void *)1)
>
> I think it would be good to use __UNIQUE_ID() for "done" here, to avoid
> shadowing any existing variable from the outer scope. We could do it
> with something like:
>
> #define __xe_with_force_wake(ref, fw, domains, done) \
> for (CLASS(xe_force_wake, ref)(fw, domains), *done = NULL; \
> !done; done = (void *)1)
>
> #define xe_with_force_wake(ref, fw, domains) \
> __xe_with_force_wake(ref, fw, domains, __UNIQUE_ID(done))
>
> --
> Gustavo Sousa
>
> >+
> > #endif
> >--
> >2.51.1
> >
--
Matt Roper
Graphics Software Engineer
Linux GPU Platform Enablement
Intel Corporation
^ permalink raw reply [flat|nested] 74+ messages in thread

* Re: [PATCH v2 04/30] drm/xe/forcewake: Add scope-based cleanup for forcewake
2025-11-12 20:00 ` Gustavo Sousa
2025-11-12 21:01 ` Matt Roper
@ 2025-11-12 21:16 ` Gustavo Sousa
1 sibling, 0 replies; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-12 21:16 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper, Michal Wajdeczko
Quoting Gustavo Sousa (2025-11-12 17:00:15-03:00)
>Quoting Matt Roper (2025-11-10 20:20:22-03:00)
>>Since forcewake uses a reference counting get/put model, there are many
>>places where we need to be careful to drop the forcewake reference when
>>bailing out of a function early on an error path. Add scope-based
>>cleanup options that can be used in place of explicit get/put to help
>>prevent mistakes in this area.
>>
>>Examples:
>>
>> CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
>>
>> Obtain forcewake on the XE_FW_GT domain and hold it until the
>> end of the current block. The wakeref will be dropped
>> automatically when the current scope is exited by any means
>> (return, break, reaching the end of the block, etc.).
>
>Looking further down in the patches, I see that fw_ref is only being
>used to check for success. I wish there was a way for us to do
>something like:
>
> if (xe_force_wake_require(gt_to_fw(gt), XE_FW_GT))
> return; /* ... or whatever that handles the force wake error. */
>
>, but I can't think of a way to define such a macro, since we need a
>variable in the scope where xe_force_wake_require() would be called for
>the cleanup to work properly.
>
>>
>> xe_with_force_wake(fw_ref, gt_to_fw(ss->gt), XE_FORCEWAKE_ALL) {
>
>In display code, we usually have with_intel_something(). I wonder if we
>should follow suit and use with_xe_force_wake().
>
>> ...
>> }
>>
>> Hold all forcewake domains for the following block. As with the
>> CLASS usage, forcewake will be dropped automatically when the
>> block is exited by any means.
>>
>>Use of these cleanup helpers should allow us to remove some ugly
>>goto-based error handling and help avoid mistakes in functions with lots
>>of early error exits.
>>
>>v2:
>> - Create a separate constructor that just wraps xe_force_wake_get for
>> use in the class. This eliminates the need to update the signature
>> of xe_force_wake_get(). (Michal)
>>
>>Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
>>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
>>---
>> drivers/gpu/drm/xe/xe_force_wake.h | 28 ++++++++++++++++++++++++++++
>> 1 file changed, 28 insertions(+)
>>
>>diff --git a/drivers/gpu/drm/xe/xe_force_wake.h b/drivers/gpu/drm/xe/xe_force_wake.h
>>index 0e3e84bfa51c..0204503b53a0 100644
>>--- a/drivers/gpu/drm/xe/xe_force_wake.h
>>+++ b/drivers/gpu/drm/xe/xe_force_wake.h
>>@@ -61,4 +61,32 @@ xe_force_wake_ref_has_domain(unsigned int fw_ref, enum xe_force_wake_domains dom
>> return fw_ref & domain;
>> }
>>
>>+struct xe_force_wake_ref {
>>+ struct xe_force_wake *fw;
>>+ unsigned int domains;
>>+};
>>+
>>+static struct xe_force_wake_ref
>>+xe_force_wake_constructor(struct xe_force_wake *fw, unsigned int domains)
>>+{
>>+ struct xe_force_wake_ref fw_ref = { .fw = fw };
>>+
>>+ fw_ref.domains = xe_force_wake_get(fw, domains);
>>+
>>+ return fw_ref;
>>+}
>>+
>>+DEFINE_CLASS(xe_force_wake, struct xe_force_wake_ref,
>>+ xe_force_wake_put(_T.fw, _T.domains),
>>+ xe_force_wake_constructor(fw, domains),
>>+ struct xe_force_wake *fw, unsigned int domains);
Another thing: I think it would be nice to have a note in
xe_force_wake_get()'s documentation about using CLASS(xe_force_wake,
ref)(fw, domains) and xe_with_force_wake() when it makes sense.
--
Gustavo Sousa
>>+
>>+/*
>>+ * Scoped helper for the forcewake class, using the same trick as scoped_guard()
>>+ * to bind the lifetime to the next statement/block.
>>+ */
>>+#define xe_with_force_wake(ref, fw, domains) \
>>+ for (CLASS(xe_force_wake, ref)(fw, domains), *done = NULL; \
>>+ !done; done = (void *)1)
>
>I think it would be good to use __UNIQUE_ID() for "done" here, to avoid
>shadowing any existing variable from the outer scope. We could do it
>with something like:
>
> #define __xe_with_force_wake(ref, fw, domains, done) \
> for (CLASS(xe_force_wake, ref)(fw, domains), *done = NULL; \
> !done; done = (void *)1)
>
> #define xe_with_force_wake(ref, fw, domains) \
> __xe_with_force_wake(ref, fw, domains, __UNIQUE_ID(done))
>
>--
>Gustavo Sousa
>
>>+
>> #endif
>>--
>>2.51.1
>>
* [PATCH v2 05/30] drm/xe/pm: Add scope-based cleanup helper for runtime PM
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (3 preceding siblings ...)
2025-11-10 23:20 ` [PATCH v2 04/30] drm/xe/forcewake: Add scope-based cleanup for forcewake Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-12 19:53 ` Michal Wajdeczko
2025-11-10 23:20 ` [PATCH v2 06/30] drm/xe/gt: Use scope-based cleanup Matt Roper
` (29 subsequent siblings)
34 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Add scope-based helpers for runtime PM that may be used to simplify
cleanup logic and potentially avoid goto-based cleanup.
For example, using
guard(xe_pm_runtime)(xe);
will get runtime PM and cause a corresponding put to occur automatically
when the current scope is exited. 'xe_pm_runtime_noresume' can be used
as a guard replacement for the corresponding 'noresume' variant.
There's also an xe_pm_runtime_ioctl conditional guard that can be used
as a replacement for xe_pm_runtime_get_ioctl():
ACQUIRE(xe_pm_runtime_ioctl, pm)(xe);
if ((ret = ACQUIRE_ERR(xe_pm_runtime_ioctl, &pm)) < 0)
/* failed */
In a few rare cases (such as gt_reset_worker()) we need to ensure that
runtime PM is dropped when the function is exited by any means
(including error paths), but the function does not need to acquire
runtime PM because that has already been done earlier by a different
function. For these special cases, an 'xe_pm_runtime_release_only'
guard can be used to handle the release without doing an acquisition.
These guards will be used in future patches to eliminate some of our
goto-based cleanup.
v2:
- Specify the success condition for xe_pm_runtime_ioctl as _RET >= 0 so
that positive values are properly identified as success and trigger
destructor cleanup.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_pm.h | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_pm.h b/drivers/gpu/drm/xe/xe_pm.h
index f7f89a18b6fc..6b27039e7b2d 100644
--- a/drivers/gpu/drm/xe/xe_pm.h
+++ b/drivers/gpu/drm/xe/xe_pm.h
@@ -6,6 +6,7 @@
#ifndef _XE_PM_H_
#define _XE_PM_H_
+#include <linux/cleanup.h>
#include <linux/pm_runtime.h>
#define DEFAULT_VRAM_THRESHOLD 300 /* in MB */
@@ -37,4 +38,20 @@ int xe_pm_block_on_suspend(struct xe_device *xe);
void xe_pm_might_block_on_suspend(void);
int xe_pm_module_init(void);
+static inline void __xe_pm_runtime_noop(struct xe_device *xe) {}
+
+DEFINE_GUARD(xe_pm_runtime, struct xe_device *,
+ xe_pm_runtime_get(_T), xe_pm_runtime_put(_T))
+DEFINE_GUARD(xe_pm_runtime_noresume, struct xe_device *,
+ xe_pm_runtime_get_noresume(_T), xe_pm_runtime_put(_T))
+DEFINE_GUARD_COND(xe_pm_runtime, _ioctl, xe_pm_runtime_get_ioctl(_T), _RET >= 0)
+
+/*
+ * Used when a function needs to release runtime PM in all possible cases
+ * and error paths, but the wakeref was already acquired by a different
+ * function (i.e., get() has already happened so only a put() is needed).
+ */
+DEFINE_GUARD(xe_pm_runtime_release_only, struct xe_device *,
+ __xe_pm_runtime_noop(_T), xe_pm_runtime_put(_T));
+
#endif
--
2.51.1
* Re: [PATCH v2 05/30] drm/xe/pm: Add scope-based cleanup helper for runtime PM
2025-11-10 23:20 ` [PATCH v2 05/30] drm/xe/pm: Add scope-based cleanup helper for runtime PM Matt Roper
@ 2025-11-12 19:53 ` Michal Wajdeczko
2025-11-12 21:48 ` Gustavo Sousa
0 siblings, 1 reply; 74+ messages in thread
From: Michal Wajdeczko @ 2025-11-12 19:53 UTC (permalink / raw)
To: Matt Roper, intel-xe
On 11/11/2025 12:20 AM, Matt Roper wrote:
> Add a scope-based helpers for runtime PM that may be used to simplify
> cleanup logic and potentially avoid goto-based cleanup.
>
> For example, using
>
> guard(xe_pm_runtime)(xe);
>
for the record:
last year [1] use of DEFINE_GUARD for our RPM was considered almost as API abuse ;)
but now [2] [3] it is used exactly for such scenarios
[1] https://patchwork.freedesktop.org/patch/599366/?series=134955&rev=1
[2] https://elixir.bootlin.com/linux/v6.18-rc5/source/include/linux/pm_runtime.h#L617
[3] https://elixir.bootlin.com/linux/v6.18-rc5/source/drivers/tty/serial/8250/8250.h#L191
> will get runtime PM and cause a corresponding put to occur automatically
> when the current scope is exited. 'xe_pm_runtime_noresume' can be used
> as a guard replacement for the corresponding 'noresume' variant.
> There's also an xe_pm_runtime_ioctl conditional guard that can be used
> as a replacement for xe_runtime_ioctl():
>
> ACQUIRE(xe_pm_runtime_ioctl, pm)(xe);
> if ((ret = ACQUIRE_ERR(xe_pm_runtime_ioctl, &pm)) < 0)
> /* failed */
>
> In a few rare cases (such as gt_reset_worker()) we need to ensure that
> runtime PM is dropped when the function is exited by any means
> (including error paths), but the function does not need to acquire
> runtime PM because that has already been done earlier by a different
> function. For these special cases, an 'xe_pm_runtime_release_only'
> guard can be used to handle the release without doing an acquisition.
>
> These guards will be used in future patches to eliminate some of our
> goto-based cleanup.
>
> v2:
> - Specify success condition for xe_pm runtime_ioctl as _RET >= 0 so
> that positive values will be properly identified as success and
> trigger destructor cleanup properly.
>
> Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
> ---
> drivers/gpu/drm/xe/xe_pm.h | 17 +++++++++++++++++
> 1 file changed, 17 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_pm.h b/drivers/gpu/drm/xe/xe_pm.h
> index f7f89a18b6fc..6b27039e7b2d 100644
> --- a/drivers/gpu/drm/xe/xe_pm.h
> +++ b/drivers/gpu/drm/xe/xe_pm.h
> @@ -6,6 +6,7 @@
> #ifndef _XE_PM_H_
> #define _XE_PM_H_
>
> +#include <linux/cleanup.h>
> #include <linux/pm_runtime.h>
>
> #define DEFAULT_VRAM_THRESHOLD 300 /* in MB */
> @@ -37,4 +38,20 @@ int xe_pm_block_on_suspend(struct xe_device *xe);
> void xe_pm_might_block_on_suspend(void);
> int xe_pm_module_init(void);
>
> +static inline void __xe_pm_runtime_noop(struct xe_device *xe) {}
> +
> +DEFINE_GUARD(xe_pm_runtime, struct xe_device *,
> + xe_pm_runtime_get(_T), xe_pm_runtime_put(_T))
> +DEFINE_GUARD(xe_pm_runtime_noresume, struct xe_device *,
> + xe_pm_runtime_get_noresume(_T), xe_pm_runtime_put(_T))
> +DEFINE_GUARD_COND(xe_pm_runtime, _ioctl, xe_pm_runtime_get_ioctl(_T), _RET >= 0)
> +
> +/*
> + * Used when a function needs to release runtime PM in all possible cases
> + * and error paths, but the wakeref was already acquired by a different
> + * function (i.e., get() has already happened so only a put() is needed).
> + */
> +DEFINE_GUARD(xe_pm_runtime_release_only, struct xe_device *,
> + __xe_pm_runtime_noop(_T), xe_pm_runtime_put(_T));
maybe instead of defining the noop() helper, we can just skip this param:
DEFINE_GUARD(xe_pm_runtime_release_only, struct xe_device *, , xe_pm_runtime_put(_T));
> +
> #endif
either way,
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
* Re: [PATCH v2 05/30] drm/xe/pm: Add scope-based cleanup helper for runtime PM
2025-11-12 19:53 ` Michal Wajdeczko
@ 2025-11-12 21:48 ` Gustavo Sousa
0 siblings, 0 replies; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-12 21:48 UTC (permalink / raw)
To: Matt Roper, Michal Wajdeczko, intel-xe
Quoting Michal Wajdeczko (2025-11-12 16:53:46-03:00)
>
>
>On 11/11/2025 12:20 AM, Matt Roper wrote:
>> Add a scope-based helpers for runtime PM that may be used to simplify
>> cleanup logic and potentially avoid goto-based cleanup.
>>
>> For example, using
>>
>> guard(xe_pm_runtime)(xe);
>>
>
>for the record:
>
>last year [1] use of DEFINE_GUARD for our RPM was considered almost as API abuse ;)
>but now [2] [3] it is used exactly for such scenarios
>
>[1] https://patchwork.freedesktop.org/patch/599366/?series=134955&rev=1
>[2] https://elixir.bootlin.com/linux/v6.18-rc5/source/include/linux/pm_runtime.h#L617
>[3] https://elixir.bootlin.com/linux/v6.18-rc5/source/drivers/tty/serial/8250/8250.h#L191
If something breaks in the future, we are not the only ones to be
blamed? :-)
Maybe we should ask for an ack from people maintaining
include/linux/cleanup.h?
>
>> will get runtime PM and cause a corresponding put to occur automatically
>> when the current scope is exited. 'xe_pm_runtime_noresume' can be used
>> as a guard replacement for the corresponding 'noresume' variant.
>> There's also an xe_pm_runtime_ioctl conditional guard that can be used
>> as a replacement for xe_runtime_ioctl():
>>
>> ACQUIRE(xe_pm_runtime_ioctl, pm)(xe);
>> if ((ret = ACQUIRE_ERR(xe_pm_runtime_ioctl, &pm)) < 0)
>> /* failed */
>>
>> In a few rare cases (such as gt_reset_worker()) we need to ensure that
>> runtime PM is dropped when the function is exited by any means
>> (including error paths), but the function does not need to acquire
>> runtime PM because that has already been done earlier by a different
>> function. For these special cases, an 'xe_pm_runtime_release_only'
>> guard can be used to handle the release without doing an acquisition.
>>
>> These guards will be used in future patches to eliminate some of our
>> goto-based cleanup.
>>
>> v2:
>> - Specify success condition for xe_pm runtime_ioctl as _RET >= 0 so
>> that positive values will be properly identified as success and
>> trigger destructor cleanup properly.
>>
>> Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_pm.h | 17 +++++++++++++++++
>> 1 file changed, 17 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_pm.h b/drivers/gpu/drm/xe/xe_pm.h
>> index f7f89a18b6fc..6b27039e7b2d 100644
>> --- a/drivers/gpu/drm/xe/xe_pm.h
>> +++ b/drivers/gpu/drm/xe/xe_pm.h
>> @@ -6,6 +6,7 @@
>> #ifndef _XE_PM_H_
>> #define _XE_PM_H_
>>
>> +#include <linux/cleanup.h>
>> #include <linux/pm_runtime.h>
>>
>> #define DEFAULT_VRAM_THRESHOLD 300 /* in MB */
>> @@ -37,4 +38,20 @@ int xe_pm_block_on_suspend(struct xe_device *xe);
>> void xe_pm_might_block_on_suspend(void);
>> int xe_pm_module_init(void);
>>
>> +static inline void __xe_pm_runtime_noop(struct xe_device *xe) {}
>> +
>> +DEFINE_GUARD(xe_pm_runtime, struct xe_device *,
>> + xe_pm_runtime_get(_T), xe_pm_runtime_put(_T))
>> +DEFINE_GUARD(xe_pm_runtime_noresume, struct xe_device *,
>> + xe_pm_runtime_get_noresume(_T), xe_pm_runtime_put(_T))
>> +DEFINE_GUARD_COND(xe_pm_runtime, _ioctl, xe_pm_runtime_get_ioctl(_T), _RET >= 0)
It would also be good to document these in the kerneldoc for the
get variants.
--
Gustavo Sousa
>> +
>> +/*
>> + * Used when a function needs to release runtime PM in all possible cases
>> + * and error paths, but the wakeref was already acquired by a different
>> + * function (i.e., get() has already happened so only a put() is needed).
>> + */
>> +DEFINE_GUARD(xe_pm_runtime_release_only, struct xe_device *,
>> + __xe_pm_runtime_noop(_T), xe_pm_runtime_put(_T));
>
>maybe instead defining the noop() helper, we can just skip this param:
>
>DEFINE_GUARD(xe_pm_runtime_release_only, struct xe_device *, , xe_pm_runtime_put(_T));
>
>> +
>> #endif
>
>either way,
>
>Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
>
* [PATCH v2 06/30] drm/xe/gt: Use scope-based cleanup
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (4 preceding siblings ...)
2025-11-10 23:20 ` [PATCH v2 05/30] drm/xe/pm: Add scope-based cleanup helper for runtime PM Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-13 12:26 ` Gustavo Sousa
2025-11-10 23:20 ` [PATCH v2 07/30] drm/xe/gt_idle: " Matt Roper
` (28 subsequent siblings)
34 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Using scope-based cleanup for forcewake and runtime PM allows us to
reduce or eliminate some of the goto-based error handling and simplify
several functions.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_gt.c | 151 ++++++++++++-------------------------
1 file changed, 48 insertions(+), 103 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
index 6d479948bf21..e81674c40e57 100644
--- a/drivers/gpu/drm/xe/xe_gt.c
+++ b/drivers/gpu/drm/xe/xe_gt.c
@@ -103,14 +103,13 @@ void xe_gt_sanitize(struct xe_gt *gt)
static void xe_gt_enable_host_l2_vram(struct xe_gt *gt)
{
- unsigned int fw_ref;
u32 reg;
if (!XE_GT_WA(gt, 16023588340))
return;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
+ if (!fw_ref.domains)
return;
if (xe_gt_is_main_type(gt)) {
@@ -120,12 +119,10 @@ static void xe_gt_enable_host_l2_vram(struct xe_gt *gt)
}
xe_gt_mcr_multicast_write(gt, XEHPC_L3CLOS_MASK(3), 0xF);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
}
static void xe_gt_disable_host_l2_vram(struct xe_gt *gt)
{
- unsigned int fw_ref;
u32 reg;
if (!XE_GT_WA(gt, 16023588340))
@@ -134,15 +131,13 @@ static void xe_gt_disable_host_l2_vram(struct xe_gt *gt)
if (xe_gt_is_media_type(gt))
return;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
+ if (!fw_ref.domains)
return;
reg = xe_gt_mcr_unicast_read_any(gt, XE2_GAMREQSTRM_CTRL);
reg &= ~CG_DIS_CNTLBUS;
xe_gt_mcr_multicast_write(gt, XE2_GAMREQSTRM_CTRL, reg);
-
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
}
static void gt_reset_worker(struct work_struct *w);
@@ -389,7 +384,6 @@ int xe_gt_record_default_lrcs(struct xe_gt *gt)
int xe_gt_init_early(struct xe_gt *gt)
{
- unsigned int fw_ref;
int err;
if (IS_SRIOV_PF(gt_to_xe(gt))) {
@@ -436,13 +430,12 @@ int xe_gt_init_early(struct xe_gt *gt)
if (err)
return err;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
+ if (!fw_ref.domains)
return -ETIMEDOUT;
xe_gt_mcr_init_early(gt);
xe_pat_init(gt);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
return 0;
}
@@ -460,16 +453,15 @@ static void dump_pat_on_error(struct xe_gt *gt)
static int gt_init_with_gt_forcewake(struct xe_gt *gt)
{
- unsigned int fw_ref;
int err;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
+ if (!fw_ref.domains)
return -ETIMEDOUT;
err = xe_uc_init(>->uc);
if (err)
- goto err_force_wake;
+ return err;
xe_gt_topology_init(gt);
xe_gt_mcr_init(gt);
@@ -478,7 +470,7 @@ static int gt_init_with_gt_forcewake(struct xe_gt *gt)
if (xe_gt_is_main_type(gt)) {
err = xe_ggtt_init(gt_to_tile(gt)->mem.ggtt);
if (err)
- goto err_force_wake;
+ return err;
if (IS_SRIOV_PF(gt_to_xe(gt)))
xe_lmtt_init(>_to_tile(gt)->sriov.pf.lmtt);
}
@@ -492,17 +484,17 @@ static int gt_init_with_gt_forcewake(struct xe_gt *gt)
err = xe_hw_engines_init_early(gt);
if (err) {
dump_pat_on_error(gt);
- goto err_force_wake;
+ return err;
}
err = xe_hw_engine_class_sysfs_init(gt);
if (err)
- goto err_force_wake;
+ return err;
/* Initialize CCS mode sysfs after early initialization of HW engines */
err = xe_gt_ccs_mode_sysfs_init(gt);
if (err)
- goto err_force_wake;
+ return err;
/*
* Stash hardware-reported version. Since this register does not exist
@@ -510,25 +502,16 @@ static int gt_init_with_gt_forcewake(struct xe_gt *gt)
*/
gt->info.gmdid = xe_mmio_read32(>->mmio, GMD_ID);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
return 0;
-
-err_force_wake:
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
-
- return err;
}
static int gt_init_with_all_forcewake(struct xe_gt *gt)
{
- unsigned int fw_ref;
int err;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
- err = -ETIMEDOUT;
- goto err_force_wake;
- }
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL))
+ return -ETIMEDOUT;
xe_gt_mcr_set_implicit_defaults(gt);
xe_wa_process_gt(gt);
@@ -537,20 +520,20 @@ static int gt_init_with_all_forcewake(struct xe_gt *gt)
err = xe_gt_clock_init(gt);
if (err)
- goto err_force_wake;
+ return err;
xe_mocs_init(gt);
err = xe_execlist_init(gt);
if (err)
- goto err_force_wake;
+ return err;
err = xe_hw_engines_init(gt);
if (err)
- goto err_force_wake;
+ return err;
err = xe_uc_init_post_hwconfig(>->uc);
if (err)
- goto err_force_wake;
+ return err;
if (xe_gt_is_main_type(gt)) {
/*
@@ -561,10 +544,8 @@ static int gt_init_with_all_forcewake(struct xe_gt *gt)
gt->usm.bb_pool = xe_sa_bo_manager_init(gt_to_tile(gt),
IS_DGFX(xe) ? SZ_1M : SZ_512K, 16);
- if (IS_ERR(gt->usm.bb_pool)) {
- err = PTR_ERR(gt->usm.bb_pool);
- goto err_force_wake;
- }
+ if (IS_ERR(gt->usm.bb_pool))
+ return PTR_ERR(gt->usm.bb_pool);
}
}
@@ -573,12 +554,12 @@ static int gt_init_with_all_forcewake(struct xe_gt *gt)
err = xe_migrate_init(tile->migrate);
if (err)
- goto err_force_wake;
+ return err;
}
err = xe_uc_load_hw(>->uc);
if (err)
- goto err_force_wake;
+ return err;
/* Configure default CCS mode of 1 engine with all resources */
if (xe_gt_ccs_mode_enabled(gt)) {
@@ -592,14 +573,7 @@ static int gt_init_with_all_forcewake(struct xe_gt *gt)
if (IS_SRIOV_PF(gt_to_xe(gt)))
xe_gt_sriov_pf_init_hw(gt);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
-
return 0;
-
-err_force_wake:
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
-
- return err;
}
static void xe_gt_fini(void *arg)
@@ -819,15 +793,17 @@ static int do_gt_restart(struct xe_gt *gt)
static void gt_reset_worker(struct work_struct *w)
{
struct xe_gt *gt = container_of(w, typeof(*gt), reset.worker);
- unsigned int fw_ref;
int err;
+ /* Drop the existing runtime PM reference when exiting this function */
+ guard(xe_pm_runtime_release_only)(gt_to_xe(gt));
+
if (xe_device_wedged(gt_to_xe(gt)))
- goto err_pm_put;
+ return;
/* We only support GT resets with GuC submission */
if (!xe_device_uc_enabled(gt_to_xe(gt)))
- goto err_pm_put;
+ return;
xe_gt_info(gt, "reset started\n");
@@ -838,8 +814,8 @@ static void gt_reset_worker(struct work_struct *w)
xe_gt_sanitize(gt);
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL)) {
err = -ETIMEDOUT;
goto err_out;
}
@@ -863,25 +839,16 @@ static void gt_reset_worker(struct work_struct *w)
if (err)
goto err_out;
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
-
- /* Pair with get while enqueueing the work in xe_gt_reset_async() */
- xe_pm_runtime_put(gt_to_xe(gt));
-
xe_gt_info(gt, "reset done\n");
return;
err_out:
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
XE_WARN_ON(xe_uc_start(>->uc));
err_fail:
xe_gt_err(gt, "reset failed (%pe)\n", ERR_PTR(err));
xe_device_declare_wedged(gt_to_xe(gt));
-
-err_pm_put:
- xe_pm_runtime_put(gt_to_xe(gt));
}
void xe_gt_reset_async(struct xe_gt *gt)
@@ -902,56 +869,42 @@ void xe_gt_reset_async(struct xe_gt *gt)
void xe_gt_suspend_prepare(struct xe_gt *gt)
{
- unsigned int fw_ref;
-
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
-
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
xe_uc_suspend_prepare(>->uc);
-
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
}
int xe_gt_suspend(struct xe_gt *gt)
{
- unsigned int fw_ref;
int err;
xe_gt_dbg(gt, "suspending\n");
xe_gt_sanitize(gt);
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
- goto err_msg;
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL)) {
+ xe_gt_err(gt, "suspend failed (%pe)\n", ERR_PTR(-ETIMEDOUT));
+ return -ETIMEDOUT;
+ }
err = xe_uc_suspend(>->uc);
- if (err)
- goto err_force_wake;
+ if (err) {
+ xe_gt_err(gt, "suspend failed (%pe)\n", ERR_PTR(err));
+ return err;
+ }
xe_gt_idle_disable_pg(gt);
xe_gt_disable_host_l2_vram(gt);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
xe_gt_dbg(gt, "suspended\n");
return 0;
-
-err_msg:
- err = -ETIMEDOUT;
-err_force_wake:
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
- xe_gt_err(gt, "suspend failed (%pe)\n", ERR_PTR(err));
-
- return err;
}
void xe_gt_shutdown(struct xe_gt *gt)
{
- unsigned int fw_ref;
-
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
do_gt_reset(gt);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
}
/**
@@ -976,32 +929,24 @@ int xe_gt_sanitize_freq(struct xe_gt *gt)
int xe_gt_resume(struct xe_gt *gt)
{
- unsigned int fw_ref;
int err;
xe_gt_dbg(gt, "resuming\n");
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
- goto err_msg;
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL)) {
+ xe_gt_err(gt, "resume failed (%pe)\n", ERR_PTR(-ETIMEDOUT));
+ return -ETIMEDOUT;
+ }
err = do_gt_restart(gt);
if (err)
- goto err_force_wake;
+ return err;
xe_gt_idle_enable_pg(gt);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
xe_gt_dbg(gt, "resumed\n");
return 0;
-
-err_msg:
- err = -ETIMEDOUT;
-err_force_wake:
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
- xe_gt_err(gt, "resume failed (%pe)\n", ERR_PTR(err));
-
- return err;
}
struct xe_hw_engine *xe_gt_hw_engine(struct xe_gt *gt,
--
2.51.1
* Re: [PATCH v2 06/30] drm/xe/gt: Use scope-based cleanup
2025-11-10 23:20 ` [PATCH v2 06/30] drm/xe/gt: Use scope-based cleanup Matt Roper
@ 2025-11-13 12:26 ` Gustavo Sousa
2025-11-13 22:58 ` Matt Roper
0 siblings, 1 reply; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-13 12:26 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper
Quoting Matt Roper (2025-11-10 20:20:24-03:00)
>Using scope-based cleanup for forcewake and runtime PM allows us to
>reduce or eliminate some of the goto-based error handling and simplify
>several functions.
>
>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
I provided some suggestions below. With or without them,
Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
>---
> drivers/gpu/drm/xe/xe_gt.c | 151 ++++++++++++-------------------------
> 1 file changed, 48 insertions(+), 103 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
>index 6d479948bf21..e81674c40e57 100644
>--- a/drivers/gpu/drm/xe/xe_gt.c
>+++ b/drivers/gpu/drm/xe/xe_gt.c
>@@ -103,14 +103,13 @@ void xe_gt_sanitize(struct xe_gt *gt)
>
> static void xe_gt_enable_host_l2_vram(struct xe_gt *gt)
> {
>- unsigned int fw_ref;
> u32 reg;
>
> if (!XE_GT_WA(gt, 16023588340))
> return;
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>- if (!fw_ref)
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
>+ if (!fw_ref.domains)
> return;
>
> if (xe_gt_is_main_type(gt)) {
>@@ -120,12 +119,10 @@ static void xe_gt_enable_host_l2_vram(struct xe_gt *gt)
> }
>
> xe_gt_mcr_multicast_write(gt, XEHPC_L3CLOS_MASK(3), 0xF);
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> }
>
> static void xe_gt_disable_host_l2_vram(struct xe_gt *gt)
> {
>- unsigned int fw_ref;
> u32 reg;
>
> if (!XE_GT_WA(gt, 16023588340))
>@@ -134,15 +131,13 @@ static void xe_gt_disable_host_l2_vram(struct xe_gt *gt)
> if (xe_gt_is_media_type(gt))
> return;
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>- if (!fw_ref)
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
>+ if (!fw_ref.domains)
> return;
>
> reg = xe_gt_mcr_unicast_read_any(gt, XE2_GAMREQSTRM_CTRL);
> reg &= ~CG_DIS_CNTLBUS;
> xe_gt_mcr_multicast_write(gt, XE2_GAMREQSTRM_CTRL, reg);
>-
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> }
>
> static void gt_reset_worker(struct work_struct *w);
>@@ -389,7 +384,6 @@ int xe_gt_record_default_lrcs(struct xe_gt *gt)
>
> int xe_gt_init_early(struct xe_gt *gt)
> {
>- unsigned int fw_ref;
> int err;
>
> if (IS_SRIOV_PF(gt_to_xe(gt))) {
>@@ -436,13 +430,12 @@ int xe_gt_init_early(struct xe_gt *gt)
> if (err)
> return err;
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>- if (!fw_ref)
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
>+ if (!fw_ref.domains)
> return -ETIMEDOUT;
>
> xe_gt_mcr_init_early(gt);
> xe_pat_init(gt);
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
>
> return 0;
> }
>@@ -460,16 +453,15 @@ static void dump_pat_on_error(struct xe_gt *gt)
>
> static int gt_init_with_gt_forcewake(struct xe_gt *gt)
> {
>- unsigned int fw_ref;
> int err;
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>- if (!fw_ref)
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
>+ if (!fw_ref.domains)
> return -ETIMEDOUT;
>
> err = xe_uc_init(>->uc);
> if (err)
>- goto err_force_wake;
>+ return err;
>
> xe_gt_topology_init(gt);
> xe_gt_mcr_init(gt);
>@@ -478,7 +470,7 @@ static int gt_init_with_gt_forcewake(struct xe_gt *gt)
> if (xe_gt_is_main_type(gt)) {
> err = xe_ggtt_init(gt_to_tile(gt)->mem.ggtt);
> if (err)
>- goto err_force_wake;
>+ return err;
> if (IS_SRIOV_PF(gt_to_xe(gt)))
> xe_lmtt_init(>_to_tile(gt)->sriov.pf.lmtt);
> }
>@@ -492,17 +484,17 @@ static int gt_init_with_gt_forcewake(struct xe_gt *gt)
> err = xe_hw_engines_init_early(gt);
> if (err) {
> dump_pat_on_error(gt);
>- goto err_force_wake;
>+ return err;
> }
>
> err = xe_hw_engine_class_sysfs_init(gt);
> if (err)
>- goto err_force_wake;
>+ return err;
>
> /* Initialize CCS mode sysfs after early initialization of HW engines */
> err = xe_gt_ccs_mode_sysfs_init(gt);
> if (err)
>- goto err_force_wake;
>+ return err;
>
> /*
> * Stash hardware-reported version. Since this register does not exist
>@@ -510,25 +502,16 @@ static int gt_init_with_gt_forcewake(struct xe_gt *gt)
> */
> gt->info.gmdid = xe_mmio_read32(>->mmio, GMD_ID);
>
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> return 0;
>-
>-err_force_wake:
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
>-
>- return err;
> }
>
> static int gt_init_with_all_forcewake(struct xe_gt *gt)
> {
>- unsigned int fw_ref;
> int err;
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
>- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
>- err = -ETIMEDOUT;
>- goto err_force_wake;
>- }
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
>+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL))
>+ return -ETIMEDOUT;
>
> xe_gt_mcr_set_implicit_defaults(gt);
> xe_wa_process_gt(gt);
>@@ -537,20 +520,20 @@ static int gt_init_with_all_forcewake(struct xe_gt *gt)
>
> err = xe_gt_clock_init(gt);
> if (err)
>- goto err_force_wake;
>+ return err;
>
> xe_mocs_init(gt);
> err = xe_execlist_init(gt);
> if (err)
>- goto err_force_wake;
>+ return err;
>
> err = xe_hw_engines_init(gt);
> if (err)
>- goto err_force_wake;
>+ return err;
>
> err = xe_uc_init_post_hwconfig(>->uc);
> if (err)
>- goto err_force_wake;
>+ return err;
>
> if (xe_gt_is_main_type(gt)) {
> /*
>@@ -561,10 +544,8 @@ static int gt_init_with_all_forcewake(struct xe_gt *gt)
>
> gt->usm.bb_pool = xe_sa_bo_manager_init(gt_to_tile(gt),
> IS_DGFX(xe) ? SZ_1M : SZ_512K, 16);
>- if (IS_ERR(gt->usm.bb_pool)) {
>- err = PTR_ERR(gt->usm.bb_pool);
>- goto err_force_wake;
>- }
>+ if (IS_ERR(gt->usm.bb_pool))
>+ return PTR_ERR(gt->usm.bb_pool);
> }
> }
>
>@@ -573,12 +554,12 @@ static int gt_init_with_all_forcewake(struct xe_gt *gt)
>
> err = xe_migrate_init(tile->migrate);
> if (err)
>- goto err_force_wake;
>+ return err;
> }
>
> err = xe_uc_load_hw(>->uc);
> if (err)
>- goto err_force_wake;
>+ return err;
>
> /* Configure default CCS mode of 1 engine with all resources */
> if (xe_gt_ccs_mode_enabled(gt)) {
>@@ -592,14 +573,7 @@ static int gt_init_with_all_forcewake(struct xe_gt *gt)
> if (IS_SRIOV_PF(gt_to_xe(gt)))
> xe_gt_sriov_pf_init_hw(gt);
>
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
>-
> return 0;
>-
>-err_force_wake:
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
>-
>- return err;
> }
>
> static void xe_gt_fini(void *arg)
>@@ -819,15 +793,17 @@ static int do_gt_restart(struct xe_gt *gt)
> static void gt_reset_worker(struct work_struct *w)
> {
> struct xe_gt *gt = container_of(w, typeof(*gt), reset.worker);
>- unsigned int fw_ref;
> int err;
>
>+ /* Drop the existing runtime PM reference when exiting this function */
I believe the comment above is kind of already implied. I think I would
prefer to keep the previous comment, i.e.:
/* Pair with get while enqueueing the work in xe_gt_reset_async() */
, which adds info about where the ref came from. Hmm, maybe that's
implied as well? Not sure...
>+ guard(xe_pm_runtime_release_only)(gt_to_xe(gt));
>+
> if (xe_device_wedged(gt_to_xe(gt)))
>- goto err_pm_put;
>+ return;
>
> /* We only support GT resets with GuC submission */
> if (!xe_device_uc_enabled(gt_to_xe(gt)))
>- goto err_pm_put;
>+ return;
>
> xe_gt_info(gt, "reset started\n");
>
>@@ -838,8 +814,8 @@ static void gt_reset_worker(struct work_struct *w)
>
> xe_gt_sanitize(gt);
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
>- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
>+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL)) {
> err = -ETIMEDOUT;
> goto err_out;
> }
>@@ -863,25 +839,16 @@ static void gt_reset_worker(struct work_struct *w)
> if (err)
> goto err_out;
>
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
>-
>- /* Pair with get while enqueueing the work in xe_gt_reset_async() */
>- xe_pm_runtime_put(gt_to_xe(gt));
>-
> xe_gt_info(gt, "reset done\n");
>
> return;
>
> err_out:
Aren't we violating one of the expectations of cleanup.h here? Quoting
below:
* Lastly, given that the benefit of cleanup helpers is removal of
* "goto", and that the "goto" statement can jump between scopes, the
* expectation is that usage of "goto" and cleanup helpers is never
* mixed in the same function. I.e. for a given routine, convert all
* resources that need a "goto" cleanup to scope-based cleanup, or
* convert none of them.
That said, I do believe this is a bit too restrictive. In simple cases,
like this one, I believe the mix is okay.
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> XE_WARN_ON(xe_uc_start(&gt->uc));
>
> err_fail:
> xe_gt_err(gt, "reset failed (%pe)\n", ERR_PTR(err));
> xe_device_declare_wedged(gt_to_xe(gt));
>-
>-err_pm_put:
>- xe_pm_runtime_put(gt_to_xe(gt));
> }
>
> void xe_gt_reset_async(struct xe_gt *gt)
>@@ -902,56 +869,42 @@ void xe_gt_reset_async(struct xe_gt *gt)
>
> void xe_gt_suspend_prepare(struct xe_gt *gt)
> {
>- unsigned int fw_ref;
>-
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
>-
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
> xe_uc_suspend_prepare(&gt->uc);
>-
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> }
>
> int xe_gt_suspend(struct xe_gt *gt)
> {
>- unsigned int fw_ref;
> int err;
>
> xe_gt_dbg(gt, "suspending\n");
> xe_gt_sanitize(gt);
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
>- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
>- goto err_msg;
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
>+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL)) {
>+ xe_gt_err(gt, "suspend failed (%pe)\n", ERR_PTR(-ETIMEDOUT));
Since we are touching this part, we could add a more specific message
here. Something like "suspend failed due to force wake error: (%pe)\n".
The same suggestion also applies to xe_gt_resume().
>+ return -ETIMEDOUT;
>+ }
>
> err = xe_uc_suspend(&gt->uc);
>- if (err)
>- goto err_force_wake;
>+ if (err) {
>+ xe_gt_err(gt, "suspend failed (%pe)\n", ERR_PTR(err));
>+ return err;
>+ }
>
> xe_gt_idle_disable_pg(gt);
>
> xe_gt_disable_host_l2_vram(gt);
>
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> xe_gt_dbg(gt, "suspended\n");
>
> return 0;
>-
>-err_msg:
>- err = -ETIMEDOUT;
>-err_force_wake:
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
>- xe_gt_err(gt, "suspend failed (%pe)\n", ERR_PTR(err));
>-
>- return err;
> }
>
> void xe_gt_shutdown(struct xe_gt *gt)
> {
>- unsigned int fw_ref;
>-
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
> do_gt_reset(gt);
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> }
>
> /**
>@@ -976,32 +929,24 @@ int xe_gt_sanitize_freq(struct xe_gt *gt)
>
> int xe_gt_resume(struct xe_gt *gt)
> {
>- unsigned int fw_ref;
> int err;
>
> xe_gt_dbg(gt, "resuming\n");
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
>- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
>- goto err_msg;
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
>+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL)) {
>+ xe_gt_err(gt, "resume failed (%pe)\n", ERR_PTR(-ETIMEDOUT));
>+ return -ETIMEDOUT;
>+ }
>
> err = do_gt_restart(gt);
> if (err)
>- goto err_force_wake;
>+ return err;
>
> xe_gt_idle_enable_pg(gt);
>
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> xe_gt_dbg(gt, "resumed\n");
>
> return 0;
>-
>-err_msg:
>- err = -ETIMEDOUT;
>-err_force_wake:
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
>- xe_gt_err(gt, "resume failed (%pe)\n", ERR_PTR(err));
>-
>- return err;
> }
>
> struct xe_hw_engine *xe_gt_hw_engine(struct xe_gt *gt,
>--
>2.51.1
>
^ permalink raw reply [flat|nested] 74+ messages in thread

* Re: [PATCH v2 06/30] drm/xe/gt: Use scope-based cleanup
2025-11-13 12:26 ` Gustavo Sousa
@ 2025-11-13 22:58 ` Matt Roper
0 siblings, 0 replies; 74+ messages in thread
From: Matt Roper @ 2025-11-13 22:58 UTC (permalink / raw)
To: Gustavo Sousa; +Cc: intel-xe
On Thu, Nov 13, 2025 at 09:26:47AM -0300, Gustavo Sousa wrote:
> Quoting Matt Roper (2025-11-10 20:20:24-03:00)
> >Using scope-based cleanup for forcewake and runtime PM allows us to
> >reduce or eliminate some of the goto-based error handling and simplify
> >several functions.
> >
> >Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
>
> I provided some suggestions below. With or without them,
>
> Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
>
> >---
> > drivers/gpu/drm/xe/xe_gt.c | 151 ++++++++++++-------------------------
> > 1 file changed, 48 insertions(+), 103 deletions(-)
> >
> >diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
> >index 6d479948bf21..e81674c40e57 100644
> >--- a/drivers/gpu/drm/xe/xe_gt.c
> >+++ b/drivers/gpu/drm/xe/xe_gt.c
> >@@ -103,14 +103,13 @@ void xe_gt_sanitize(struct xe_gt *gt)
> >
> > static void xe_gt_enable_host_l2_vram(struct xe_gt *gt)
> > {
> >- unsigned int fw_ref;
> > u32 reg;
> >
> > if (!XE_GT_WA(gt, 16023588340))
> > return;
> >
> >- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
> >- if (!fw_ref)
> >+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
> >+ if (!fw_ref.domains)
> > return;
> >
> > if (xe_gt_is_main_type(gt)) {
> >@@ -120,12 +119,10 @@ static void xe_gt_enable_host_l2_vram(struct xe_gt *gt)
> > }
> >
> > xe_gt_mcr_multicast_write(gt, XEHPC_L3CLOS_MASK(3), 0xF);
> >- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> > }
> >
> > static void xe_gt_disable_host_l2_vram(struct xe_gt *gt)
> > {
> >- unsigned int fw_ref;
> > u32 reg;
> >
> > if (!XE_GT_WA(gt, 16023588340))
> >@@ -134,15 +131,13 @@ static void xe_gt_disable_host_l2_vram(struct xe_gt *gt)
> > if (xe_gt_is_media_type(gt))
> > return;
> >
> >- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
> >- if (!fw_ref)
> >+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
> >+ if (!fw_ref.domains)
> > return;
> >
> > reg = xe_gt_mcr_unicast_read_any(gt, XE2_GAMREQSTRM_CTRL);
> > reg &= ~CG_DIS_CNTLBUS;
> > xe_gt_mcr_multicast_write(gt, XE2_GAMREQSTRM_CTRL, reg);
> >-
> >- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> > }
> >
> > static void gt_reset_worker(struct work_struct *w);
> >@@ -389,7 +384,6 @@ int xe_gt_record_default_lrcs(struct xe_gt *gt)
> >
> > int xe_gt_init_early(struct xe_gt *gt)
> > {
> >- unsigned int fw_ref;
> > int err;
> >
> > if (IS_SRIOV_PF(gt_to_xe(gt))) {
> >@@ -436,13 +430,12 @@ int xe_gt_init_early(struct xe_gt *gt)
> > if (err)
> > return err;
> >
> >- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
> >- if (!fw_ref)
> >+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
> >+ if (!fw_ref.domains)
> > return -ETIMEDOUT;
> >
> > xe_gt_mcr_init_early(gt);
> > xe_pat_init(gt);
> >- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> >
> > return 0;
> > }
> >@@ -460,16 +453,15 @@ static void dump_pat_on_error(struct xe_gt *gt)
> >
> > static int gt_init_with_gt_forcewake(struct xe_gt *gt)
> > {
> >- unsigned int fw_ref;
> > int err;
> >
> >- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
> >- if (!fw_ref)
> >+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
> >+ if (!fw_ref.domains)
> > return -ETIMEDOUT;
> >
> > err = xe_uc_init(&gt->uc);
> > if (err)
> >- goto err_force_wake;
> >+ return err;
> >
> > xe_gt_topology_init(gt);
> > xe_gt_mcr_init(gt);
> >@@ -478,7 +470,7 @@ static int gt_init_with_gt_forcewake(struct xe_gt *gt)
> > if (xe_gt_is_main_type(gt)) {
> > err = xe_ggtt_init(gt_to_tile(gt)->mem.ggtt);
> > if (err)
> >- goto err_force_wake;
> >+ return err;
> > if (IS_SRIOV_PF(gt_to_xe(gt)))
> > xe_lmtt_init(&gt_to_tile(gt)->sriov.pf.lmtt);
> > }
> >@@ -492,17 +484,17 @@ static int gt_init_with_gt_forcewake(struct xe_gt *gt)
> > err = xe_hw_engines_init_early(gt);
> > if (err) {
> > dump_pat_on_error(gt);
> >- goto err_force_wake;
> >+ return err;
> > }
> >
> > err = xe_hw_engine_class_sysfs_init(gt);
> > if (err)
> >- goto err_force_wake;
> >+ return err;
> >
> > /* Initialize CCS mode sysfs after early initialization of HW engines */
> > err = xe_gt_ccs_mode_sysfs_init(gt);
> > if (err)
> >- goto err_force_wake;
> >+ return err;
> >
> > /*
> > * Stash hardware-reported version. Since this register does not exist
> >@@ -510,25 +502,16 @@ static int gt_init_with_gt_forcewake(struct xe_gt *gt)
> > */
> > gt->info.gmdid = xe_mmio_read32(&gt->mmio, GMD_ID);
> >
> >- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> > return 0;
> >-
> >-err_force_wake:
> >- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> >-
> >- return err;
> > }
> >
> > static int gt_init_with_all_forcewake(struct xe_gt *gt)
> > {
> >- unsigned int fw_ref;
> > int err;
> >
> >- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
> >- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
> >- err = -ETIMEDOUT;
> >- goto err_force_wake;
> >- }
> >+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
> >+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL))
> >+ return -ETIMEDOUT;
> >
> > xe_gt_mcr_set_implicit_defaults(gt);
> > xe_wa_process_gt(gt);
> >@@ -537,20 +520,20 @@ static int gt_init_with_all_forcewake(struct xe_gt *gt)
> >
> > err = xe_gt_clock_init(gt);
> > if (err)
> >- goto err_force_wake;
> >+ return err;
> >
> > xe_mocs_init(gt);
> > err = xe_execlist_init(gt);
> > if (err)
> >- goto err_force_wake;
> >+ return err;
> >
> > err = xe_hw_engines_init(gt);
> > if (err)
> >- goto err_force_wake;
> >+ return err;
> >
> > err = xe_uc_init_post_hwconfig(&gt->uc);
> > if (err)
> >- goto err_force_wake;
> >+ return err;
> >
> > if (xe_gt_is_main_type(gt)) {
> > /*
> >@@ -561,10 +544,8 @@ static int gt_init_with_all_forcewake(struct xe_gt *gt)
> >
> > gt->usm.bb_pool = xe_sa_bo_manager_init(gt_to_tile(gt),
> > IS_DGFX(xe) ? SZ_1M : SZ_512K, 16);
> >- if (IS_ERR(gt->usm.bb_pool)) {
> >- err = PTR_ERR(gt->usm.bb_pool);
> >- goto err_force_wake;
> >- }
> >+ if (IS_ERR(gt->usm.bb_pool))
> >+ return PTR_ERR(gt->usm.bb_pool);
> > }
> > }
> >
> >@@ -573,12 +554,12 @@ static int gt_init_with_all_forcewake(struct xe_gt *gt)
> >
> > err = xe_migrate_init(tile->migrate);
> > if (err)
> >- goto err_force_wake;
> >+ return err;
> > }
> >
> > err = xe_uc_load_hw(&gt->uc);
> > if (err)
> >- goto err_force_wake;
> >+ return err;
> >
> > /* Configure default CCS mode of 1 engine with all resources */
> > if (xe_gt_ccs_mode_enabled(gt)) {
> >@@ -592,14 +573,7 @@ static int gt_init_with_all_forcewake(struct xe_gt *gt)
> > if (IS_SRIOV_PF(gt_to_xe(gt)))
> > xe_gt_sriov_pf_init_hw(gt);
> >
> >- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> >-
> > return 0;
> >-
> >-err_force_wake:
> >- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> >-
> >- return err;
> > }
> >
> > static void xe_gt_fini(void *arg)
> >@@ -819,15 +793,17 @@ static int do_gt_restart(struct xe_gt *gt)
> > static void gt_reset_worker(struct work_struct *w)
> > {
> > struct xe_gt *gt = container_of(w, typeof(*gt), reset.worker);
> >- unsigned int fw_ref;
> > int err;
> >
> >+ /* Drop the existing runtime PM reference when exiting this function */
>
> I believe the comment above is kind of already implied. I think I would
> prefer to keep the previous comment, i.e.:
>
> /* Pair with get while enqueueing the work in xe_gt_reset_async() */
>
> , which adds info about where the ref came from. Hmm, maybe that's
> implied as well? Not sure...
>
> >+ guard(xe_pm_runtime_release_only)(gt_to_xe(gt));
> >+
> > if (xe_device_wedged(gt_to_xe(gt)))
> >- goto err_pm_put;
> >+ return;
> >
> > /* We only support GT resets with GuC submission */
> > if (!xe_device_uc_enabled(gt_to_xe(gt)))
> >- goto err_pm_put;
> >+ return;
> >
> > xe_gt_info(gt, "reset started\n");
> >
> >@@ -838,8 +814,8 @@ static void gt_reset_worker(struct work_struct *w)
> >
> > xe_gt_sanitize(gt);
> >
> >- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
> >- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
> >+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
> >+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL)) {
> > err = -ETIMEDOUT;
> > goto err_out;
> > }
> >@@ -863,25 +839,16 @@ static void gt_reset_worker(struct work_struct *w)
> > if (err)
> > goto err_out;
> >
> >- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> >-
> >- /* Pair with get while enqueueing the work in xe_gt_reset_async() */
> >- xe_pm_runtime_put(gt_to_xe(gt));
> >-
> > xe_gt_info(gt, "reset done\n");
> >
> > return;
> >
> > err_out:
>
> Aren't we violating one of the expectations of cleanup.h here? Quoting
> below:
>
> * Lastly, given that the benefit of cleanup helpers is removal of
> * "goto", and that the "goto" statement can jump between scopes, the
> * expectation is that usage of "goto" and cleanup helpers is never
> * mixed in the same function. I.e. for a given routine, convert all
> * resources that need a "goto" cleanup to scope-based cleanup, or
> * convert none of them.
>
> That said, I do believe this is a bit too restrictive. In simple cases,
> like this one, I believe the mix is okay.
It's probably safe, but you're right that we're still breaking that
rule, which isn't good. I initially intended to make other changes to
eliminate the goto's, but decided to hold off on those since they go a
bit beyond the intention of this series. I'll just drop the changes to
this function for now; once other restructuring happens I can revisit it
with a different patch series.
Matt
>
> >- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> > XE_WARN_ON(xe_uc_start(&gt->uc));
> >
> > err_fail:
> > xe_gt_err(gt, "reset failed (%pe)\n", ERR_PTR(err));
> > xe_device_declare_wedged(gt_to_xe(gt));
> >-
> >-err_pm_put:
> >- xe_pm_runtime_put(gt_to_xe(gt));
> > }
> >
> > void xe_gt_reset_async(struct xe_gt *gt)
> >@@ -902,56 +869,42 @@ void xe_gt_reset_async(struct xe_gt *gt)
> >
> > void xe_gt_suspend_prepare(struct xe_gt *gt)
> > {
> >- unsigned int fw_ref;
> >-
> >- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
> >-
> >+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
> > xe_uc_suspend_prepare(&gt->uc);
> >-
> >- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> > }
> >
> > int xe_gt_suspend(struct xe_gt *gt)
> > {
> >- unsigned int fw_ref;
> > int err;
> >
> > xe_gt_dbg(gt, "suspending\n");
> > xe_gt_sanitize(gt);
> >
> >- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
> >- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
> >- goto err_msg;
> >+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
> >+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL)) {
> >+ xe_gt_err(gt, "suspend failed (%pe)\n", ERR_PTR(-ETIMEDOUT));
>
> Since we are touching this part, we could add a more specific message
> here. Something like "suspend failed due to force wake error: (%pe)\n".
>
> The same suggestion also applies to xe_gt_resume().
>
> >+ return -ETIMEDOUT;
> >+ }
> >
> > err = xe_uc_suspend(&gt->uc);
> >- if (err)
> >- goto err_force_wake;
> >+ if (err) {
> >+ xe_gt_err(gt, "suspend failed (%pe)\n", ERR_PTR(err));
> >+ return err;
> >+ }
> >
> > xe_gt_idle_disable_pg(gt);
> >
> > xe_gt_disable_host_l2_vram(gt);
> >
> >- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> > xe_gt_dbg(gt, "suspended\n");
> >
> > return 0;
> >-
> >-err_msg:
> >- err = -ETIMEDOUT;
> >-err_force_wake:
> >- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> >- xe_gt_err(gt, "suspend failed (%pe)\n", ERR_PTR(err));
> >-
> >- return err;
> > }
> >
> > void xe_gt_shutdown(struct xe_gt *gt)
> > {
> >- unsigned int fw_ref;
> >-
> >- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
> >+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
> > do_gt_reset(gt);
> >- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> > }
> >
> > /**
> >@@ -976,32 +929,24 @@ int xe_gt_sanitize_freq(struct xe_gt *gt)
> >
> > int xe_gt_resume(struct xe_gt *gt)
> > {
> >- unsigned int fw_ref;
> > int err;
> >
> > xe_gt_dbg(gt, "resuming\n");
> >- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
> >- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
> >- goto err_msg;
> >+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
> >+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL)) {
> >+ xe_gt_err(gt, "resume failed (%pe)\n", ERR_PTR(-ETIMEDOUT));
> >+ return -ETIMEDOUT;
> >+ }
> >
> > err = do_gt_restart(gt);
> > if (err)
> >- goto err_force_wake;
> >+ return err;
> >
> > xe_gt_idle_enable_pg(gt);
> >
> >- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> > xe_gt_dbg(gt, "resumed\n");
> >
> > return 0;
> >-
> >-err_msg:
> >- err = -ETIMEDOUT;
> >-err_force_wake:
> >- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> >- xe_gt_err(gt, "resume failed (%pe)\n", ERR_PTR(err));
> >-
> >- return err;
> > }
> >
> > struct xe_hw_engine *xe_gt_hw_engine(struct xe_gt *gt,
> >--
> >2.51.1
> >
--
Matt Roper
Graphics Software Engineer
Linux GPU Platform Enablement
Intel Corporation
^ permalink raw reply [flat|nested] 74+ messages in thread
* [PATCH v2 07/30] drm/xe/gt_idle: Use scope-based cleanup
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (5 preceding siblings ...)
2025-11-10 23:20 ` [PATCH v2 06/30] drm/xe/gt: Use scope-based cleanup Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-13 12:39 ` Gustavo Sousa
2025-11-10 23:20 ` [PATCH v2 08/30] drm/xe/guc: " Matt Roper
` (27 subsequent siblings)
34 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based cleanup for runtime PM and forcewake in the GT idle
code.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_gt_idle.c | 32 +++++++++-----------------------
1 file changed, 9 insertions(+), 23 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_gt_idle.c b/drivers/gpu/drm/xe/xe_gt_idle.c
index bdc9d9877ec4..6a63b7ad69a7 100644
--- a/drivers/gpu/drm/xe/xe_gt_idle.c
+++ b/drivers/gpu/drm/xe/xe_gt_idle.c
@@ -103,7 +103,6 @@ void xe_gt_idle_enable_pg(struct xe_gt *gt)
struct xe_gt_idle *gtidle = &gt->gtidle;
struct xe_mmio *mmio = &gt->mmio;
u32 vcs_mask, vecs_mask;
- unsigned int fw_ref;
int i, j;
if (IS_SRIOV_VF(xe))
@@ -135,7 +134,7 @@ void xe_gt_idle_enable_pg(struct xe_gt *gt)
}
}
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
if (xe->info.skip_guc_pc) {
/*
* GuC sets the hysteresis value when GuC PC is enabled
@@ -146,13 +145,11 @@ void xe_gt_idle_enable_pg(struct xe_gt *gt)
}
xe_mmio_write32(mmio, POWERGATE_ENABLE, gtidle->powergate_enable);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
}
void xe_gt_idle_disable_pg(struct xe_gt *gt)
{
struct xe_gt_idle *gtidle = >->gtidle;
- unsigned int fw_ref;
if (IS_SRIOV_VF(gt_to_xe(gt)))
return;
@@ -160,9 +157,8 @@ void xe_gt_idle_disable_pg(struct xe_gt *gt)
xe_device_assert_mem_access(gt_to_xe(gt));
gtidle->powergate_enable = 0;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
xe_mmio_write32(&gt->mmio, POWERGATE_ENABLE, gtidle->powergate_enable);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
}
/**
@@ -181,7 +177,6 @@ int xe_gt_idle_pg_print(struct xe_gt *gt, struct drm_printer *p)
enum xe_gt_idle_state state;
u32 pg_enabled, pg_status = 0;
u32 vcs_mask, vecs_mask;
- unsigned int fw_ref;
int n;
/*
* Media Slices
@@ -218,14 +213,12 @@ int xe_gt_idle_pg_print(struct xe_gt *gt, struct drm_printer *p)
/* Do not wake the GT to read powergating status */
if (state != GT_IDLE_C6) {
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
+ if (!fw_ref.domains)
return -ETIMEDOUT;
pg_enabled = xe_mmio_read32(&gt->mmio, POWERGATE_ENABLE);
pg_status = xe_mmio_read32(&gt->mmio, POWERGATE_DOMAIN_STATUS);
-
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
}
if (gt->info.engine_mask & XE_HW_ENGINE_RCS_MASK) {
@@ -265,9 +258,8 @@ static ssize_t name_show(struct kobject *kobj,
struct xe_guc_pc *pc = gtidle_to_pc(gtidle);
ssize_t ret;
- xe_pm_runtime_get(pc_to_xe(pc));
+ guard(xe_pm_runtime)(pc_to_xe(pc));
ret = sysfs_emit(buff, "%s\n", gtidle->name);
- xe_pm_runtime_put(pc_to_xe(pc));
return ret;
}
@@ -281,9 +273,8 @@ static ssize_t idle_status_show(struct kobject *kobj,
struct xe_guc_pc *pc = gtidle_to_pc(gtidle);
enum xe_gt_idle_state state;
- xe_pm_runtime_get(pc_to_xe(pc));
+ guard(xe_pm_runtime)(pc_to_xe(pc));
state = gtidle->idle_status(pc);
- xe_pm_runtime_put(pc_to_xe(pc));
return sysfs_emit(buff, "%s\n", gt_idle_state_to_string(state));
}
@@ -311,9 +302,8 @@ static ssize_t idle_residency_ms_show(struct kobject *kobj,
struct xe_guc_pc *pc = gtidle_to_pc(gtidle);
u64 residency;
- xe_pm_runtime_get(pc_to_xe(pc));
+ guard(xe_pm_runtime)(pc_to_xe(pc));
residency = xe_gt_idle_residency_msec(gtidle);
- xe_pm_runtime_put(pc_to_xe(pc));
return sysfs_emit(buff, "%llu\n", residency);
}
@@ -396,21 +386,17 @@ void xe_gt_idle_enable_c6(struct xe_gt *gt)
int xe_gt_idle_disable_c6(struct xe_gt *gt)
{
- unsigned int fw_ref;
-
xe_device_assert_mem_access(gt_to_xe(gt));
if (IS_SRIOV_VF(gt_to_xe(gt)))
return 0;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
+ if (!fw_ref.domains)
return -ETIMEDOUT;
xe_mmio_write32(&gt->mmio, RC_CONTROL, 0);
xe_mmio_write32(&gt->mmio, RC_STATE, 0);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
-
return 0;
}
--
2.51.1
^ permalink raw reply related [flat|nested] 74+ messages in thread

* Re: [PATCH v2 07/30] drm/xe/gt_idle: Use scope-based cleanup
2025-11-10 23:20 ` [PATCH v2 07/30] drm/xe/gt_idle: " Matt Roper
@ 2025-11-13 12:39 ` Gustavo Sousa
0 siblings, 0 replies; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-13 12:39 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper
Quoting Matt Roper (2025-11-10 20:20:25-03:00)
>Use scope-based cleanup for runtime PM and forcewake in the GT idle
>code.
>
>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
>---
> drivers/gpu/drm/xe/xe_gt_idle.c | 32 +++++++++-----------------------
> 1 file changed, 9 insertions(+), 23 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/xe_gt_idle.c b/drivers/gpu/drm/xe/xe_gt_idle.c
>index bdc9d9877ec4..6a63b7ad69a7 100644
>--- a/drivers/gpu/drm/xe/xe_gt_idle.c
>+++ b/drivers/gpu/drm/xe/xe_gt_idle.c
>@@ -103,7 +103,6 @@ void xe_gt_idle_enable_pg(struct xe_gt *gt)
> struct xe_gt_idle *gtidle = &gt->gtidle;
> struct xe_mmio *mmio = &gt->mmio;
> u32 vcs_mask, vecs_mask;
>- unsigned int fw_ref;
> int i, j;
>
> if (IS_SRIOV_VF(xe))
>@@ -135,7 +134,7 @@ void xe_gt_idle_enable_pg(struct xe_gt *gt)
> }
> }
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
> if (xe->info.skip_guc_pc) {
> /*
> * GuC sets the hysteresis value when GuC PC is enabled
>@@ -146,13 +145,11 @@ void xe_gt_idle_enable_pg(struct xe_gt *gt)
> }
>
> xe_mmio_write32(mmio, POWERGATE_ENABLE, gtidle->powergate_enable);
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> }
>
> void xe_gt_idle_disable_pg(struct xe_gt *gt)
> {
> struct xe_gt_idle *gtidle = >->gtidle;
>- unsigned int fw_ref;
>
> if (IS_SRIOV_VF(gt_to_xe(gt)))
> return;
>@@ -160,9 +157,8 @@ void xe_gt_idle_disable_pg(struct xe_gt *gt)
> xe_device_assert_mem_access(gt_to_xe(gt));
> gtidle->powergate_enable = 0;
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
> xe_mmio_write32(&gt->mmio, POWERGATE_ENABLE, gtidle->powergate_enable);
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> }
>
> /**
>@@ -181,7 +177,6 @@ int xe_gt_idle_pg_print(struct xe_gt *gt, struct drm_printer *p)
> enum xe_gt_idle_state state;
> u32 pg_enabled, pg_status = 0;
> u32 vcs_mask, vecs_mask;
>- unsigned int fw_ref;
> int n;
> /*
> * Media Slices
>@@ -218,14 +213,12 @@ int xe_gt_idle_pg_print(struct xe_gt *gt, struct drm_printer *p)
>
> /* Do not wake the GT to read powergating status */
> if (state != GT_IDLE_C6) {
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>- if (!fw_ref)
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
>+ if (!fw_ref.domains)
> return -ETIMEDOUT;
>
> pg_enabled = xe_mmio_read32(&gt->mmio, POWERGATE_ENABLE);
> pg_status = xe_mmio_read32(&gt->mmio, POWERGATE_DOMAIN_STATUS);
>-
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> }
>
> if (gt->info.engine_mask & XE_HW_ENGINE_RCS_MASK) {
>@@ -265,9 +258,8 @@ static ssize_t name_show(struct kobject *kobj,
> struct xe_guc_pc *pc = gtidle_to_pc(gtidle);
> ssize_t ret;
>
>- xe_pm_runtime_get(pc_to_xe(pc));
>+ guard(xe_pm_runtime)(pc_to_xe(pc));
> ret = sysfs_emit(buff, "%s\n", gtidle->name);
>- xe_pm_runtime_put(pc_to_xe(pc));
>
> return ret;
> }
>@@ -281,9 +273,8 @@ static ssize_t idle_status_show(struct kobject *kobj,
> struct xe_guc_pc *pc = gtidle_to_pc(gtidle);
> enum xe_gt_idle_state state;
>
>- xe_pm_runtime_get(pc_to_xe(pc));
>+ guard(xe_pm_runtime)(pc_to_xe(pc));
> state = gtidle->idle_status(pc);
>- xe_pm_runtime_put(pc_to_xe(pc));
>
> return sysfs_emit(buff, "%s\n", gt_idle_state_to_string(state));
For this and also idle_residency_ms_show(): I wonder if I would prefer to
use scoped_guard() before calling into sysfs_emit()...
Anyways,
Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
> }
>@@ -311,9 +302,8 @@ static ssize_t idle_residency_ms_show(struct kobject *kobj,
> struct xe_guc_pc *pc = gtidle_to_pc(gtidle);
> u64 residency;
>
>- xe_pm_runtime_get(pc_to_xe(pc));
>+ guard(xe_pm_runtime)(pc_to_xe(pc));
> residency = xe_gt_idle_residency_msec(gtidle);
>- xe_pm_runtime_put(pc_to_xe(pc));
>
> return sysfs_emit(buff, "%llu\n", residency);
> }
>@@ -396,21 +386,17 @@ void xe_gt_idle_enable_c6(struct xe_gt *gt)
>
> int xe_gt_idle_disable_c6(struct xe_gt *gt)
> {
>- unsigned int fw_ref;
>-
> xe_device_assert_mem_access(gt_to_xe(gt));
>
> if (IS_SRIOV_VF(gt_to_xe(gt)))
> return 0;
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>- if (!fw_ref)
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
>+ if (!fw_ref.domains)
> return -ETIMEDOUT;
>
> xe_mmio_write32(&gt->mmio, RC_CONTROL, 0);
> xe_mmio_write32(&gt->mmio, RC_STATE, 0);
>
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
>-
> return 0;
> }
>--
>2.51.1
>
^ permalink raw reply [flat|nested] 74+ messages in thread
* [PATCH v2 08/30] drm/xe/guc: Use scope-based cleanup
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (6 preceding siblings ...)
2025-11-10 23:20 ` [PATCH v2 07/30] drm/xe/gt_idle: " Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-13 12:46 ` Gustavo Sousa
2025-11-10 23:20 ` [PATCH v2 09/30] drm/xe/guc_pc: " Matt Roper
` (26 subsequent siblings)
34 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based cleanup for forcewake and runtime PM.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_guc.c | 13 ++++---------
drivers/gpu/drm/xe/xe_guc_log.c | 10 ++++------
drivers/gpu/drm/xe/xe_guc_submit.c | 11 +++--------
drivers/gpu/drm/xe/xe_guc_tlb_inval.c | 4 +---
4 files changed, 12 insertions(+), 26 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_guc.c b/drivers/gpu/drm/xe/xe_guc.c
index ecc3e091b89e..e47292b2aab0 100644
--- a/drivers/gpu/drm/xe/xe_guc.c
+++ b/drivers/gpu/drm/xe/xe_guc.c
@@ -658,11 +658,9 @@ static void guc_fini_hw(void *arg)
{
struct xe_guc *guc = arg;
struct xe_gt *gt = guc_to_gt(guc);
- unsigned int fw_ref;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
- xe_uc_sanitize_reset(&guc_to_gt(guc)->uc);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_with_force_wake(fw_ref, gt_to_fw(gt), XE_FORCEWAKE_ALL)
+ xe_uc_sanitize_reset(&guc_to_gt(guc)->uc);
guc_g2g_fini(guc);
}
@@ -1610,15 +1608,14 @@ int xe_guc_start(struct xe_guc *guc)
void xe_guc_print_info(struct xe_guc *guc, struct drm_printer *p)
{
struct xe_gt *gt = guc_to_gt(guc);
- unsigned int fw_ref;
u32 status;
int i;
xe_uc_fw_print(&guc->fw, p);
if (!IS_SRIOV_VF(gt_to_xe(gt))) {
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
+ if (!fw_ref.domains)
return;
status = xe_mmio_read32(&gt->mmio, GUC_STATUS);
@@ -1638,8 +1635,6 @@ void xe_guc_print_info(struct xe_guc *guc, struct drm_printer *p)
drm_printf(p, "\t%2d: \t0x%x\n",
i, xe_mmio_read32(&gt->mmio, SOFT_SCRATCH(i)));
}
-
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
}
drm_puts(p, "\n");
diff --git a/drivers/gpu/drm/xe/xe_guc_log.c b/drivers/gpu/drm/xe/xe_guc_log.c
index c01ccb35dc75..0c704a11078a 100644
--- a/drivers/gpu/drm/xe/xe_guc_log.c
+++ b/drivers/gpu/drm/xe/xe_guc_log.c
@@ -145,7 +145,6 @@ struct xe_guc_log_snapshot *xe_guc_log_snapshot_capture(struct xe_guc_log *log,
struct xe_device *xe = log_to_xe(log);
struct xe_guc *guc = log_to_guc(log);
struct xe_gt *gt = log_to_gt(log);
- unsigned int fw_ref;
size_t remain;
int i;
@@ -165,13 +164,12 @@ struct xe_guc_log_snapshot *xe_guc_log_snapshot_capture(struct xe_guc_log *log,
remain -= size;
}
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref) {
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
+ if (!fw_ref.domains)
snapshot->stamp = ~0ULL;
- } else {
+ else
snapshot->stamp = xe_mmio_read64_2x32(&gt->mmio, GUC_PMTIMESTAMP_LO);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
- }
+
snapshot->ktime = ktime_get_boottime_ns();
snapshot->level = log->level;
snapshot->ver_found = guc->fw.versions.found[XE_UC_FW_VER_RELEASE];
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index d4ffdb71ef3d..7e0882074a99 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -1225,7 +1225,6 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
struct xe_guc *guc = exec_queue_to_guc(q);
const char *process_name = "no process";
struct xe_device *xe = guc_to_xe(guc);
- unsigned int fw_ref;
int err = -ETIME;
pid_t pid = -1;
int i = 0;
@@ -1258,13 +1257,11 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
if (!exec_queue_killed(q) && !xe->devcoredump.captured &&
!xe_guc_capture_get_matching_and_lock(q)) {
/* take force wake before engine register manual capture */
- fw_ref = xe_force_wake_get(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL))
xe_gt_info(q->gt, "failed to get forcewake for coredump capture\n");
xe_engine_snapshot_capture_for_queue(q);
-
- xe_force_wake_put(gt_to_fw(q->gt), fw_ref);
}
/*
@@ -1455,7 +1452,7 @@ static void __guc_exec_queue_destroy_async(struct work_struct *w)
struct xe_exec_queue *q = ge->q;
struct xe_guc *guc = exec_queue_to_guc(q);
- xe_pm_runtime_get(guc_to_xe(guc));
+ guard(xe_pm_runtime)(guc_to_xe(guc));
trace_xe_exec_queue_destroy(q);
if (xe_exec_queue_is_lr(q))
@@ -1464,8 +1461,6 @@ static void __guc_exec_queue_destroy_async(struct work_struct *w)
cancel_delayed_work_sync(&ge->sched.base.work_tdr);
xe_exec_queue_fini(q);
-
- xe_pm_runtime_put(guc_to_xe(guc));
}
static void guc_exec_queue_destroy_async(struct xe_exec_queue *q)
diff --git a/drivers/gpu/drm/xe/xe_guc_tlb_inval.c b/drivers/gpu/drm/xe/xe_guc_tlb_inval.c
index a80175c7c478..848d3493df10 100644
--- a/drivers/gpu/drm/xe/xe_guc_tlb_inval.c
+++ b/drivers/gpu/drm/xe/xe_guc_tlb_inval.c
@@ -71,12 +71,11 @@ static int send_tlb_inval_ggtt(struct xe_tlb_inval *tlb_inval, u32 seqno)
return send_tlb_inval(guc, action, ARRAY_SIZE(action));
} else if (xe_device_uc_enabled(xe) && !xe_device_wedged(xe)) {
struct xe_mmio *mmio = &gt->mmio;
- unsigned int fw_ref;
if (IS_SRIOV_VF(xe))
return -ECANCELED;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
if (xe->info.platform == XE_PVC || GRAPHICS_VER(xe) >= 20) {
xe_mmio_write32(mmio, PVC_GUC_TLB_INV_DESC1,
PVC_GUC_TLB_INV_DESC1_INVALIDATE);
@@ -86,7 +85,6 @@ static int send_tlb_inval_ggtt(struct xe_tlb_inval *tlb_inval, u32 seqno)
xe_mmio_write32(mmio, GUC_TLB_INV_CR,
GUC_TLB_INV_CR_INVALIDATE);
}
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
}
return -ECANCELED;
--
2.51.1
^ permalink raw reply related [flat|nested] 74+ messages in thread
* Re: [PATCH v2 08/30] drm/xe/guc: Use scope-based cleanup
2025-11-10 23:20 ` [PATCH v2 08/30] drm/xe/guc: " Matt Roper
@ 2025-11-13 12:46 ` Gustavo Sousa
0 siblings, 0 replies; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-13 12:46 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper
Quoting Matt Roper (2025-11-10 20:20:26-03:00)
>Use scope-based cleanup for forcewake and runtime PM.
>
>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
>---
> drivers/gpu/drm/xe/xe_guc.c | 13 ++++---------
> drivers/gpu/drm/xe/xe_guc_log.c | 10 ++++------
> drivers/gpu/drm/xe/xe_guc_submit.c | 11 +++--------
> drivers/gpu/drm/xe/xe_guc_tlb_inval.c | 4 +---
> 4 files changed, 12 insertions(+), 26 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/xe_guc.c b/drivers/gpu/drm/xe/xe_guc.c
>index ecc3e091b89e..e47292b2aab0 100644
>--- a/drivers/gpu/drm/xe/xe_guc.c
>+++ b/drivers/gpu/drm/xe/xe_guc.c
>@@ -658,11 +658,9 @@ static void guc_fini_hw(void *arg)
> {
> struct xe_guc *guc = arg;
> struct xe_gt *gt = guc_to_gt(guc);
>- unsigned int fw_ref;
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
>- xe_uc_sanitize_reset(&guc_to_gt(guc)->uc);
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
>+ xe_with_force_wake(fw_ref, gt_to_fw(gt), XE_FORCEWAKE_ALL)
>+ xe_uc_sanitize_reset(&guc_to_gt(guc)->uc);
>
> guc_g2g_fini(guc);
> }
>@@ -1610,15 +1608,14 @@ int xe_guc_start(struct xe_guc *guc)
> void xe_guc_print_info(struct xe_guc *guc, struct drm_printer *p)
> {
> struct xe_gt *gt = guc_to_gt(guc);
>- unsigned int fw_ref;
> u32 status;
> int i;
>
> xe_uc_fw_print(&guc->fw, p);
>
> if (!IS_SRIOV_VF(gt_to_xe(gt))) {
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>- if (!fw_ref)
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
>+ if (!fw_ref.domains)
> return;
>
> status = xe_mmio_read32(&gt->mmio, GUC_STATUS);
>@@ -1638,8 +1635,6 @@ void xe_guc_print_info(struct xe_guc *guc, struct drm_printer *p)
> drm_printf(p, "\t%2d: \t0x%x\n",
> i, xe_mmio_read32(&gt->mmio, SOFT_SCRATCH(i)));
> }
>-
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> }
>
> drm_puts(p, "\n");
>diff --git a/drivers/gpu/drm/xe/xe_guc_log.c b/drivers/gpu/drm/xe/xe_guc_log.c
>index c01ccb35dc75..0c704a11078a 100644
>--- a/drivers/gpu/drm/xe/xe_guc_log.c
>+++ b/drivers/gpu/drm/xe/xe_guc_log.c
>@@ -145,7 +145,6 @@ struct xe_guc_log_snapshot *xe_guc_log_snapshot_capture(struct xe_guc_log *log,
> struct xe_device *xe = log_to_xe(log);
> struct xe_guc *guc = log_to_guc(log);
> struct xe_gt *gt = log_to_gt(log);
>- unsigned int fw_ref;
> size_t remain;
> int i;
>
>@@ -165,13 +164,12 @@ struct xe_guc_log_snapshot *xe_guc_log_snapshot_capture(struct xe_guc_log *log,
> remain -= size;
> }
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>- if (!fw_ref) {
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
>+ if (!fw_ref.domains)
> snapshot->stamp = ~0ULL;
>- } else {
>+ else
> snapshot->stamp = xe_mmio_read64_2x32(&gt->mmio, GUC_PMTIMESTAMP_LO);
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
>- }
>+
> snapshot->ktime = ktime_get_boottime_ns();
> snapshot->level = log->level;
> snapshot->ver_found = guc->fw.versions.found[XE_UC_FW_VER_RELEASE];
>diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
>index d4ffdb71ef3d..7e0882074a99 100644
>--- a/drivers/gpu/drm/xe/xe_guc_submit.c
>+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
>@@ -1225,7 +1225,6 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
> struct xe_guc *guc = exec_queue_to_guc(q);
> const char *process_name = "no process";
> struct xe_device *xe = guc_to_xe(guc);
>- unsigned int fw_ref;
> int err = -ETIME;
> pid_t pid = -1;
> int i = 0;
>@@ -1258,13 +1257,11 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
> if (!exec_queue_killed(q) && !xe->devcoredump.captured &&
> !xe_guc_capture_get_matching_and_lock(q)) {
> /* take force wake before engine register manual capture */
>- fw_ref = xe_force_wake_get(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
>- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
>+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL))
> xe_gt_info(q->gt, "failed to get forcewake for coredump capture\n");
>
> xe_engine_snapshot_capture_for_queue(q);
>-
>- xe_force_wake_put(gt_to_fw(q->gt), fw_ref);
> }
>
> /*
>@@ -1455,7 +1452,7 @@ static void __guc_exec_queue_destroy_async(struct work_struct *w)
> struct xe_exec_queue *q = ge->q;
> struct xe_guc *guc = exec_queue_to_guc(q);
>
>- xe_pm_runtime_get(guc_to_xe(guc));
>+ guard(xe_pm_runtime)(guc_to_xe(guc));
> trace_xe_exec_queue_destroy(q);
>
> if (xe_exec_queue_is_lr(q))
>@@ -1464,8 +1461,6 @@ static void __guc_exec_queue_destroy_async(struct work_struct *w)
> cancel_delayed_work_sync(&ge->sched.base.work_tdr);
>
> xe_exec_queue_fini(q);
>-
>- xe_pm_runtime_put(guc_to_xe(guc));
> }
>
> static void guc_exec_queue_destroy_async(struct xe_exec_queue *q)
>diff --git a/drivers/gpu/drm/xe/xe_guc_tlb_inval.c b/drivers/gpu/drm/xe/xe_guc_tlb_inval.c
>index a80175c7c478..848d3493df10 100644
>--- a/drivers/gpu/drm/xe/xe_guc_tlb_inval.c
>+++ b/drivers/gpu/drm/xe/xe_guc_tlb_inval.c
>@@ -71,12 +71,11 @@ static int send_tlb_inval_ggtt(struct xe_tlb_inval *tlb_inval, u32 seqno)
> return send_tlb_inval(guc, action, ARRAY_SIZE(action));
> } else if (xe_device_uc_enabled(xe) && !xe_device_wedged(xe)) {
> struct xe_mmio *mmio = &gt->mmio;
>- unsigned int fw_ref;
>
> if (IS_SRIOV_VF(xe))
> return -ECANCELED;
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
> if (xe->info.platform == XE_PVC || GRAPHICS_VER(xe) >= 20) {
> xe_mmio_write32(mmio, PVC_GUC_TLB_INV_DESC1,
> PVC_GUC_TLB_INV_DESC1_INVALIDATE);
>@@ -86,7 +85,6 @@ static int send_tlb_inval_ggtt(struct xe_tlb_inval *tlb_inval, u32 seqno)
> xe_mmio_write32(mmio, GUC_TLB_INV_CR,
> GUC_TLB_INV_CR_INVALIDATE);
> }
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> }
>
> return -ECANCELED;
>--
>2.51.1
>
^ permalink raw reply [flat|nested] 74+ messages in thread
* [PATCH v2 09/30] drm/xe/guc_pc: Use scope-based cleanup
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (7 preceding siblings ...)
2025-11-10 23:20 ` [PATCH v2 08/30] drm/xe/guc: " Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-13 13:00 ` Gustavo Sousa
2025-11-10 23:20 ` [PATCH v2 10/30] drm/xe/mocs: " Matt Roper
` (25 subsequent siblings)
34 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based cleanup for forcewake and runtime PM in the GuC PC code.
This allows us to eliminate the goto-based cleanup and simplify some
other functions.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_guc_pc.c | 62 ++++++++++------------------------
1 file changed, 17 insertions(+), 45 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_guc_pc.c b/drivers/gpu/drm/xe/xe_guc_pc.c
index ff22235857f8..b2e10359e019 100644
--- a/drivers/gpu/drm/xe/xe_guc_pc.c
+++ b/drivers/gpu/drm/xe/xe_guc_pc.c
@@ -511,21 +511,17 @@ u32 xe_guc_pc_get_cur_freq_fw(struct xe_guc_pc *pc)
int xe_guc_pc_get_cur_freq(struct xe_guc_pc *pc, u32 *freq)
{
struct xe_gt *gt = pc_to_gt(pc);
- unsigned int fw_ref;
/*
* GuC SLPC plays with cur freq request when GuCRC is enabled
* Block RC6 for a more reliable read.
*/
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FW_GT)) {
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FW_GT))
return -ETIMEDOUT;
- }
*freq = get_cur_freq(gt);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
return 0;
}
@@ -1085,13 +1081,8 @@ int xe_guc_pc_gucrc_disable(struct xe_guc_pc *pc)
*/
int xe_guc_pc_override_gucrc_mode(struct xe_guc_pc *pc, enum slpc_gucrc_mode mode)
{
- int ret;
-
- xe_pm_runtime_get(pc_to_xe(pc));
- ret = pc_action_set_param(pc, SLPC_PARAM_PWRGATE_RC_MODE, mode);
- xe_pm_runtime_put(pc_to_xe(pc));
-
- return ret;
+ guard(xe_pm_runtime)(pc_to_xe(pc));
+ return pc_action_set_param(pc, SLPC_PARAM_PWRGATE_RC_MODE, mode);
}
/**
@@ -1102,13 +1093,8 @@ int xe_guc_pc_override_gucrc_mode(struct xe_guc_pc *pc, enum slpc_gucrc_mode mod
*/
int xe_guc_pc_unset_gucrc_mode(struct xe_guc_pc *pc)
{
- int ret;
-
- xe_pm_runtime_get(pc_to_xe(pc));
- ret = pc_action_unset_param(pc, SLPC_PARAM_PWRGATE_RC_MODE);
- xe_pm_runtime_put(pc_to_xe(pc));
-
- return ret;
+ guard(xe_pm_runtime)(pc_to_xe(pc));
+ return pc_action_unset_param(pc, SLPC_PARAM_PWRGATE_RC_MODE);
}
static void pc_init_pcode_freq(struct xe_guc_pc *pc)
@@ -1198,7 +1184,7 @@ int xe_guc_pc_set_power_profile(struct xe_guc_pc *pc, const char *buf)
return -EINVAL;
guard(mutex)(&pc->freq_lock);
- xe_pm_runtime_get_noresume(pc_to_xe(pc));
+ guard(xe_pm_runtime_noresume)(pc_to_xe(pc));
ret = pc_action_set_param(pc,
SLPC_PARAM_POWER_PROFILE,
@@ -1209,8 +1195,6 @@ int xe_guc_pc_set_power_profile(struct xe_guc_pc *pc, const char *buf)
else
pc->power_profile = val;
- xe_pm_runtime_put(pc_to_xe(pc));
-
return ret;
}
@@ -1223,17 +1207,14 @@ int xe_guc_pc_start(struct xe_guc_pc *pc)
struct xe_device *xe = pc_to_xe(pc);
struct xe_gt *gt = pc_to_gt(pc);
u32 size = PAGE_ALIGN(sizeof(struct slpc_shared_data));
- unsigned int fw_ref;
ktime_t earlier;
int ret;
xe_gt_assert(gt, xe_device_uc_enabled(xe));
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FW_GT)) {
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FW_GT))
return -ETIMEDOUT;
- }
if (xe->info.skip_guc_pc) {
if (xe->info.platform != XE_PVC)
@@ -1241,9 +1222,7 @@ int xe_guc_pc_start(struct xe_guc_pc *pc)
/* Request max possible since dynamic freq mgmt is not enabled */
pc_set_cur_freq(pc, UINT_MAX);
-
- ret = 0;
- goto out;
+ return 0;
}
xe_map_memset(xe, &pc->bo->vmap, 0, 0, size);
@@ -1252,7 +1231,7 @@ int xe_guc_pc_start(struct xe_guc_pc *pc)
earlier = ktime_get();
ret = pc_action_reset(pc);
if (ret)
- goto out;
+ return ret;
if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING,
SLPC_RESET_TIMEOUT_MS)) {
@@ -1263,8 +1242,7 @@ int xe_guc_pc_start(struct xe_guc_pc *pc)
if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING,
SLPC_RESET_EXTENDED_TIMEOUT_MS)) {
xe_gt_err(gt, "GuC PC Start failed: Dynamic GT frequency control and GT sleep states are now disabled.\n");
- ret = -EIO;
- goto out;
+ return -EIO;
}
xe_gt_warn(gt, "GuC PC excessive start time: %lldms",
@@ -1273,21 +1251,20 @@ int xe_guc_pc_start(struct xe_guc_pc *pc)
ret = pc_init_freqs(pc);
if (ret)
- goto out;
+ return ret;
ret = pc_set_mert_freq_cap(pc);
if (ret)
- goto out;
+ return ret;
if (xe->info.platform == XE_PVC) {
xe_guc_pc_gucrc_disable(pc);
- ret = 0;
- goto out;
+ return 0;
}
ret = pc_action_setup_gucrc(pc, GUCRC_FIRMWARE_CONTROL);
if (ret)
- goto out;
+ return ret;
/* Enable SLPC Optimized Strategy for compute */
ret = pc_action_set_strategy(pc, SLPC_OPTIMIZED_STRATEGY_COMPUTE);
@@ -1297,8 +1274,6 @@ int xe_guc_pc_start(struct xe_guc_pc *pc)
if (unlikely(ret))
xe_gt_err(gt, "Failed to set SLPC power profile: %pe\n", ERR_PTR(ret));
-out:
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
return ret;
}
@@ -1330,19 +1305,16 @@ static void xe_guc_pc_fini_hw(void *arg)
{
struct xe_guc_pc *pc = arg;
struct xe_device *xe = pc_to_xe(pc);
- unsigned int fw_ref;
if (xe_device_wedged(xe))
return;
- fw_ref = xe_force_wake_get(gt_to_fw(pc_to_gt(pc)), XE_FORCEWAKE_ALL);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(pc_to_gt(pc)), XE_FORCEWAKE_ALL);
xe_guc_pc_gucrc_disable(pc);
XE_WARN_ON(xe_guc_pc_stop(pc));
/* Bind requested freq to mert_freq_cap before unload */
pc_set_cur_freq(pc, min(pc_max_freq_cap(pc), pc->rpe_freq));
-
- xe_force_wake_put(gt_to_fw(pc_to_gt(pc)), fw_ref);
}
/**
--
2.51.1
^ permalink raw reply related [flat|nested] 74+ messages in thread
* Re: [PATCH v2 09/30] drm/xe/guc_pc: Use scope-based cleanup
2025-11-10 23:20 ` [PATCH v2 09/30] drm/xe/guc_pc: " Matt Roper
@ 2025-11-13 13:00 ` Gustavo Sousa
0 siblings, 0 replies; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-13 13:00 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper
Quoting Matt Roper (2025-11-10 20:20:27-03:00)
>Use scope-based cleanup for forcewake and runtime PM in the GuC PC code.
>This allows us to eliminate the goto-based cleanup and simplify some
>other functions.
>
>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
>---
> drivers/gpu/drm/xe/xe_guc_pc.c | 62 ++++++++++------------------------
> 1 file changed, 17 insertions(+), 45 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/xe_guc_pc.c b/drivers/gpu/drm/xe/xe_guc_pc.c
>index ff22235857f8..b2e10359e019 100644
>--- a/drivers/gpu/drm/xe/xe_guc_pc.c
>+++ b/drivers/gpu/drm/xe/xe_guc_pc.c
>@@ -511,21 +511,17 @@ u32 xe_guc_pc_get_cur_freq_fw(struct xe_guc_pc *pc)
> int xe_guc_pc_get_cur_freq(struct xe_guc_pc *pc, u32 *freq)
> {
> struct xe_gt *gt = pc_to_gt(pc);
>- unsigned int fw_ref;
>
> /*
> * GuC SLPC plays with cur freq request when GuCRC is enabled
> * Block RC6 for a more reliable read.
> */
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FW_GT)) {
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
>+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FW_GT))
> return -ETIMEDOUT;
>- }
>
> *freq = get_cur_freq(gt);
>
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> return 0;
> }
>
>@@ -1085,13 +1081,8 @@ int xe_guc_pc_gucrc_disable(struct xe_guc_pc *pc)
> */
> int xe_guc_pc_override_gucrc_mode(struct xe_guc_pc *pc, enum slpc_gucrc_mode mode)
> {
>- int ret;
>-
>- xe_pm_runtime_get(pc_to_xe(pc));
>- ret = pc_action_set_param(pc, SLPC_PARAM_PWRGATE_RC_MODE, mode);
>- xe_pm_runtime_put(pc_to_xe(pc));
>-
>- return ret;
>+ guard(xe_pm_runtime)(pc_to_xe(pc));
>+ return pc_action_set_param(pc, SLPC_PARAM_PWRGATE_RC_MODE, mode);
> }
>
> /**
>@@ -1102,13 +1093,8 @@ int xe_guc_pc_override_gucrc_mode(struct xe_guc_pc *pc, enum slpc_gucrc_mode mod
> */
> int xe_guc_pc_unset_gucrc_mode(struct xe_guc_pc *pc)
> {
>- int ret;
>-
>- xe_pm_runtime_get(pc_to_xe(pc));
>- ret = pc_action_unset_param(pc, SLPC_PARAM_PWRGATE_RC_MODE);
>- xe_pm_runtime_put(pc_to_xe(pc));
>-
>- return ret;
>+ guard(xe_pm_runtime)(pc_to_xe(pc));
>+ return pc_action_unset_param(pc, SLPC_PARAM_PWRGATE_RC_MODE);
> }
>
> static void pc_init_pcode_freq(struct xe_guc_pc *pc)
>@@ -1198,7 +1184,7 @@ int xe_guc_pc_set_power_profile(struct xe_guc_pc *pc, const char *buf)
> return -EINVAL;
>
> guard(mutex)(&pc->freq_lock);
>- xe_pm_runtime_get_noresume(pc_to_xe(pc));
>+ guard(xe_pm_runtime_noresume)(pc_to_xe(pc));
>
> ret = pc_action_set_param(pc,
> SLPC_PARAM_POWER_PROFILE,
>@@ -1209,8 +1195,6 @@ int xe_guc_pc_set_power_profile(struct xe_guc_pc *pc, const char *buf)
> else
> pc->power_profile = val;
>
>- xe_pm_runtime_put(pc_to_xe(pc));
>-
> return ret;
> }
>
>@@ -1223,17 +1207,14 @@ int xe_guc_pc_start(struct xe_guc_pc *pc)
> struct xe_device *xe = pc_to_xe(pc);
> struct xe_gt *gt = pc_to_gt(pc);
> u32 size = PAGE_ALIGN(sizeof(struct slpc_shared_data));
>- unsigned int fw_ref;
> ktime_t earlier;
> int ret;
>
> xe_gt_assert(gt, xe_device_uc_enabled(xe));
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FW_GT)) {
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
>+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FW_GT))
> return -ETIMEDOUT;
>- }
>
> if (xe->info.skip_guc_pc) {
> if (xe->info.platform != XE_PVC)
>@@ -1241,9 +1222,7 @@ int xe_guc_pc_start(struct xe_guc_pc *pc)
>
> /* Request max possible since dynamic freq mgmt is not enabled */
> pc_set_cur_freq(pc, UINT_MAX);
>-
>- ret = 0;
>- goto out;
>+ return 0;
> }
>
> xe_map_memset(xe, &pc->bo->vmap, 0, 0, size);
>@@ -1252,7 +1231,7 @@ int xe_guc_pc_start(struct xe_guc_pc *pc)
> earlier = ktime_get();
> ret = pc_action_reset(pc);
> if (ret)
>- goto out;
>+ return ret;
>
> if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING,
> SLPC_RESET_TIMEOUT_MS)) {
>@@ -1263,8 +1242,7 @@ int xe_guc_pc_start(struct xe_guc_pc *pc)
> if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING,
> SLPC_RESET_EXTENDED_TIMEOUT_MS)) {
> xe_gt_err(gt, "GuC PC Start failed: Dynamic GT frequency control and GT sleep states are now disabled.\n");
>- ret = -EIO;
>- goto out;
>+ return -EIO;
> }
>
> xe_gt_warn(gt, "GuC PC excessive start time: %lldms",
>@@ -1273,21 +1251,20 @@ int xe_guc_pc_start(struct xe_guc_pc *pc)
>
> ret = pc_init_freqs(pc);
> if (ret)
>- goto out;
>+ return ret;
>
> ret = pc_set_mert_freq_cap(pc);
> if (ret)
>- goto out;
>+ return ret;
>
> if (xe->info.platform == XE_PVC) {
> xe_guc_pc_gucrc_disable(pc);
>- ret = 0;
>- goto out;
>+ return 0;
> }
>
> ret = pc_action_setup_gucrc(pc, GUCRC_FIRMWARE_CONTROL);
> if (ret)
>- goto out;
>+ return ret;
>
> /* Enable SLPC Optimized Strategy for compute */
> ret = pc_action_set_strategy(pc, SLPC_OPTIMIZED_STRATEGY_COMPUTE);
>@@ -1297,8 +1274,6 @@ int xe_guc_pc_start(struct xe_guc_pc *pc)
> if (unlikely(ret))
> xe_gt_err(gt, "Failed to set SLPC power profile: %pe\n", ERR_PTR(ret));
>
>-out:
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> return ret;
> }
>
>@@ -1330,19 +1305,16 @@ static void xe_guc_pc_fini_hw(void *arg)
> {
> struct xe_guc_pc *pc = arg;
> struct xe_device *xe = pc_to_xe(pc);
>- unsigned int fw_ref;
>
> if (xe_device_wedged(xe))
> return;
>
>- fw_ref = xe_force_wake_get(gt_to_fw(pc_to_gt(pc)), XE_FORCEWAKE_ALL);
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(pc_to_gt(pc)), XE_FORCEWAKE_ALL);
> xe_guc_pc_gucrc_disable(pc);
> XE_WARN_ON(xe_guc_pc_stop(pc));
>
> /* Bind requested freq to mert_freq_cap before unload */
> pc_set_cur_freq(pc, min(pc_max_freq_cap(pc), pc->rpe_freq));
>-
>- xe_force_wake_put(gt_to_fw(pc_to_gt(pc)), fw_ref);
> }
>
> /**
>--
>2.51.1
>
^ permalink raw reply [flat|nested] 74+ messages in thread
* [PATCH v2 10/30] drm/xe/mocs: Use scope-based cleanup
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (8 preceding siblings ...)
2025-11-10 23:20 ` [PATCH v2 09/30] drm/xe/guc_pc: " Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-13 13:30 ` Gustavo Sousa
2025-11-10 23:20 ` [PATCH v2 11/30] drm/xe/pat: Use scope-based forcewake Matt Roper
` (24 subsequent siblings)
34 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Using scope-based cleanup for runtime PM and forcewake in the MOCS code
allows us to eliminate some goto-based error handling and simplify some
other functions.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/tests/xe_mocs.c | 17 ++++++-----------
drivers/gpu/drm/xe/xe_mocs.c | 18 ++++++------------
2 files changed, 12 insertions(+), 23 deletions(-)
diff --git a/drivers/gpu/drm/xe/tests/xe_mocs.c b/drivers/gpu/drm/xe/tests/xe_mocs.c
index 0e502feaca81..53a0c9c49f85 100644
--- a/drivers/gpu/drm/xe/tests/xe_mocs.c
+++ b/drivers/gpu/drm/xe/tests/xe_mocs.c
@@ -43,14 +43,12 @@ static void read_l3cc_table(struct xe_gt *gt,
{
struct kunit *test = kunit_get_current_test();
u32 l3cc, l3cc_expected;
- unsigned int fw_ref, i;
+ unsigned int i;
u32 reg_val;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL))
KUNIT_ASSERT_TRUE_MSG(test, true, "Forcewake Failed.\n");
- }
for (i = 0; i < info->num_mocs_regs; i++) {
if (!(i & 1)) {
@@ -74,7 +72,6 @@ static void read_l3cc_table(struct xe_gt *gt,
KUNIT_EXPECT_EQ_MSG(test, l3cc_expected, l3cc,
"l3cc idx=%u has incorrect val.\n", i);
}
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
}
static void read_mocs_table(struct xe_gt *gt,
@@ -82,14 +79,14 @@ static void read_mocs_table(struct xe_gt *gt,
{
struct kunit *test = kunit_get_current_test();
u32 mocs, mocs_expected;
- unsigned int fw_ref, i;
+ unsigned int i;
u32 reg_val;
KUNIT_EXPECT_TRUE_MSG(test, info->unused_entries_index,
"Unused entries index should have been defined\n");
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- KUNIT_ASSERT_NE_MSG(test, fw_ref, 0, "Forcewake Failed.\n");
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
+ KUNIT_ASSERT_NE_MSG(test, fw_ref.domains, 0, "Forcewake Failed.\n");
for (i = 0; i < info->num_mocs_regs; i++) {
if (regs_are_mcr(gt))
@@ -106,8 +103,6 @@ static void read_mocs_table(struct xe_gt *gt,
KUNIT_EXPECT_EQ_MSG(test, mocs_expected, mocs,
"mocs reg 0x%x has incorrect val.\n", i);
}
-
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
}
static int mocs_kernel_test_run_device(struct xe_device *xe)
diff --git a/drivers/gpu/drm/xe/xe_mocs.c b/drivers/gpu/drm/xe/xe_mocs.c
index 6613d3b48a84..0b7225bd77e0 100644
--- a/drivers/gpu/drm/xe/xe_mocs.c
+++ b/drivers/gpu/drm/xe/xe_mocs.c
@@ -811,26 +811,20 @@ int xe_mocs_dump(struct xe_gt *gt, struct drm_printer *p)
struct xe_device *xe = gt_to_xe(gt);
enum xe_force_wake_domains domain;
struct xe_mocs_info table;
- unsigned int fw_ref, flags;
- int err = 0;
+ unsigned int flags;
flags = get_mocs_settings(xe, &table);
domain = flags & HAS_LNCF_MOCS ? XE_FORCEWAKE_ALL : XE_FW_GT;
- xe_pm_runtime_get_noresume(xe);
- fw_ref = xe_force_wake_get(gt_to_fw(gt), domain);
- if (!xe_force_wake_ref_has_domain(fw_ref, domain)) {
- err = -ETIMEDOUT;
- goto err_fw;
- }
+ guard(xe_pm_runtime_noresume)(xe);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), domain);
+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, domain))
+ return -ETIMEDOUT;
table.ops->dump(&table, flags, gt, p);
-err_fw:
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
- xe_pm_runtime_put(xe);
- return err;
+ return 0;
}
#if IS_ENABLED(CONFIG_DRM_XE_KUNIT_TEST)
--
2.51.1
^ permalink raw reply related [flat|nested] 74+ messages in thread
* Re: [PATCH v2 10/30] drm/xe/mocs: Use scope-based cleanup
2025-11-10 23:20 ` [PATCH v2 10/30] drm/xe/mocs: " Matt Roper
@ 2025-11-13 13:30 ` Gustavo Sousa
2025-11-13 23:28 ` Matt Roper
0 siblings, 1 reply; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-13 13:30 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper
Quoting Matt Roper (2025-11-10 20:20:28-03:00)
>Using scope-based cleanup for runtime PM and forcewake in the MOCS code
>allows us to eliminate some goto-based error handling and simplify some
>other functions.
>
>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
>---
> drivers/gpu/drm/xe/tests/xe_mocs.c | 17 ++++++-----------
> drivers/gpu/drm/xe/xe_mocs.c | 18 ++++++------------
> 2 files changed, 12 insertions(+), 23 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/tests/xe_mocs.c b/drivers/gpu/drm/xe/tests/xe_mocs.c
>index 0e502feaca81..53a0c9c49f85 100644
>--- a/drivers/gpu/drm/xe/tests/xe_mocs.c
>+++ b/drivers/gpu/drm/xe/tests/xe_mocs.c
>@@ -43,14 +43,12 @@ static void read_l3cc_table(struct xe_gt *gt,
> {
> struct kunit *test = kunit_get_current_test();
> u32 l3cc, l3cc_expected;
>- unsigned int fw_ref, i;
>+ unsigned int i;
> u32 reg_val;
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
>- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
>+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL))
> KUNIT_ASSERT_TRUE_MSG(test, true, "Forcewake Failed.\n");
Shouldn't KUNIT_ASSERT_TRUE_MSG() be called with false here?
This issue was already present and is unrelated to the patch. However,
since we are touching this, maybe we could fix it with something like:
KUNIT_ASSERT_TRUE(test,
xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL));
The patch itself looks good to me, so
Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
The suggested fix could be sneaked into this patch or we could leave it
for a separate patch. Your call.
--
Gustavo Sousa
>- }
>
> for (i = 0; i < info->num_mocs_regs; i++) {
> if (!(i & 1)) {
>@@ -74,7 +72,6 @@ static void read_l3cc_table(struct xe_gt *gt,
> KUNIT_EXPECT_EQ_MSG(test, l3cc_expected, l3cc,
> "l3cc idx=%u has incorrect val.\n", i);
> }
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> }
>
> static void read_mocs_table(struct xe_gt *gt,
>@@ -82,14 +79,14 @@ static void read_mocs_table(struct xe_gt *gt,
> {
> struct kunit *test = kunit_get_current_test();
> u32 mocs, mocs_expected;
>- unsigned int fw_ref, i;
>+ unsigned int i;
> u32 reg_val;
>
> KUNIT_EXPECT_TRUE_MSG(test, info->unused_entries_index,
> "Unused entries index should have been defined\n");
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>- KUNIT_ASSERT_NE_MSG(test, fw_ref, 0, "Forcewake Failed.\n");
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
>+ KUNIT_ASSERT_NE_MSG(test, fw_ref.domains, 0, "Forcewake Failed.\n");
>
> for (i = 0; i < info->num_mocs_regs; i++) {
> if (regs_are_mcr(gt))
>@@ -106,8 +103,6 @@ static void read_mocs_table(struct xe_gt *gt,
> KUNIT_EXPECT_EQ_MSG(test, mocs_expected, mocs,
> "mocs reg 0x%x has incorrect val.\n", i);
> }
>-
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> }
>
> static int mocs_kernel_test_run_device(struct xe_device *xe)
>diff --git a/drivers/gpu/drm/xe/xe_mocs.c b/drivers/gpu/drm/xe/xe_mocs.c
>index 6613d3b48a84..0b7225bd77e0 100644
>--- a/drivers/gpu/drm/xe/xe_mocs.c
>+++ b/drivers/gpu/drm/xe/xe_mocs.c
>@@ -811,26 +811,20 @@ int xe_mocs_dump(struct xe_gt *gt, struct drm_printer *p)
> struct xe_device *xe = gt_to_xe(gt);
> enum xe_force_wake_domains domain;
> struct xe_mocs_info table;
>- unsigned int fw_ref, flags;
>- int err = 0;
>+ unsigned int flags;
>
> flags = get_mocs_settings(xe, &table);
>
> domain = flags & HAS_LNCF_MOCS ? XE_FORCEWAKE_ALL : XE_FW_GT;
>- xe_pm_runtime_get_noresume(xe);
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), domain);
>
>- if (!xe_force_wake_ref_has_domain(fw_ref, domain)) {
>- err = -ETIMEDOUT;
>- goto err_fw;
>- }
>+ guard(xe_pm_runtime_noresume)(xe);
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), domain);
>+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, domain))
>+ return -ETIMEDOUT;
>
> table.ops->dump(&table, flags, gt, p);
>
>-err_fw:
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
>- xe_pm_runtime_put(xe);
>- return err;
>+ return 0;
> }
>
> #if IS_ENABLED(CONFIG_DRM_XE_KUNIT_TEST)
>--
>2.51.1
>
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v2 10/30] drm/xe/mocs: Use scope-based cleanup
2025-11-13 13:30 ` Gustavo Sousa
@ 2025-11-13 23:28 ` Matt Roper
0 siblings, 0 replies; 74+ messages in thread
From: Matt Roper @ 2025-11-13 23:28 UTC (permalink / raw)
To: Gustavo Sousa; +Cc: intel-xe
On Thu, Nov 13, 2025 at 10:30:16AM -0300, Gustavo Sousa wrote:
> Quoting Matt Roper (2025-11-10 20:20:28-03:00)
> >Using scope-based cleanup for runtime PM and forcewake in the MOCS code
> >allows us to eliminate some goto-based error handling and simplify some
> >other functions.
> >
> >Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
> >---
> > drivers/gpu/drm/xe/tests/xe_mocs.c | 17 ++++++-----------
> > drivers/gpu/drm/xe/xe_mocs.c | 18 ++++++------------
> > 2 files changed, 12 insertions(+), 23 deletions(-)
> >
> >diff --git a/drivers/gpu/drm/xe/tests/xe_mocs.c b/drivers/gpu/drm/xe/tests/xe_mocs.c
> >index 0e502feaca81..53a0c9c49f85 100644
> >--- a/drivers/gpu/drm/xe/tests/xe_mocs.c
> >+++ b/drivers/gpu/drm/xe/tests/xe_mocs.c
> >@@ -43,14 +43,12 @@ static void read_l3cc_table(struct xe_gt *gt,
> > {
> > struct kunit *test = kunit_get_current_test();
> > u32 l3cc, l3cc_expected;
> >- unsigned int fw_ref, i;
> >+ unsigned int i;
> > u32 reg_val;
> >
> >- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
> >- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
> >- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> >+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
> >+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL))
> > KUNIT_ASSERT_TRUE_MSG(test, true, "Forcewake Failed.\n");
>
> Shouldn't KUNIT_ASSERT_TRUE_MSG() be called with false here?
>
> This issue was already present and is unrelated to the patch. However,
> since we are touching this, maybe we could fix it with something like:
>
> KUNIT_ASSERT_TRUE(test,
> xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL));
I noticed this earlier, but figured the fix should probably be a
separate patch. Either moving the forcewake check into the condition
like you mention here, or replacing the assertion with
KUNIT_FAIL_AND_ABORT should fix it. I'll include a patch that does one
of those with the next revision of the series.
Matt
>
> The patch it self looks good to me, so
>
> Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
>
> The suggested fix could be sneaked into this patch or we could leave it
> for a separate patch. Your call.
>
> --
> Gustavo Sousa
>
> >- }
> >
> > for (i = 0; i < info->num_mocs_regs; i++) {
> > if (!(i & 1)) {
> >@@ -74,7 +72,6 @@ static void read_l3cc_table(struct xe_gt *gt,
> > KUNIT_EXPECT_EQ_MSG(test, l3cc_expected, l3cc,
> > "l3cc idx=%u has incorrect val.\n", i);
> > }
> >- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> > }
> >
> > static void read_mocs_table(struct xe_gt *gt,
> >@@ -82,14 +79,14 @@ static void read_mocs_table(struct xe_gt *gt,
> > {
> > struct kunit *test = kunit_get_current_test();
> > u32 mocs, mocs_expected;
> >- unsigned int fw_ref, i;
> >+ unsigned int i;
> > u32 reg_val;
> >
> > KUNIT_EXPECT_TRUE_MSG(test, info->unused_entries_index,
> > "Unused entries index should have been defined\n");
> >
> >- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
> >- KUNIT_ASSERT_NE_MSG(test, fw_ref, 0, "Forcewake Failed.\n");
> >+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
> >+ KUNIT_ASSERT_NE_MSG(test, fw_ref.domains, 0, "Forcewake Failed.\n");
> >
> > for (i = 0; i < info->num_mocs_regs; i++) {
> > if (regs_are_mcr(gt))
> >@@ -106,8 +103,6 @@ static void read_mocs_table(struct xe_gt *gt,
> > KUNIT_EXPECT_EQ_MSG(test, mocs_expected, mocs,
> > "mocs reg 0x%x has incorrect val.\n", i);
> > }
> >-
> >- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> > }
> >
> > static int mocs_kernel_test_run_device(struct xe_device *xe)
> >diff --git a/drivers/gpu/drm/xe/xe_mocs.c b/drivers/gpu/drm/xe/xe_mocs.c
> >index 6613d3b48a84..0b7225bd77e0 100644
> >--- a/drivers/gpu/drm/xe/xe_mocs.c
> >+++ b/drivers/gpu/drm/xe/xe_mocs.c
> >@@ -811,26 +811,20 @@ int xe_mocs_dump(struct xe_gt *gt, struct drm_printer *p)
> > struct xe_device *xe = gt_to_xe(gt);
> > enum xe_force_wake_domains domain;
> > struct xe_mocs_info table;
> >- unsigned int fw_ref, flags;
> >- int err = 0;
> >+ unsigned int flags;
> >
> > flags = get_mocs_settings(xe, &table);
> >
> > domain = flags & HAS_LNCF_MOCS ? XE_FORCEWAKE_ALL : XE_FW_GT;
> >- xe_pm_runtime_get_noresume(xe);
> >- fw_ref = xe_force_wake_get(gt_to_fw(gt), domain);
> >
> >- if (!xe_force_wake_ref_has_domain(fw_ref, domain)) {
> >- err = -ETIMEDOUT;
> >- goto err_fw;
> >- }
> >+ guard(xe_pm_runtime_noresume)(xe);
> >+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), domain);
> >+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, domain))
> >+ return -ETIMEDOUT;
> >
> > table.ops->dump(&table, flags, gt, p);
> >
> >-err_fw:
> >- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> >- xe_pm_runtime_put(xe);
> >- return err;
> >+ return 0;
> > }
> >
> > #if IS_ENABLED(CONFIG_DRM_XE_KUNIT_TEST)
> >--
> >2.51.1
> >
--
Matt Roper
Graphics Software Engineer
Linux GPU Platform Enablement
Intel Corporation
* [PATCH v2 11/30] drm/xe/pat: Use scope-based forcewake
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (9 preceding siblings ...)
2025-11-10 23:20 ` [PATCH v2 10/30] drm/xe/mocs: " Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-13 13:37 ` Gustavo Sousa
2025-11-10 23:20 ` [PATCH v2 12/30] drm/xe/pxp: Use scope-based cleanup Matt Roper
` (23 subsequent siblings)
34 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based cleanup for forcewake in the PAT code to slightly
simplify the code.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_pat.c | 36 ++++++++++++------------------------
1 file changed, 12 insertions(+), 24 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_pat.c b/drivers/gpu/drm/xe/xe_pat.c
index 68171cceea18..97b5e995f7d7 100644
--- a/drivers/gpu/drm/xe/xe_pat.c
+++ b/drivers/gpu/drm/xe/xe_pat.c
@@ -233,11 +233,10 @@ static void program_pat_mcr(struct xe_gt *gt, const struct xe_pat_table_entry ta
static int xelp_dump(struct xe_gt *gt, struct drm_printer *p)
{
struct xe_device *xe = gt_to_xe(gt);
- unsigned int fw_ref;
int i;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
+ if (!fw_ref.domains)
return -ETIMEDOUT;
drm_printf(p, "PAT table:\n");
@@ -250,7 +249,6 @@ static int xelp_dump(struct xe_gt *gt, struct drm_printer *p)
XELP_MEM_TYPE_STR_MAP[mem_type], pat);
}
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
return 0;
}
@@ -262,11 +260,10 @@ static const struct xe_pat_ops xelp_pat_ops = {
static int xehp_dump(struct xe_gt *gt, struct drm_printer *p)
{
struct xe_device *xe = gt_to_xe(gt);
- unsigned int fw_ref;
int i;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
+ if (!fw_ref.domains)
return -ETIMEDOUT;
drm_printf(p, "PAT table:\n");
@@ -281,7 +278,6 @@ static int xehp_dump(struct xe_gt *gt, struct drm_printer *p)
XELP_MEM_TYPE_STR_MAP[mem_type], pat);
}
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
return 0;
}
@@ -293,11 +289,10 @@ static const struct xe_pat_ops xehp_pat_ops = {
static int xehpc_dump(struct xe_gt *gt, struct drm_printer *p)
{
struct xe_device *xe = gt_to_xe(gt);
- unsigned int fw_ref;
int i;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
+ if (!fw_ref.domains)
return -ETIMEDOUT;
drm_printf(p, "PAT table:\n");
@@ -310,7 +305,6 @@ static int xehpc_dump(struct xe_gt *gt, struct drm_printer *p)
REG_FIELD_GET(XEHPC_CLOS_LEVEL_MASK, pat), pat);
}
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
return 0;
}
@@ -322,11 +316,10 @@ static const struct xe_pat_ops xehpc_pat_ops = {
static int xelpg_dump(struct xe_gt *gt, struct drm_printer *p)
{
struct xe_device *xe = gt_to_xe(gt);
- unsigned int fw_ref;
int i;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
+ if (!fw_ref.domains)
return -ETIMEDOUT;
drm_printf(p, "PAT table:\n");
@@ -344,7 +337,6 @@ static int xelpg_dump(struct xe_gt *gt, struct drm_printer *p)
REG_FIELD_GET(XELPG_INDEX_COH_MODE_MASK, pat), pat);
}
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
return 0;
}
@@ -361,12 +353,11 @@ static const struct xe_pat_ops xelpg_pat_ops = {
static int xe2_dump(struct xe_gt *gt, struct drm_printer *p)
{
struct xe_device *xe = gt_to_xe(gt);
- unsigned int fw_ref;
u32 pat;
int i;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
+ if (!fw_ref.domains)
return -ETIMEDOUT;
drm_printf(p, "PAT table: (* = reserved entry)\n");
@@ -406,7 +397,6 @@ static int xe2_dump(struct xe_gt *gt, struct drm_printer *p)
REG_FIELD_GET(XE2_COH_MODE, pat),
pat);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
return 0;
}
@@ -419,12 +409,11 @@ static const struct xe_pat_ops xe2_pat_ops = {
static int xe3p_xpc_dump(struct xe_gt *gt, struct drm_printer *p)
{
struct xe_device *xe = gt_to_xe(gt);
- unsigned int fw_ref;
u32 pat;
int i;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
+ if (!fw_ref.domains)
return -ETIMEDOUT;
drm_printf(p, "PAT table: (* = reserved entry)\n");
@@ -456,7 +445,6 @@ static int xe3p_xpc_dump(struct xe_gt *gt, struct drm_printer *p)
REG_FIELD_GET(XE2_COH_MODE, pat),
pat);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
return 0;
}
--
2.51.1
* Re: [PATCH v2 11/30] drm/xe/pat: Use scope-based forcewake
2025-11-10 23:20 ` [PATCH v2 11/30] drm/xe/pat: Use scope-based forcewake Matt Roper
@ 2025-11-13 13:37 ` Gustavo Sousa
0 siblings, 0 replies; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-13 13:37 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper
Quoting Matt Roper (2025-11-10 20:20:29-03:00)
>Use scope-based cleanup for forcewake in the PAT code to slightly
>simplify the code.
>
>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
>---
> drivers/gpu/drm/xe/xe_pat.c | 36 ++++++++++++------------------------
> 1 file changed, 12 insertions(+), 24 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/xe_pat.c b/drivers/gpu/drm/xe/xe_pat.c
>index 68171cceea18..97b5e995f7d7 100644
>--- a/drivers/gpu/drm/xe/xe_pat.c
>+++ b/drivers/gpu/drm/xe/xe_pat.c
>@@ -233,11 +233,10 @@ static void program_pat_mcr(struct xe_gt *gt, const struct xe_pat_table_entry ta
> static int xelp_dump(struct xe_gt *gt, struct drm_printer *p)
> {
> struct xe_device *xe = gt_to_xe(gt);
>- unsigned int fw_ref;
> int i;
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>- if (!fw_ref)
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
>+ if (!fw_ref.domains)
> return -ETIMEDOUT;
>
> drm_printf(p, "PAT table:\n");
>@@ -250,7 +249,6 @@ static int xelp_dump(struct xe_gt *gt, struct drm_printer *p)
> XELP_MEM_TYPE_STR_MAP[mem_type], pat);
> }
>
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> return 0;
> }
>
>@@ -262,11 +260,10 @@ static const struct xe_pat_ops xelp_pat_ops = {
> static int xehp_dump(struct xe_gt *gt, struct drm_printer *p)
> {
> struct xe_device *xe = gt_to_xe(gt);
>- unsigned int fw_ref;
> int i;
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>- if (!fw_ref)
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
>+ if (!fw_ref.domains)
> return -ETIMEDOUT;
>
> drm_printf(p, "PAT table:\n");
>@@ -281,7 +278,6 @@ static int xehp_dump(struct xe_gt *gt, struct drm_printer *p)
> XELP_MEM_TYPE_STR_MAP[mem_type], pat);
> }
>
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> return 0;
> }
>
>@@ -293,11 +289,10 @@ static const struct xe_pat_ops xehp_pat_ops = {
> static int xehpc_dump(struct xe_gt *gt, struct drm_printer *p)
> {
> struct xe_device *xe = gt_to_xe(gt);
>- unsigned int fw_ref;
> int i;
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>- if (!fw_ref)
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
>+ if (!fw_ref.domains)
> return -ETIMEDOUT;
>
> drm_printf(p, "PAT table:\n");
>@@ -310,7 +305,6 @@ static int xehpc_dump(struct xe_gt *gt, struct drm_printer *p)
> REG_FIELD_GET(XEHPC_CLOS_LEVEL_MASK, pat), pat);
> }
>
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> return 0;
> }
>
>@@ -322,11 +316,10 @@ static const struct xe_pat_ops xehpc_pat_ops = {
> static int xelpg_dump(struct xe_gt *gt, struct drm_printer *p)
> {
> struct xe_device *xe = gt_to_xe(gt);
>- unsigned int fw_ref;
> int i;
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>- if (!fw_ref)
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
>+ if (!fw_ref.domains)
> return -ETIMEDOUT;
>
> drm_printf(p, "PAT table:\n");
>@@ -344,7 +337,6 @@ static int xelpg_dump(struct xe_gt *gt, struct drm_printer *p)
> REG_FIELD_GET(XELPG_INDEX_COH_MODE_MASK, pat), pat);
> }
>
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> return 0;
> }
>
>@@ -361,12 +353,11 @@ static const struct xe_pat_ops xelpg_pat_ops = {
> static int xe2_dump(struct xe_gt *gt, struct drm_printer *p)
> {
> struct xe_device *xe = gt_to_xe(gt);
>- unsigned int fw_ref;
> u32 pat;
> int i;
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>- if (!fw_ref)
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
>+ if (!fw_ref.domains)
> return -ETIMEDOUT;
>
> drm_printf(p, "PAT table: (* = reserved entry)\n");
>@@ -406,7 +397,6 @@ static int xe2_dump(struct xe_gt *gt, struct drm_printer *p)
> REG_FIELD_GET(XE2_COH_MODE, pat),
> pat);
>
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> return 0;
> }
>
>@@ -419,12 +409,11 @@ static const struct xe_pat_ops xe2_pat_ops = {
> static int xe3p_xpc_dump(struct xe_gt *gt, struct drm_printer *p)
> {
> struct xe_device *xe = gt_to_xe(gt);
>- unsigned int fw_ref;
> u32 pat;
> int i;
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>- if (!fw_ref)
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
>+ if (!fw_ref.domains)
> return -ETIMEDOUT;
>
> drm_printf(p, "PAT table: (* = reserved entry)\n");
>@@ -456,7 +445,6 @@ static int xe3p_xpc_dump(struct xe_gt *gt, struct drm_printer *p)
> REG_FIELD_GET(XE2_COH_MODE, pat),
> pat);
>
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> return 0;
> }
>
>--
>2.51.1
>
* [PATCH v2 12/30] drm/xe/pxp: Use scope-based cleanup
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (10 preceding siblings ...)
2025-11-10 23:20 ` [PATCH v2 11/30] drm/xe/pat: Use scope-based forcewake Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-13 13:40 ` Gustavo Sousa
2025-11-10 23:20 ` [PATCH v2 13/30] drm/xe/gsc: " Matt Roper
` (22 subsequent siblings)
34 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based cleanup for forcewake and runtime PM. This allows us to
eliminate some goto-based error handling and simplify other functions.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_pxp.c | 55 ++++++++++++-------------------------
1 file changed, 18 insertions(+), 37 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
index bdbdbbf6a678..508f4c128a48 100644
--- a/drivers/gpu/drm/xe/xe_pxp.c
+++ b/drivers/gpu/drm/xe/xe_pxp.c
@@ -58,10 +58,9 @@ bool xe_pxp_is_enabled(const struct xe_pxp *pxp)
static bool pxp_prerequisites_done(const struct xe_pxp *pxp)
{
struct xe_gt *gt = pxp->gt;
- unsigned int fw_ref;
bool ready;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
/*
* If force_wake fails we could falsely report the prerequisites as not
@@ -71,14 +70,12 @@ static bool pxp_prerequisites_done(const struct xe_pxp *pxp)
* PXP. Therefore, we can just log the force_wake error and not escalate
* it.
*/
- XE_WARN_ON(!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL));
+ XE_WARN_ON(!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL));
/* PXP requires both HuC authentication via GSC and GSC proxy initialized */
ready = xe_huc_is_authenticated(&gt->uc.huc, XE_HUC_AUTH_VIA_GSC) &&
xe_gsc_proxy_init_done(&gt->uc.gsc);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
-
return ready;
}
@@ -104,13 +101,12 @@ int xe_pxp_get_readiness_status(struct xe_pxp *pxp)
xe_uc_fw_status_to_error(pxp->gt->uc.gsc.fw.status))
return -EIO;
- xe_pm_runtime_get(pxp->xe);
+ guard(xe_pm_runtime)(pxp->xe);
/* PXP requires both HuC loaded and GSC proxy initialized */
if (pxp_prerequisites_done(pxp))
ret = 1;
- xe_pm_runtime_put(pxp->xe);
return ret;
}
@@ -135,35 +131,28 @@ static void pxp_invalidate_queues(struct xe_pxp *pxp);
static int pxp_terminate_hw(struct xe_pxp *pxp)
{
struct xe_gt *gt = pxp->gt;
- unsigned int fw_ref;
int ret = 0;
drm_dbg(&pxp->xe->drm, "Terminating PXP\n");
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FW_GT)) {
- ret = -EIO;
- goto out;
- }
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FW_GT))
+ return -EIO;
/* terminate the hw session */
ret = xe_pxp_submit_session_termination(pxp, ARB_SESSION);
if (ret)
- goto out;
+ return ret;
ret = pxp_wait_for_session_state(pxp, ARB_SESSION, false);
if (ret)
- goto out;
+ return ret;
/* Trigger full HW cleanup */
xe_mmio_write32(&gt->mmio, KCR_GLOBAL_TERMINATE, 1);
/* now we can tell the GSC to clean up its own state */
- ret = xe_pxp_submit_session_invalidation(&pxp->gsc_res, ARB_SESSION);
-
-out:
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
- return ret;
+ return xe_pxp_submit_session_invalidation(&pxp->gsc_res, ARB_SESSION);
}
static void mark_termination_in_progress(struct xe_pxp *pxp)
@@ -326,14 +315,12 @@ static int kcr_pxp_set_status(const struct xe_pxp *pxp, bool enable)
{
u32 val = enable ? _MASKED_BIT_ENABLE(KCR_INIT_ALLOW_DISPLAY_ME_WRITES) :
_MASKED_BIT_DISABLE(KCR_INIT_ALLOW_DISPLAY_ME_WRITES);
- unsigned int fw_ref;
- fw_ref = xe_force_wake_get(gt_to_fw(pxp->gt), XE_FW_GT);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FW_GT))
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(pxp->gt), XE_FW_GT);
+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FW_GT))
return -EIO;
xe_mmio_write32(&pxp->gt->mmio, KCR_INIT, val);
- xe_force_wake_put(gt_to_fw(pxp->gt), fw_ref);
return 0;
}
@@ -453,34 +440,28 @@ int xe_pxp_init(struct xe_device *xe)
static int __pxp_start_arb_session(struct xe_pxp *pxp)
{
int ret;
- unsigned int fw_ref;
- fw_ref = xe_force_wake_get(gt_to_fw(pxp->gt), XE_FW_GT);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FW_GT))
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(pxp->gt), XE_FW_GT);
+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FW_GT))
return -EIO;
- if (pxp_session_is_in_play(pxp, ARB_SESSION)) {
- ret = -EEXIST;
- goto out_force_wake;
- }
+ if (pxp_session_is_in_play(pxp, ARB_SESSION))
+ return -EEXIST;
ret = xe_pxp_submit_session_init(&pxp->gsc_res, ARB_SESSION);
if (ret) {
drm_err(&pxp->xe->drm, "Failed to init PXP arb session: %pe\n", ERR_PTR(ret));
- goto out_force_wake;
+ return ret;
}
ret = pxp_wait_for_session_state(pxp, ARB_SESSION, true);
if (ret) {
drm_err(&pxp->xe->drm, "PXP ARB session failed to go in play%pe\n", ERR_PTR(ret));
- goto out_force_wake;
+ return ret;
}
drm_dbg(&pxp->xe->drm, "PXP ARB session is active\n");
-
-out_force_wake:
- xe_force_wake_put(gt_to_fw(pxp->gt), fw_ref);
- return ret;
+ return 0;
}
/**
--
2.51.1
* Re: [PATCH v2 12/30] drm/xe/pxp: Use scope-based cleanup
2025-11-10 23:20 ` [PATCH v2 12/30] drm/xe/pxp: Use scope-based cleanup Matt Roper
@ 2025-11-13 13:40 ` Gustavo Sousa
0 siblings, 0 replies; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-13 13:40 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper
Quoting Matt Roper (2025-11-10 20:20:30-03:00)
>Use scope-based cleanup for forcewake and runtime PM. This allows us to
>eliminate some goto-based error handling and simplify other functions.
>
>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
>---
> drivers/gpu/drm/xe/xe_pxp.c | 55 ++++++++++++-------------------------
> 1 file changed, 18 insertions(+), 37 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
>index bdbdbbf6a678..508f4c128a48 100644
>--- a/drivers/gpu/drm/xe/xe_pxp.c
>+++ b/drivers/gpu/drm/xe/xe_pxp.c
>@@ -58,10 +58,9 @@ bool xe_pxp_is_enabled(const struct xe_pxp *pxp)
> static bool pxp_prerequisites_done(const struct xe_pxp *pxp)
> {
> struct xe_gt *gt = pxp->gt;
>- unsigned int fw_ref;
> bool ready;
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
>
> /*
> * If force_wake fails we could falsely report the prerequisites as not
>@@ -71,14 +70,12 @@ static bool pxp_prerequisites_done(const struct xe_pxp *pxp)
> * PXP. Therefore, we can just log the force_wake error and not escalate
> * it.
> */
>- XE_WARN_ON(!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL));
>+ XE_WARN_ON(!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL));
>
> /* PXP requires both HuC authentication via GSC and GSC proxy initialized */
> ready = xe_huc_is_authenticated(&gt->uc.huc, XE_HUC_AUTH_VIA_GSC) &&
> xe_gsc_proxy_init_done(&gt->uc.gsc);
>
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
>-
> return ready;
> }
>
>@@ -104,13 +101,12 @@ int xe_pxp_get_readiness_status(struct xe_pxp *pxp)
> xe_uc_fw_status_to_error(pxp->gt->uc.gsc.fw.status))
> return -EIO;
>
>- xe_pm_runtime_get(pxp->xe);
>+ guard(xe_pm_runtime)(pxp->xe);
>
> /* PXP requires both HuC loaded and GSC proxy initialized */
> if (pxp_prerequisites_done(pxp))
> ret = 1;
>
>- xe_pm_runtime_put(pxp->xe);
> return ret;
> }
>
>@@ -135,35 +131,28 @@ static void pxp_invalidate_queues(struct xe_pxp *pxp);
> static int pxp_terminate_hw(struct xe_pxp *pxp)
> {
> struct xe_gt *gt = pxp->gt;
>- unsigned int fw_ref;
> int ret = 0;
>
> drm_dbg(&pxp->xe->drm, "Terminating PXP\n");
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FW_GT)) {
>- ret = -EIO;
>- goto out;
>- }
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
>+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FW_GT))
>+ return -EIO;
>
> /* terminate the hw session */
> ret = xe_pxp_submit_session_termination(pxp, ARB_SESSION);
> if (ret)
>- goto out;
>+ return ret;
>
> ret = pxp_wait_for_session_state(pxp, ARB_SESSION, false);
> if (ret)
>- goto out;
>+ return ret;
>
> /* Trigger full HW cleanup */
> xe_mmio_write32(&gt->mmio, KCR_GLOBAL_TERMINATE, 1);
>
> /* now we can tell the GSC to clean up its own state */
>- ret = xe_pxp_submit_session_invalidation(&pxp->gsc_res, ARB_SESSION);
>-
>-out:
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
>- return ret;
>+ return xe_pxp_submit_session_invalidation(&pxp->gsc_res, ARB_SESSION);
> }
>
> static void mark_termination_in_progress(struct xe_pxp *pxp)
>@@ -326,14 +315,12 @@ static int kcr_pxp_set_status(const struct xe_pxp *pxp, bool enable)
> {
> u32 val = enable ? _MASKED_BIT_ENABLE(KCR_INIT_ALLOW_DISPLAY_ME_WRITES) :
> _MASKED_BIT_DISABLE(KCR_INIT_ALLOW_DISPLAY_ME_WRITES);
>- unsigned int fw_ref;
>
>- fw_ref = xe_force_wake_get(gt_to_fw(pxp->gt), XE_FW_GT);
>- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FW_GT))
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(pxp->gt), XE_FW_GT);
>+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FW_GT))
> return -EIO;
>
> xe_mmio_write32(&pxp->gt->mmio, KCR_INIT, val);
>- xe_force_wake_put(gt_to_fw(pxp->gt), fw_ref);
>
> return 0;
> }
>@@ -453,34 +440,28 @@ int xe_pxp_init(struct xe_device *xe)
> static int __pxp_start_arb_session(struct xe_pxp *pxp)
> {
> int ret;
>- unsigned int fw_ref;
>
>- fw_ref = xe_force_wake_get(gt_to_fw(pxp->gt), XE_FW_GT);
>- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FW_GT))
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(pxp->gt), XE_FW_GT);
>+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FW_GT))
> return -EIO;
>
>- if (pxp_session_is_in_play(pxp, ARB_SESSION)) {
>- ret = -EEXIST;
>- goto out_force_wake;
>- }
>+ if (pxp_session_is_in_play(pxp, ARB_SESSION))
>+ return -EEXIST;
>
> ret = xe_pxp_submit_session_init(&pxp->gsc_res, ARB_SESSION);
> if (ret) {
> drm_err(&pxp->xe->drm, "Failed to init PXP arb session: %pe\n", ERR_PTR(ret));
>- goto out_force_wake;
>+ return ret;
> }
>
> ret = pxp_wait_for_session_state(pxp, ARB_SESSION, true);
> if (ret) {
> drm_err(&pxp->xe->drm, "PXP ARB session failed to go in play%pe\n", ERR_PTR(ret));
>- goto out_force_wake;
>+ return ret;
> }
>
> drm_dbg(&pxp->xe->drm, "PXP ARB session is active\n");
>-
>-out_force_wake:
>- xe_force_wake_put(gt_to_fw(pxp->gt), fw_ref);
>- return ret;
>+ return 0;
> }
>
> /**
>--
>2.51.1
>
* [PATCH v2 13/30] drm/xe/gsc: Use scope-based cleanup
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (11 preceding siblings ...)
2025-11-10 23:20 ` [PATCH v2 12/30] drm/xe/pxp: Use scope-based cleanup Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-13 13:46 ` Gustavo Sousa
2025-11-10 23:20 ` [PATCH v2 14/30] drm/xe/device: " Matt Roper
` (21 subsequent siblings)
34 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based cleanup for forcewake and runtime PM to eliminate some
goto-based error handling and simplify other functions.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_gsc.c | 21 ++++++---------------
drivers/gpu/drm/xe/xe_gsc_proxy.c | 17 +++++++----------
2 files changed, 13 insertions(+), 25 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_gsc.c b/drivers/gpu/drm/xe/xe_gsc.c
index dd69cb834f8e..a3157b0fe791 100644
--- a/drivers/gpu/drm/xe/xe_gsc.c
+++ b/drivers/gpu/drm/xe/xe_gsc.c
@@ -352,7 +352,6 @@ static void gsc_work(struct work_struct *work)
struct xe_gsc *gsc = container_of(work, typeof(*gsc), work);
struct xe_gt *gt = gsc_to_gt(gsc);
struct xe_device *xe = gt_to_xe(gt);
- unsigned int fw_ref;
u32 actions;
int ret;
@@ -361,13 +360,12 @@ static void gsc_work(struct work_struct *work)
gsc->work_actions = 0;
spin_unlock_irq(&gsc->lock);
- xe_pm_runtime_get(xe);
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GSC);
+ guard(xe_pm_runtime)(xe);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GSC);
if (actions & GSC_ACTION_ER_COMPLETE) {
- ret = gsc_er_complete(gt);
- if (ret)
- goto out;
+ if (gsc_er_complete(gt))
+ return;
}
if (actions & GSC_ACTION_FW_LOAD) {
@@ -380,10 +378,6 @@ static void gsc_work(struct work_struct *work)
if (actions & GSC_ACTION_SW_PROXY)
xe_gsc_proxy_request_handler(gsc);
-
-out:
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
- xe_pm_runtime_put(xe);
}
void xe_gsc_hwe_irq_handler(struct xe_hw_engine *hwe, u16 intr_vec)
@@ -615,7 +609,6 @@ void xe_gsc_print_info(struct xe_gsc *gsc, struct drm_printer *p)
{
struct xe_gt *gt = gsc_to_gt(gsc);
struct xe_mmio *mmio = &gt->mmio;
- unsigned int fw_ref;
xe_uc_fw_print(&gsc->fw, p);
@@ -624,8 +617,8 @@ void xe_gsc_print_info(struct xe_gsc *gsc, struct drm_printer *p)
if (!xe_uc_fw_is_enabled(&gsc->fw))
return;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GSC);
- if (!fw_ref)
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GSC);
+ if (!fw_ref.domains)
return;
drm_printf(p, "\nHECI1 FWSTS: 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x\n",
@@ -635,6 +628,4 @@ void xe_gsc_print_info(struct xe_gsc *gsc, struct drm_printer *p)
xe_mmio_read32(mmio, HECI_FWSTS4(MTL_GSC_HECI1_BASE)),
xe_mmio_read32(mmio, HECI_FWSTS5(MTL_GSC_HECI1_BASE)),
xe_mmio_read32(mmio, HECI_FWSTS6(MTL_GSC_HECI1_BASE)));
-
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
}
diff --git a/drivers/gpu/drm/xe/xe_gsc_proxy.c b/drivers/gpu/drm/xe/xe_gsc_proxy.c
index 464282a89eef..e7573a0c5e5d 100644
--- a/drivers/gpu/drm/xe/xe_gsc_proxy.c
+++ b/drivers/gpu/drm/xe/xe_gsc_proxy.c
@@ -440,22 +440,19 @@ static void xe_gsc_proxy_remove(void *arg)
struct xe_gsc *gsc = arg;
struct xe_gt *gt = gsc_to_gt(gsc);
struct xe_device *xe = gt_to_xe(gt);
- unsigned int fw_ref = 0;
if (!gsc->proxy.component_added)
return;
/* disable HECI2 IRQs */
- xe_pm_runtime_get(xe);
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GSC);
- if (!fw_ref)
- xe_gt_err(gt, "failed to get forcewake to disable GSC interrupts\n");
+ scoped_guard(xe_pm_runtime, xe) {
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GSC);
+ if (!fw_ref.domains)
+ xe_gt_err(gt, "failed to get forcewake to disable GSC interrupts\n");
- /* try do disable irq even if forcewake failed */
- gsc_proxy_irq_toggle(gsc, false);
-
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
- xe_pm_runtime_put(xe);
+ /* try to disable irq even if forcewake failed */
+ gsc_proxy_irq_toggle(gsc, false);
+ }
xe_gsc_wait_for_worker_completion(gsc);
--
2.51.1
* Re: [PATCH v2 13/30] drm/xe/gsc: Use scope-based cleanup
2025-11-10 23:20 ` [PATCH v2 13/30] drm/xe/gsc: " Matt Roper
@ 2025-11-13 13:46 ` Gustavo Sousa
0 siblings, 0 replies; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-13 13:46 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper
Quoting Matt Roper (2025-11-10 20:20:31-03:00)
>Use scope-based cleanup for forcewake and runtime PM to eliminate some
>goto-based error handling and simplify other functions.
>
>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
>---
> drivers/gpu/drm/xe/xe_gsc.c | 21 ++++++---------------
> drivers/gpu/drm/xe/xe_gsc_proxy.c | 17 +++++++----------
> 2 files changed, 13 insertions(+), 25 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/xe_gsc.c b/drivers/gpu/drm/xe/xe_gsc.c
>index dd69cb834f8e..a3157b0fe791 100644
>--- a/drivers/gpu/drm/xe/xe_gsc.c
>+++ b/drivers/gpu/drm/xe/xe_gsc.c
>@@ -352,7 +352,6 @@ static void gsc_work(struct work_struct *work)
> struct xe_gsc *gsc = container_of(work, typeof(*gsc), work);
> struct xe_gt *gt = gsc_to_gt(gsc);
> struct xe_device *xe = gt_to_xe(gt);
>- unsigned int fw_ref;
> u32 actions;
> int ret;
>
>@@ -361,13 +360,12 @@ static void gsc_work(struct work_struct *work)
> gsc->work_actions = 0;
> spin_unlock_irq(&gsc->lock);
>
>- xe_pm_runtime_get(xe);
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GSC);
>+ guard(xe_pm_runtime)(xe);
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GSC);
>
> if (actions & GSC_ACTION_ER_COMPLETE) {
>- ret = gsc_er_complete(gt);
>- if (ret)
>- goto out;
>+ if (gsc_er_complete(gt))
>+ return;
> }
>
> if (actions & GSC_ACTION_FW_LOAD) {
>@@ -380,10 +378,6 @@ static void gsc_work(struct work_struct *work)
>
> if (actions & GSC_ACTION_SW_PROXY)
> xe_gsc_proxy_request_handler(gsc);
>-
>-out:
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
>- xe_pm_runtime_put(xe);
> }
>
> void xe_gsc_hwe_irq_handler(struct xe_hw_engine *hwe, u16 intr_vec)
>@@ -615,7 +609,6 @@ void xe_gsc_print_info(struct xe_gsc *gsc, struct drm_printer *p)
> {
> struct xe_gt *gt = gsc_to_gt(gsc);
> struct xe_mmio *mmio = &gt->mmio;
>- unsigned int fw_ref;
>
> xe_uc_fw_print(&gsc->fw, p);
>
>@@ -624,8 +617,8 @@ void xe_gsc_print_info(struct xe_gsc *gsc, struct drm_printer *p)
> if (!xe_uc_fw_is_enabled(&gsc->fw))
> return;
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GSC);
>- if (!fw_ref)
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GSC);
>+ if (!fw_ref.domains)
> return;
>
> drm_printf(p, "\nHECI1 FWSTS: 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x\n",
>@@ -635,6 +628,4 @@ void xe_gsc_print_info(struct xe_gsc *gsc, struct drm_printer *p)
> xe_mmio_read32(mmio, HECI_FWSTS4(MTL_GSC_HECI1_BASE)),
> xe_mmio_read32(mmio, HECI_FWSTS5(MTL_GSC_HECI1_BASE)),
> xe_mmio_read32(mmio, HECI_FWSTS6(MTL_GSC_HECI1_BASE)));
>-
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> }
>diff --git a/drivers/gpu/drm/xe/xe_gsc_proxy.c b/drivers/gpu/drm/xe/xe_gsc_proxy.c
>index 464282a89eef..e7573a0c5e5d 100644
>--- a/drivers/gpu/drm/xe/xe_gsc_proxy.c
>+++ b/drivers/gpu/drm/xe/xe_gsc_proxy.c
>@@ -440,22 +440,19 @@ static void xe_gsc_proxy_remove(void *arg)
> struct xe_gsc *gsc = arg;
> struct xe_gt *gt = gsc_to_gt(gsc);
> struct xe_device *xe = gt_to_xe(gt);
>- unsigned int fw_ref = 0;
>
> if (!gsc->proxy.component_added)
> return;
>
> /* disable HECI2 IRQs */
>- xe_pm_runtime_get(xe);
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GSC);
>- if (!fw_ref)
>- xe_gt_err(gt, "failed to get forcewake to disable GSC interrupts\n");
>+ scoped_guard(xe_pm_runtime, xe) {
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GSC);
>+ if (!fw_ref.domains)
>+ xe_gt_err(gt, "failed to get forcewake to disable GSC interrupts\n");
>
>- /* try do disable irq even if forcewake failed */
>- gsc_proxy_irq_toggle(gsc, false);
>-
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
>- xe_pm_runtime_put(xe);
>+ /* try to disable irq even if forcewake failed */
>+ gsc_proxy_irq_toggle(gsc, false);
>+ }
>
> xe_gsc_wait_for_worker_completion(gsc);
>
>--
>2.51.1
>
* [PATCH v2 14/30] drm/xe/device: Use scope-based cleanup
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (12 preceding siblings ...)
2025-11-10 23:20 ` [PATCH v2 13/30] drm/xe/gsc: " Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-13 14:04 ` Gustavo Sousa
2025-11-10 23:20 ` [PATCH v2 15/30] drm/xe/devcoredump: " Matt Roper
` (20 subsequent siblings)
34 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Convert device code to use scope-based forcewake and runtime PM.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_device.c | 33 +++++++++++----------------------
1 file changed, 11 insertions(+), 22 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index c7d373c70f0f..1197f914ef77 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -166,7 +166,7 @@ static void xe_file_close(struct drm_device *dev, struct drm_file *file)
struct xe_exec_queue *q;
unsigned long idx;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
/*
* No need for exec_queue.lock here as there is no contention for it
@@ -184,8 +184,6 @@ static void xe_file_close(struct drm_device *dev, struct drm_file *file)
xe_vm_close_and_put(vm);
xe_file_put(xef);
-
- xe_pm_runtime_put(xe);
}
static const struct drm_ioctl_desc xe_ioctls[] = {
@@ -220,10 +218,10 @@ static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
if (xe_device_wedged(xe))
return -ECANCELED;
- ret = xe_pm_runtime_get_ioctl(xe);
+ ACQUIRE(xe_pm_runtime_ioctl, pm)(xe);
+ ret = ACQUIRE_ERR(xe_pm_runtime_ioctl, &pm);
if (ret >= 0)
ret = drm_ioctl(file, cmd, arg);
- xe_pm_runtime_put(xe);
return ret;
}
@@ -238,10 +236,10 @@ static long xe_drm_compat_ioctl(struct file *file, unsigned int cmd, unsigned lo
if (xe_device_wedged(xe))
return -ECANCELED;
- ret = xe_pm_runtime_get_ioctl(xe);
+ ACQUIRE(xe_pm_runtime_ioctl, pm)(xe);
+ ret = ACQUIRE_ERR(xe_pm_runtime_ioctl, &pm);
if (ret >= 0)
ret = drm_compat_ioctl(file, cmd, arg);
- xe_pm_runtime_put(xe);
return ret;
}
@@ -775,7 +773,6 @@ ALLOW_ERROR_INJECTION(xe_device_probe_early, ERRNO); /* See xe_pci_probe() */
static int probe_has_flat_ccs(struct xe_device *xe)
{
struct xe_gt *gt;
- unsigned int fw_ref;
u32 reg;
/* Always enabled/disabled, no runtime check to do */
@@ -786,8 +783,8 @@ static int probe_has_flat_ccs(struct xe_device *xe)
if (!gt)
return 0;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
+ if (!fw_ref.domains)
return -ETIMEDOUT;
reg = xe_gt_mcr_unicast_read_any(gt, XE2_FLAT_CCS_BASE_RANGE_LOWER);
@@ -797,8 +794,6 @@ static int probe_has_flat_ccs(struct xe_device *xe)
drm_dbg(&xe->drm,
"Flat CCS has been disabled in bios, May lead to performance impact");
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
-
return 0;
}
@@ -1034,7 +1029,6 @@ void xe_device_wmb(struct xe_device *xe)
*/
static void tdf_request_sync(struct xe_device *xe)
{
- unsigned int fw_ref;
struct xe_gt *gt;
u8 id;
@@ -1042,8 +1036,8 @@ static void tdf_request_sync(struct xe_device *xe)
if (xe_gt_is_media_type(gt))
continue;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
+ if (!fw_ref.domains)
return;
xe_mmio_write32(&gt->mmio, XE2_TDF_CTRL, TRANSIENT_FLUSH_REQUEST);
@@ -1058,15 +1052,12 @@ static void tdf_request_sync(struct xe_device *xe)
if (xe_mmio_wait32(&gt->mmio, XE2_TDF_CTRL, TRANSIENT_FLUSH_REQUEST, 0,
150, NULL, false))
xe_gt_err_once(gt, "TD flush timeout\n");
-
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
}
}
void xe_device_l2_flush(struct xe_device *xe)
{
struct xe_gt *gt;
- unsigned int fw_ref;
gt = xe_root_mmio_gt(xe);
if (!gt)
@@ -1075,8 +1066,8 @@ void xe_device_l2_flush(struct xe_device *xe)
if (!XE_GT_WA(gt, 16023588340))
return;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
+ if (!fw_ref.domains)
return;
spin_lock(&gt->global_invl_lock);
@@ -1086,8 +1077,6 @@ void xe_device_l2_flush(struct xe_device *xe)
xe_gt_err_once(gt, "Global invalidation timeout\n");
spin_unlock(&gt->global_invl_lock);
-
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
}
/**
--
2.51.1
^ permalink raw reply related [flat|nested] 74+ messages in thread
* Re: [PATCH v2 14/30] drm/xe/device: Use scope-based cleanup
2025-11-10 23:20 ` [PATCH v2 14/30] drm/xe/device: " Matt Roper
@ 2025-11-13 14:04 ` Gustavo Sousa
0 siblings, 0 replies; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-13 14:04 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper
Quoting Matt Roper (2025-11-10 20:20:32-03:00)
>Convert device code to use scope-based forcewake and runtime PM.
>
>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
>---
> drivers/gpu/drm/xe/xe_device.c | 33 +++++++++++----------------------
> 1 file changed, 11 insertions(+), 22 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
>index c7d373c70f0f..1197f914ef77 100644
>--- a/drivers/gpu/drm/xe/xe_device.c
>+++ b/drivers/gpu/drm/xe/xe_device.c
>@@ -166,7 +166,7 @@ static void xe_file_close(struct drm_device *dev, struct drm_file *file)
> struct xe_exec_queue *q;
> unsigned long idx;
>
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
>
> /*
> * No need for exec_queue.lock here as there is no contention for it
>@@ -184,8 +184,6 @@ static void xe_file_close(struct drm_device *dev, struct drm_file *file)
> xe_vm_close_and_put(vm);
>
> xe_file_put(xef);
>-
>- xe_pm_runtime_put(xe);
> }
>
> static const struct drm_ioctl_desc xe_ioctls[] = {
>@@ -220,10 +218,10 @@ static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
> if (xe_device_wedged(xe))
> return -ECANCELED;
>
>- ret = xe_pm_runtime_get_ioctl(xe);
>+ ACQUIRE(xe_pm_runtime_ioctl, pm)(xe);
>+ ret = ACQUIRE_ERR(xe_pm_runtime_ioctl, &pm);
> if (ret >= 0)
> ret = drm_ioctl(file, cmd, arg);
>- xe_pm_runtime_put(xe);
>
> return ret;
> }
>@@ -238,10 +236,10 @@ static long xe_drm_compat_ioctl(struct file *file, unsigned int cmd, unsigned lo
> if (xe_device_wedged(xe))
> return -ECANCELED;
>
>- ret = xe_pm_runtime_get_ioctl(xe);
>+ ACQUIRE(xe_pm_runtime_ioctl, pm)(xe);
>+ ret = ACQUIRE_ERR(xe_pm_runtime_ioctl, &pm);
> if (ret >= 0)
> ret = drm_compat_ioctl(file, cmd, arg);
>- xe_pm_runtime_put(xe);
>
> return ret;
> }
>@@ -775,7 +773,6 @@ ALLOW_ERROR_INJECTION(xe_device_probe_early, ERRNO); /* See xe_pci_probe() */
> static int probe_has_flat_ccs(struct xe_device *xe)
> {
> struct xe_gt *gt;
>- unsigned int fw_ref;
> u32 reg;
>
> /* Always enabled/disabled, no runtime check to do */
>@@ -786,8 +783,8 @@ static int probe_has_flat_ccs(struct xe_device *xe)
> if (!gt)
> return 0;
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>- if (!fw_ref)
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
>+ if (!fw_ref.domains)
> return -ETIMEDOUT;
>
> reg = xe_gt_mcr_unicast_read_any(gt, XE2_FLAT_CCS_BASE_RANGE_LOWER);
>@@ -797,8 +794,6 @@ static int probe_has_flat_ccs(struct xe_device *xe)
> drm_dbg(&xe->drm,
> "Flat CCS has been disabled in bios, May lead to performance impact");
>
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
>-
> return 0;
> }
>
>@@ -1034,7 +1029,6 @@ void xe_device_wmb(struct xe_device *xe)
> */
> static void tdf_request_sync(struct xe_device *xe)
> {
>- unsigned int fw_ref;
> struct xe_gt *gt;
> u8 id;
>
>@@ -1042,8 +1036,8 @@ static void tdf_request_sync(struct xe_device *xe)
> if (xe_gt_is_media_type(gt))
> continue;
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>- if (!fw_ref)
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
>+ if (!fw_ref.domains)
> return;
>
> xe_mmio_write32(&gt->mmio, XE2_TDF_CTRL, TRANSIENT_FLUSH_REQUEST);
>@@ -1058,15 +1052,12 @@ static void tdf_request_sync(struct xe_device *xe)
> if (xe_mmio_wait32(&gt->mmio, XE2_TDF_CTRL, TRANSIENT_FLUSH_REQUEST, 0,
> 150, NULL, false))
> xe_gt_err_once(gt, "TD flush timeout\n");
>-
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> }
> }
>
> void xe_device_l2_flush(struct xe_device *xe)
> {
> struct xe_gt *gt;
>- unsigned int fw_ref;
>
> gt = xe_root_mmio_gt(xe);
> if (!gt)
>@@ -1075,8 +1066,8 @@ void xe_device_l2_flush(struct xe_device *xe)
> if (!XE_GT_WA(gt, 16023588340))
> return;
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>- if (!fw_ref)
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
>+ if (!fw_ref.domains)
> return;
>
> spin_lock(&gt->global_invl_lock);
>@@ -1086,8 +1077,6 @@ void xe_device_l2_flush(struct xe_device *xe)
> xe_gt_err_once(gt, "Global invalidation timeout\n");
>
> spin_unlock(&gt->global_invl_lock);
>-
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> }
>
> /**
>--
>2.51.1
>
^ permalink raw reply [flat|nested] 74+ messages in thread
* [PATCH v2 15/30] drm/xe/devcoredump: Use scope-based cleanup
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (13 preceding siblings ...)
2025-11-10 23:20 ` [PATCH v2 14/30] drm/xe/device: " Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-13 14:14 ` Gustavo Sousa
2025-11-10 23:20 ` [PATCH v2 16/30] drm/xe/display: Use scoped-cleanup Matt Roper
` (19 subsequent siblings)
34 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based cleanup for forcewake and runtime PM in the devcoredump
code. This eliminates some goto-based error handling and slightly
simplifies other functions.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_devcoredump.c | 26 ++++++++++----------------
1 file changed, 10 insertions(+), 16 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_devcoredump.c b/drivers/gpu/drm/xe/xe_devcoredump.c
index 203e3038cc81..599c886c865b 100644
--- a/drivers/gpu/drm/xe/xe_devcoredump.c
+++ b/drivers/gpu/drm/xe/xe_devcoredump.c
@@ -276,7 +276,6 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
struct xe_devcoredump_snapshot *ss = container_of(work, typeof(*ss), work);
struct xe_devcoredump *coredump = container_of(ss, typeof(*coredump), snapshot);
struct xe_device *xe = coredump_to_xe(coredump);
- unsigned int fw_ref;
/*
* NB: Despite passing a GFP_ flags parameter here, more allocations are done
@@ -287,15 +286,15 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
xe_devcoredump_read, xe_devcoredump_free,
XE_COREDUMP_TIMEOUT_JIFFIES);
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
/* keep going if fw fails as we still want to save the memory and SW data */
- fw_ref = xe_force_wake_get(gt_to_fw(ss->gt), XE_FORCEWAKE_ALL);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
- xe_gt_info(ss->gt, "failed to get forcewake for coredump capture\n");
- xe_vm_snapshot_capture_delayed(ss->vm);
- xe_guc_exec_queue_snapshot_capture_delayed(ss->ge);
- xe_force_wake_put(gt_to_fw(ss->gt), fw_ref);
+ xe_with_force_wake(fw_ref, gt_to_fw(ss->gt), XE_FORCEWAKE_ALL) {
+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL))
+ xe_gt_info(ss->gt, "failed to get forcewake for coredump capture\n");
+ xe_vm_snapshot_capture_delayed(ss->vm);
+ xe_guc_exec_queue_snapshot_capture_delayed(ss->ge);
+ }
ss->read.chunk_position = 0;
@@ -306,7 +305,7 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
ss->read.buffer = kvmalloc(XE_DEVCOREDUMP_CHUNK_MAX,
GFP_USER);
if (!ss->read.buffer)
- goto put_pm;
+ return;
__xe_devcoredump_read(ss->read.buffer,
XE_DEVCOREDUMP_CHUNK_MAX,
@@ -314,15 +313,12 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
} else {
ss->read.buffer = kvmalloc(ss->read.size, GFP_USER);
if (!ss->read.buffer)
- goto put_pm;
+ return;
__xe_devcoredump_read(ss->read.buffer, ss->read.size, 0,
coredump);
xe_devcoredump_snapshot_free(ss);
}
-
-put_pm:
- xe_pm_runtime_put(xe);
}
static void devcoredump_snapshot(struct xe_devcoredump *coredump,
@@ -332,7 +328,6 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
struct xe_devcoredump_snapshot *ss = &coredump->snapshot;
struct xe_guc *guc = exec_queue_to_guc(q);
const char *process_name = "no process";
- unsigned int fw_ref;
bool cookie;
ss->snapshot_time = ktime_get_real();
@@ -351,7 +346,7 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
cookie = dma_fence_begin_signalling();
/* keep going if fw fails as we still want to save the memory and SW data */
- fw_ref = xe_force_wake_get(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
ss->guc.log = xe_guc_log_snapshot_capture(&guc->log, true);
ss->guc.ct = xe_guc_ct_snapshot_capture(&guc->ct);
@@ -364,7 +359,6 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
queue_work(system_unbound_wq, &ss->work);
- xe_force_wake_put(gt_to_fw(q->gt), fw_ref);
dma_fence_end_signalling(cookie);
}
--
2.51.1
^ permalink raw reply related [flat|nested] 74+ messages in thread
* Re: [PATCH v2 15/30] drm/xe/devcoredump: Use scope-based cleanup
2025-11-10 23:20 ` [PATCH v2 15/30] drm/xe/devcoredump: " Matt Roper
@ 2025-11-13 14:14 ` Gustavo Sousa
0 siblings, 0 replies; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-13 14:14 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper
Quoting Matt Roper (2025-11-10 20:20:33-03:00)
>Use scope-based cleanup for forcewake and runtime PM in the devcoredump
>code. This eliminates some goto-based error handling and slightly
>simplifies other functions.
>
>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
>---
> drivers/gpu/drm/xe/xe_devcoredump.c | 26 ++++++++++----------------
> 1 file changed, 10 insertions(+), 16 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/xe_devcoredump.c b/drivers/gpu/drm/xe/xe_devcoredump.c
>index 203e3038cc81..599c886c865b 100644
>--- a/drivers/gpu/drm/xe/xe_devcoredump.c
>+++ b/drivers/gpu/drm/xe/xe_devcoredump.c
>@@ -276,7 +276,6 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
> struct xe_devcoredump_snapshot *ss = container_of(work, typeof(*ss), work);
> struct xe_devcoredump *coredump = container_of(ss, typeof(*coredump), snapshot);
> struct xe_device *xe = coredump_to_xe(coredump);
>- unsigned int fw_ref;
>
> /*
> * NB: Despite passing a GFP_ flags parameter here, more allocations are done
>@@ -287,15 +286,15 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
> xe_devcoredump_read, xe_devcoredump_free,
> XE_COREDUMP_TIMEOUT_JIFFIES);
>
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
>
> /* keep going if fw fails as we still want to save the memory and SW data */
>- fw_ref = xe_force_wake_get(gt_to_fw(ss->gt), XE_FORCEWAKE_ALL);
>- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
>- xe_gt_info(ss->gt, "failed to get forcewake for coredump capture\n");
>- xe_vm_snapshot_capture_delayed(ss->vm);
>- xe_guc_exec_queue_snapshot_capture_delayed(ss->ge);
>- xe_force_wake_put(gt_to_fw(ss->gt), fw_ref);
>+ xe_with_force_wake(fw_ref, gt_to_fw(ss->gt), XE_FORCEWAKE_ALL) {
>+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL))
>+ xe_gt_info(ss->gt, "failed to get forcewake for coredump capture\n");
>+ xe_vm_snapshot_capture_delayed(ss->vm);
>+ xe_guc_exec_queue_snapshot_capture_delayed(ss->ge);
>+ }
>
> ss->read.chunk_position = 0;
>
>@@ -306,7 +305,7 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
> ss->read.buffer = kvmalloc(XE_DEVCOREDUMP_CHUNK_MAX,
> GFP_USER);
> if (!ss->read.buffer)
>- goto put_pm;
>+ return;
>
> __xe_devcoredump_read(ss->read.buffer,
> XE_DEVCOREDUMP_CHUNK_MAX,
>@@ -314,15 +313,12 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
> } else {
> ss->read.buffer = kvmalloc(ss->read.size, GFP_USER);
> if (!ss->read.buffer)
>- goto put_pm;
>+ return;
>
> __xe_devcoredump_read(ss->read.buffer, ss->read.size, 0,
> coredump);
> xe_devcoredump_snapshot_free(ss);
> }
>-
>-put_pm:
>- xe_pm_runtime_put(xe);
> }
>
> static void devcoredump_snapshot(struct xe_devcoredump *coredump,
>@@ -332,7 +328,6 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
> struct xe_devcoredump_snapshot *ss = &coredump->snapshot;
> struct xe_guc *guc = exec_queue_to_guc(q);
> const char *process_name = "no process";
>- unsigned int fw_ref;
> bool cookie;
>
> ss->snapshot_time = ktime_get_real();
>@@ -351,7 +346,7 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
> cookie = dma_fence_begin_signalling();
>
> /* keep going if fw fails as we still want to save the memory and SW data */
>- fw_ref = xe_force_wake_get(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
Should we move this to happen before dma_fence_begin_signalling(), just
so we keep the LIFO cleanup order at the end of the function?
--
Gustavo Sousa
>
> ss->guc.log = xe_guc_log_snapshot_capture(&guc->log, true);
> ss->guc.ct = xe_guc_ct_snapshot_capture(&guc->ct);
>@@ -364,7 +359,6 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
>
> queue_work(system_unbound_wq, &ss->work);
>
>- xe_force_wake_put(gt_to_fw(q->gt), fw_ref);
> dma_fence_end_signalling(cookie);
> }
>
>--
>2.51.1
>
^ permalink raw reply [flat|nested] 74+ messages in thread
* [PATCH v2 16/30] drm/xe/display: Use scoped-cleanup
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (14 preceding siblings ...)
2025-11-10 23:20 ` [PATCH v2 15/30] drm/xe/devcoredump: " Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-13 14:25 ` Gustavo Sousa
2025-11-10 23:20 ` [PATCH v2 17/30] drm/xe: Create scoped cleanup class for force_wake_get_any_engine() Matt Roper
` (18 subsequent siblings)
34 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Eliminate some goto-based cleanup by utilizing scoped cleanup helpers.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/display/xe_fb_pin.c | 23 +++++++++-------------
drivers/gpu/drm/xe/display/xe_hdcp_gsc.c | 25 ++++++++----------------
2 files changed, 17 insertions(+), 31 deletions(-)
diff --git a/drivers/gpu/drm/xe/display/xe_fb_pin.c b/drivers/gpu/drm/xe/display/xe_fb_pin.c
index 1fd4a815e784..6a935a75f2a4 100644
--- a/drivers/gpu/drm/xe/display/xe_fb_pin.c
+++ b/drivers/gpu/drm/xe/display/xe_fb_pin.c
@@ -210,10 +210,11 @@ static int __xe_pin_fb_vma_ggtt(const struct intel_framebuffer *fb,
/* TODO: Consider sharing framebuffer mapping?
* embed i915_vma inside intel_framebuffer
*/
- xe_pm_runtime_get_noresume(xe);
- ret = mutex_lock_interruptible(&ggtt->lock);
+ guard(xe_pm_runtime_noresume)(xe);
+ ACQUIRE(mutex_intr, lock)(&ggtt->lock);
+ ret = ACQUIRE_ERR(mutex_intr, &lock);
if (ret)
- goto out;
+ return ret;
align = XE_PAGE_SIZE;
if (xe_bo_is_vram(bo) && ggtt->flags & XE_GGTT_FLAGS_64K)
@@ -223,15 +224,13 @@ static int __xe_pin_fb_vma_ggtt(const struct intel_framebuffer *fb,
vma->node = bo->ggtt_node[tile0->id];
} else if (view->type == I915_GTT_VIEW_NORMAL) {
vma->node = xe_ggtt_node_init(ggtt);
- if (IS_ERR(vma->node)) {
- ret = PTR_ERR(vma->node);
- goto out_unlock;
- }
+ if (IS_ERR(vma->node))
+ return PTR_ERR(vma->node);
ret = xe_ggtt_node_insert_locked(vma->node, xe_bo_size(bo), align, 0);
if (ret) {
xe_ggtt_node_fini(vma->node);
- goto out_unlock;
+ return ret;
}
xe_ggtt_map_bo(ggtt, vma->node, bo, xe->pat.idx[XE_CACHE_NONE]);
@@ -245,13 +244,13 @@ static int __xe_pin_fb_vma_ggtt(const struct intel_framebuffer *fb,
vma->node = xe_ggtt_node_init(ggtt);
if (IS_ERR(vma->node)) {
ret = PTR_ERR(vma->node);
- goto out_unlock;
+ return ret;
}
ret = xe_ggtt_node_insert_locked(vma->node, size, align, 0);
if (ret) {
xe_ggtt_node_fini(vma->node);
- goto out_unlock;
+ return ret;
}
ggtt_ofs = vma->node->base.start;
@@ -265,10 +264,6 @@ static int __xe_pin_fb_vma_ggtt(const struct intel_framebuffer *fb,
rot_info->plane[i].dst_stride);
}
-out_unlock:
- mutex_unlock(&ggtt->lock);
-out:
- xe_pm_runtime_put(xe);
return ret;
}
diff --git a/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c b/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
index 4ae847b628e2..084baddb160e 100644
--- a/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
+++ b/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
@@ -37,7 +37,6 @@ bool intel_hdcp_gsc_check_status(struct drm_device *drm)
struct xe_gt *gt = tile->media_gt;
struct xe_gsc *gsc = &gt->uc.gsc;
bool ret = true;
- unsigned int fw_ref;
if (!gsc || !xe_uc_fw_is_enabled(&gsc->fw)) {
drm_dbg_kms(&xe->drm,
@@ -45,21 +44,17 @@ bool intel_hdcp_gsc_check_status(struct drm_device *drm)
return false;
}
- xe_pm_runtime_get(xe);
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GSC);
- if (!fw_ref) {
+ guard(xe_pm_runtime)(xe);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GSC);
+ if (!fw_ref.domains) {
drm_dbg_kms(&xe->drm,
"failed to get forcewake to check proxy status\n");
- ret = false;
- goto out;
+ return false;
}
if (!xe_gsc_proxy_init_done(gsc))
ret = false;
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
-out:
- xe_pm_runtime_put(xe);
return ret;
}
@@ -166,17 +161,15 @@ ssize_t intel_hdcp_gsc_msg_send(struct intel_hdcp_gsc_context *gsc_context,
u32 addr_out_off, addr_in_wr_off = 0;
int ret, tries = 0;
- if (msg_in_len > max_msg_size || msg_out_len > max_msg_size) {
- ret = -ENOSPC;
- goto out;
- }
+ if (msg_in_len > max_msg_size || msg_out_len > max_msg_size)
+ return -ENOSPC;
msg_size_in = msg_in_len + HDCP_GSC_HEADER_SIZE;
msg_size_out = msg_out_len + HDCP_GSC_HEADER_SIZE;
addr_out_off = PAGE_SIZE;
host_session_id = xe_gsc_create_host_session_id();
- xe_pm_runtime_get_noresume(xe);
+ guard(xe_pm_runtime_noresume)(xe);
addr_in_wr_off = xe_gsc_emit_header(xe, &gsc_context->hdcp_bo->vmap,
addr_in_wr_off, HECI_MEADDRESS_HDCP,
host_session_id, msg_in_len);
@@ -201,13 +194,11 @@ ssize_t intel_hdcp_gsc_msg_send(struct intel_hdcp_gsc_context *gsc_context,
} while (++tries < 20);
if (ret)
- goto out;
+ return ret;
xe_map_memcpy_from(xe, msg_out, &gsc_context->hdcp_bo->vmap,
addr_out_off + HDCP_GSC_HEADER_SIZE,
msg_out_len);
-out:
- xe_pm_runtime_put(xe);
return ret;
}
--
2.51.1
^ permalink raw reply related [flat|nested] 74+ messages in thread
* Re: [PATCH v2 16/30] drm/xe/display: Use scoped-cleanup
2025-11-10 23:20 ` [PATCH v2 16/30] drm/xe/display: Use scoped-cleanup Matt Roper
@ 2025-11-13 14:25 ` Gustavo Sousa
0 siblings, 0 replies; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-13 14:25 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper
Quoting Matt Roper (2025-11-10 20:20:34-03:00)
>Eliminate some goto-based cleanup by utilizing scoped cleanup helpers.
>
>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
>---
> drivers/gpu/drm/xe/display/xe_fb_pin.c | 23 +++++++++-------------
> drivers/gpu/drm/xe/display/xe_hdcp_gsc.c | 25 ++++++++----------------
> 2 files changed, 17 insertions(+), 31 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/display/xe_fb_pin.c b/drivers/gpu/drm/xe/display/xe_fb_pin.c
>index 1fd4a815e784..6a935a75f2a4 100644
>--- a/drivers/gpu/drm/xe/display/xe_fb_pin.c
>+++ b/drivers/gpu/drm/xe/display/xe_fb_pin.c
>@@ -210,10 +210,11 @@ static int __xe_pin_fb_vma_ggtt(const struct intel_framebuffer *fb,
> /* TODO: Consider sharing framebuffer mapping?
> * embed i915_vma inside intel_framebuffer
> */
>- xe_pm_runtime_get_noresume(xe);
>- ret = mutex_lock_interruptible(&ggtt->lock);
>+ guard(xe_pm_runtime_noresume)(xe);
>+ ACQUIRE(mutex_intr, lock)(&ggtt->lock);
>+ ret = ACQUIRE_ERR(mutex_intr, &lock);
> if (ret)
>- goto out;
>+ return ret;
>
> align = XE_PAGE_SIZE;
> if (xe_bo_is_vram(bo) && ggtt->flags & XE_GGTT_FLAGS_64K)
>@@ -223,15 +224,13 @@ static int __xe_pin_fb_vma_ggtt(const struct intel_framebuffer *fb,
> vma->node = bo->ggtt_node[tile0->id];
> } else if (view->type == I915_GTT_VIEW_NORMAL) {
> vma->node = xe_ggtt_node_init(ggtt);
>- if (IS_ERR(vma->node)) {
>- ret = PTR_ERR(vma->node);
>- goto out_unlock;
>- }
>+ if (IS_ERR(vma->node))
>+ return PTR_ERR(vma->node);
>
> ret = xe_ggtt_node_insert_locked(vma->node, xe_bo_size(bo), align, 0);
> if (ret) {
> xe_ggtt_node_fini(vma->node);
>- goto out_unlock;
>+ return ret;
> }
>
> xe_ggtt_map_bo(ggtt, vma->node, bo, xe->pat.idx[XE_CACHE_NONE]);
>@@ -245,13 +244,13 @@ static int __xe_pin_fb_vma_ggtt(const struct intel_framebuffer *fb,
> vma->node = xe_ggtt_node_init(ggtt);
> if (IS_ERR(vma->node)) {
> ret = PTR_ERR(vma->node);
>- goto out_unlock;
>+ return ret;
> }
>
> ret = xe_ggtt_node_insert_locked(vma->node, size, align, 0);
> if (ret) {
> xe_ggtt_node_fini(vma->node);
>- goto out_unlock;
>+ return ret;
> }
>
> ggtt_ofs = vma->node->base.start;
>@@ -265,10 +264,6 @@ static int __xe_pin_fb_vma_ggtt(const struct intel_framebuffer *fb,
> rot_info->plane[i].dst_stride);
> }
>
>-out_unlock:
>- mutex_unlock(&ggtt->lock);
>-out:
>- xe_pm_runtime_put(xe);
> return ret;
> }
>
>diff --git a/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c b/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
>index 4ae847b628e2..084baddb160e 100644
>--- a/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
>+++ b/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
>@@ -37,7 +37,6 @@ bool intel_hdcp_gsc_check_status(struct drm_device *drm)
> struct xe_gt *gt = tile->media_gt;
> struct xe_gsc *gsc = &gt->uc.gsc;
> bool ret = true;
>- unsigned int fw_ref;
>
> if (!gsc || !xe_uc_fw_is_enabled(&gsc->fw)) {
> drm_dbg_kms(&xe->drm,
>@@ -45,21 +44,17 @@ bool intel_hdcp_gsc_check_status(struct drm_device *drm)
> return false;
> }
>
>- xe_pm_runtime_get(xe);
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GSC);
>- if (!fw_ref) {
>+ guard(xe_pm_runtime)(xe);
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GSC);
>+ if (!fw_ref.domains) {
> drm_dbg_kms(&xe->drm,
> "failed to get forcewake to check proxy status\n");
>- ret = false;
>- goto out;
>+ return false;
> }
>
> if (!xe_gsc_proxy_init_done(gsc))
> ret = false;
We don't need ret anymore, right?
I think we can just return xe_gsc_proxy_init_done(gsc).
>
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
>-out:
>- xe_pm_runtime_put(xe);
> return ret;
> }
>
>@@ -166,17 +161,15 @@ ssize_t intel_hdcp_gsc_msg_send(struct intel_hdcp_gsc_context *gsc_context,
> u32 addr_out_off, addr_in_wr_off = 0;
> int ret, tries = 0;
>
>- if (msg_in_len > max_msg_size || msg_out_len > max_msg_size) {
>- ret = -ENOSPC;
>- goto out;
>- }
>+ if (msg_in_len > max_msg_size || msg_out_len > max_msg_size)
>+ return -ENOSPC;
Huh.. Did we just fix a bug in this function?
It appears this function was putting without getting when the "if"
condition evaluates to true.
--
Gustavo Sousa
>
> msg_size_in = msg_in_len + HDCP_GSC_HEADER_SIZE;
> msg_size_out = msg_out_len + HDCP_GSC_HEADER_SIZE;
> addr_out_off = PAGE_SIZE;
>
> host_session_id = xe_gsc_create_host_session_id();
>- xe_pm_runtime_get_noresume(xe);
>+ guard(xe_pm_runtime_noresume)(xe);
> addr_in_wr_off = xe_gsc_emit_header(xe, &gsc_context->hdcp_bo->vmap,
> addr_in_wr_off, HECI_MEADDRESS_HDCP,
> host_session_id, msg_in_len);
>@@ -201,13 +194,11 @@ ssize_t intel_hdcp_gsc_msg_send(struct intel_hdcp_gsc_context *gsc_context,
> } while (++tries < 20);
>
> if (ret)
>- goto out;
>+ return ret;
>
> xe_map_memcpy_from(xe, msg_out, &gsc_context->hdcp_bo->vmap,
> addr_out_off + HDCP_GSC_HEADER_SIZE,
> msg_out_len);
>
>-out:
>- xe_pm_runtime_put(xe);
> return ret;
> }
>--
>2.51.1
>
^ permalink raw reply [flat|nested] 74+ messages in thread
* [PATCH v2 17/30] drm/xe: Create scoped cleanup class for force_wake_get_any_engine()
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (15 preceding siblings ...)
2025-11-10 23:20 ` [PATCH v2 16/30] drm/xe/display: Use scoped-cleanup Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-13 17:39 ` Gustavo Sousa
2025-11-10 23:20 ` [PATCH v2 18/30] drm/xe/drm_client: Use scope-based cleanup Matt Roper
` (17 subsequent siblings)
34 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
force_wake_get_any_engine() is a single-use function to pick any engine
present on the platform and grab its forcewake. The signature
(returning a boolean success and both the engine pointer and a forcewake
ref by reference) is a bit awkward. Rewrite it such that the
forcewake ref is the function's return value and the caller can
determine success/failure by checking the engine pointer against NULL.
With this new signature, the function can serve as a scoped cleanup
class constructor, so define the corresponding class. Note that if we
fail to obtain forcewake (or if the platform somehow has no engines),
the constructor can fail, returning an invalid fw_ref. In such cases,
fw_ref.fw will be NULL, making it clear that the reference is invalid;
this fact can be used to create a thin wrapper around xe_force_wake_put
that can be used as a destructor for this class.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_drm_client.c | 52 +++++++++++++++++++-----------
1 file changed, 34 insertions(+), 18 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_drm_client.c b/drivers/gpu/drm/xe/xe_drm_client.c
index f931ff9b1ec0..9deb258ba204 100644
--- a/drivers/gpu/drm/xe/xe_drm_client.c
+++ b/drivers/gpu/drm/xe/xe_drm_client.c
@@ -6,6 +6,7 @@
#include <drm/drm_print.h>
#include <uapi/drm/xe_drm.h>
+#include <linux/cleanup.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/types.h>
@@ -285,34 +286,48 @@ static struct xe_hw_engine *any_engine(struct xe_device *xe)
return NULL;
}
-static bool force_wake_get_any_engine(struct xe_device *xe,
- struct xe_hw_engine **phwe,
- unsigned int *pfw_ref)
+/*
+ * Pick any engine and grab its forcewake. On error phwe will be NULL and
+ * the returned forcewake reference will be invalid. Callers should check
+ * phwe against NULL.
+ */
+static struct xe_force_wake_ref force_wake_get_any_engine(struct xe_device *xe,
+ struct xe_hw_engine **phwe)
{
enum xe_force_wake_domains domain;
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref = {};
struct xe_hw_engine *hwe;
- struct xe_force_wake *fw;
+
+ *phwe = NULL;
hwe = any_engine(xe);
if (!hwe)
- return false;
+ return fw_ref; /* will be invalid */
domain = xe_hw_engine_to_fw_domain(hwe);
- fw = gt_to_fw(hwe->gt);
- fw_ref = xe_force_wake_get(fw, domain);
- if (!xe_force_wake_ref_has_domain(fw_ref, domain)) {
- xe_force_wake_put(fw, fw_ref);
- return false;
- }
+ fw_ref.fw = gt_to_fw(hwe->gt);
+ fw_ref.domains = xe_force_wake_get(fw_ref.fw, domain);
+ if (xe_force_wake_ref_has_domain(fw_ref.domains, domain))
+ *phwe = hwe; /* valid forcewake */
- *phwe = hwe;
- *pfw_ref = fw_ref;
+ return fw_ref;
+}
- return true;
+static void drop_fw_if_valid(struct xe_force_wake_ref fw_ref)
+{
+ /*
+ * If force_wake_get_any_engine() fails, there's no real forcewake
+ * reference to drop, and fw_ref.fw will be NULL.
+ */
+ if (fw_ref.fw)
+ xe_force_wake_put(fw_ref.fw, fw_ref.domains);
}
+DEFINE_CLASS(xe_force_wake_any_engine, struct xe_force_wake_ref,
+ drop_fw_if_valid(_T), force_wake_get_any_engine(xe, phwe),
+ struct xe_device *xe, struct xe_hw_engine **phwe);
+
static void show_run_ticks(struct drm_printer *p, struct drm_file *file)
{
unsigned long class, i, gt_id, capacity[XE_ENGINE_CLASS_MAX] = { };
@@ -322,7 +337,7 @@ static void show_run_ticks(struct drm_printer *p, struct drm_file *file)
struct xe_hw_engine *hwe;
struct xe_exec_queue *q;
u64 gpu_timestamp;
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
/*
* RING_TIMESTAMP registers are inaccessible in VF mode.
@@ -340,7 +355,8 @@ static void show_run_ticks(struct drm_printer *p, struct drm_file *file)
!atomic_read(&xef->exec_queue.pending_removal));
xe_pm_runtime_get(xe);
- if (!force_wake_get_any_engine(xe, &hwe, &fw_ref)) {
+ fw_ref = force_wake_get_any_engine(xe, &hwe);
+ if (!hwe) {
xe_pm_runtime_put(xe);
return;
}
@@ -360,7 +376,7 @@ static void show_run_ticks(struct drm_printer *p, struct drm_file *file)
gpu_timestamp = xe_hw_engine_read_timestamp(hwe);
- xe_force_wake_put(gt_to_fw(hwe->gt), fw_ref);
+ xe_force_wake_put(gt_to_fw(hwe->gt), fw_ref.domains);
xe_pm_runtime_put(xe);
for (class = 0; class < XE_ENGINE_CLASS_MAX; class++) {
--
2.51.1
^ permalink raw reply related [flat|nested] 74+ messages in thread
* Re: [PATCH v2 17/30] drm/xe: Create scoped cleanup class for force_wake_get_any_engine()
2025-11-10 23:20 ` [PATCH v2 17/30] drm/xe: Create scoped cleanup class for force_wake_get_any_engine() Matt Roper
@ 2025-11-13 17:39 ` Gustavo Sousa
0 siblings, 0 replies; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-13 17:39 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper
Quoting Matt Roper (2025-11-10 20:20:35-03:00)
>force_wake_get_any_engine() is a single-use function to pick any engine
>present on the platform and grab its forcewake. The signature
>(returning a boolean success and both the engine pointer and a forcewake
>ref by reference) is a bit awkward. Rewrite it such that the
>forcewake ref is the function's return value and the caller can
>determine success/failure by checking the engine pointer against NULL.
>
>With this new signature, the function can serve as a scoped cleanup
>class constructor, so define the corresponding class. Note that if we
>fail to obtain forcewake (or if the platform somehow has no engines),
>the constructor can fail, returning an invalid fw_ref. In such cases,
>fw_ref.fw will be NULL, making it clear that the reference is invalid;
>this fact can be used to create a thin wrapper around xe_force_wake_put
>that can be used as a destructor for this class.
>
>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
>---
> drivers/gpu/drm/xe/xe_drm_client.c | 52 +++++++++++++++++++-----------
> 1 file changed, 34 insertions(+), 18 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/xe_drm_client.c b/drivers/gpu/drm/xe/xe_drm_client.c
>index f931ff9b1ec0..9deb258ba204 100644
>--- a/drivers/gpu/drm/xe/xe_drm_client.c
>+++ b/drivers/gpu/drm/xe/xe_drm_client.c
>@@ -6,6 +6,7 @@
>
> #include <drm/drm_print.h>
> #include <uapi/drm/xe_drm.h>
>+#include <linux/cleanup.h>
> #include <linux/kernel.h>
> #include <linux/slab.h>
> #include <linux/types.h>
>@@ -285,34 +286,48 @@ static struct xe_hw_engine *any_engine(struct xe_device *xe)
> return NULL;
> }
>
>-static bool force_wake_get_any_engine(struct xe_device *xe,
>- struct xe_hw_engine **phwe,
>- unsigned int *pfw_ref)
>+/*
>+ * Pick any engine and grab its forcewake. On error *phwe will be NULL and
>+ * the returned forcewake reference will be invalid. Callers should check
>+ * *phwe against NULL.
>+ */
>+static struct xe_force_wake_ref force_wake_get_any_engine(struct xe_device *xe,
>+ struct xe_hw_engine **phwe)
> {
> enum xe_force_wake_domains domain;
>- unsigned int fw_ref;
>+ struct xe_force_wake_ref fw_ref = {};
> struct xe_hw_engine *hwe;
>- struct xe_force_wake *fw;
>+
>+ *phwe = NULL;
>
> hwe = any_engine(xe);
> if (!hwe)
>- return false;
>+ return fw_ref; /* will be invalid */
>
> domain = xe_hw_engine_to_fw_domain(hwe);
>- fw = gt_to_fw(hwe->gt);
>
>- fw_ref = xe_force_wake_get(fw, domain);
>- if (!xe_force_wake_ref_has_domain(fw_ref, domain)) {
>- xe_force_wake_put(fw, fw_ref);
>- return false;
>- }
>+ fw_ref.fw = gt_to_fw(hwe->gt);
>+ fw_ref.domains = xe_force_wake_get(fw_ref.fw, domain);
I think we should use xe_force_wake_constructor() here, to future-proof
this against any modification we might decide to make in the way the
force wake CLASS constructors are implemented.
>+ if (xe_force_wake_ref_has_domain(fw_ref.domains, domain))
>+ *phwe = hwe; /* valid forcewake */
>
>- *phwe = hwe;
>- *pfw_ref = fw_ref;
>+ return fw_ref;
>+}
>
>- return true;
>+static void drop_fw_if_valid(struct xe_force_wake_ref fw_ref)
>+{
>+ /*
>+ * If force_wake_get_any_engine() fails, there's no real forcewake
>+ * reference to drop, and fw_ref.fw will be NULL.
>+ */
>+ if (fw_ref.fw)
>+ xe_force_wake_put(fw_ref.fw, fw_ref.domains);
> }
>
>+DEFINE_CLASS(xe_force_wake_any_engine, struct xe_force_wake_ref,
>+ drop_fw_if_valid(_T), force_wake_get_any_engine(xe, phwe),
>+ struct xe_device *xe, struct xe_hw_engine **phwe);
>+
An alternative approach could be for xe_force_wake.h to have:
DEFINE_CLASS(xe_force_wake_put_only, struct xe_force_wake_ref,
	     xe_force_wake_put(_T.fw, _T.domains),
	     fw_ref,
	     struct xe_force_wake_ref fw_ref);
Then, in this file, we would have something like:
static void show_run_ticks(struct drm_printer *p, struct drm_file *file)
{
	...
	...
	...
	...
	CLASS(xe_force_wake_put_only, fw_ref)(force_wake_get_any_engine(xe, &hwe));
	...
	...
	...
}
With that, we wouldn't need to create custom classes for special one-off
cases like this one. What do you think?
PS: I also thought of using a DEFINE_FREE(), but I don't like the fact that
the variable declaration would be explicit in the middle of
show_run_ticks(). Using DEFINE_CLASS() we can hide that.
--
Gustavo Sousa
> static void show_run_ticks(struct drm_printer *p, struct drm_file *file)
> {
> unsigned long class, i, gt_id, capacity[XE_ENGINE_CLASS_MAX] = { };
>@@ -322,7 +337,7 @@ static void show_run_ticks(struct drm_printer *p, struct drm_file *file)
> struct xe_hw_engine *hwe;
> struct xe_exec_queue *q;
> u64 gpu_timestamp;
>- unsigned int fw_ref;
>+ struct xe_force_wake_ref fw_ref;
>
> /*
> * RING_TIMESTAMP registers are inaccessible in VF mode.
>@@ -340,7 +355,8 @@ static void show_run_ticks(struct drm_printer *p, struct drm_file *file)
> !atomic_read(&xef->exec_queue.pending_removal));
>
> xe_pm_runtime_get(xe);
>- if (!force_wake_get_any_engine(xe, &hwe, &fw_ref)) {
>+ fw_ref = force_wake_get_any_engine(xe, &hwe);
>+ if (!hwe) {
> xe_pm_runtime_put(xe);
> return;
> }
>@@ -360,7 +376,7 @@ static void show_run_ticks(struct drm_printer *p, struct drm_file *file)
>
> gpu_timestamp = xe_hw_engine_read_timestamp(hwe);
>
>- xe_force_wake_put(gt_to_fw(hwe->gt), fw_ref);
>+ xe_force_wake_put(gt_to_fw(hwe->gt), fw_ref.domains);
> xe_pm_runtime_put(xe);
>
> for (class = 0; class < XE_ENGINE_CLASS_MAX; class++) {
>--
>2.51.1
>
^ permalink raw reply [flat|nested] 74+ messages in thread
* [PATCH v2 18/30] drm/xe/drm_client: Use scope-based cleanup
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (16 preceding siblings ...)
2025-11-10 23:20 ` [PATCH v2 17/30] drm/xe: Create scoped cleanup class for force_wake_get_any_engine() Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-10 23:20 ` [PATCH v2 19/30] drm/xe/gt_debugfs: " Matt Roper
` (16 subsequent siblings)
34 siblings, 0 replies; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based cleanup for forcewake and runtime PM.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_drm_client.c | 39 +++++++++++++-----------------
1 file changed, 17 insertions(+), 22 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_drm_client.c b/drivers/gpu/drm/xe/xe_drm_client.c
index 9deb258ba204..72bf399915b7 100644
--- a/drivers/gpu/drm/xe/xe_drm_client.c
+++ b/drivers/gpu/drm/xe/xe_drm_client.c
@@ -337,7 +337,6 @@ static void show_run_ticks(struct drm_printer *p, struct drm_file *file)
struct xe_hw_engine *hwe;
struct xe_exec_queue *q;
u64 gpu_timestamp;
- struct xe_force_wake_ref fw_ref;
/*
* RING_TIMESTAMP registers are inaccessible in VF mode.
@@ -354,30 +353,26 @@ static void show_run_ticks(struct drm_printer *p, struct drm_file *file)
wait_var_event(&xef->exec_queue.pending_removal,
!atomic_read(&xef->exec_queue.pending_removal));
- xe_pm_runtime_get(xe);
- fw_ref = force_wake_get_any_engine(xe, &hwe);
- if (!hwe) {
- xe_pm_runtime_put(xe);
- return;
- }
-
- /* Accumulate all the exec queues from this client */
- mutex_lock(&xef->exec_queue.lock);
- xa_for_each(&xef->exec_queue.xa, i, q) {
- xe_exec_queue_get(q);
- mutex_unlock(&xef->exec_queue.lock);
-
- xe_exec_queue_update_run_ticks(q);
+ scoped_guard(xe_pm_runtime, xe) {
+ CLASS(xe_force_wake_any_engine, fw_ref)(xe, &hwe);
+ if (!hwe)
+ return;
+ /* Accumulate all the exec queues from this client */
mutex_lock(&xef->exec_queue.lock);
- xe_exec_queue_put(q);
+ xa_for_each(&xef->exec_queue.xa, i, q) {
+ xe_exec_queue_get(q);
+ mutex_unlock(&xef->exec_queue.lock);
+
+ xe_exec_queue_update_run_ticks(q);
+
+ mutex_lock(&xef->exec_queue.lock);
+ xe_exec_queue_put(q);
+ }
+ mutex_unlock(&xef->exec_queue.lock);
+
+ gpu_timestamp = xe_hw_engine_read_timestamp(hwe);
}
- mutex_unlock(&xef->exec_queue.lock);
-
- gpu_timestamp = xe_hw_engine_read_timestamp(hwe);
-
- xe_force_wake_put(gt_to_fw(hwe->gt), fw_ref.domains);
- xe_pm_runtime_put(xe);
for (class = 0; class < XE_ENGINE_CLASS_MAX; class++) {
const char *class_name;
--
2.51.1
^ permalink raw reply related	[flat|nested] 74+ messages in thread
* [PATCH v2 19/30] drm/xe/gt_debugfs: Use scope-based cleanup
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (17 preceding siblings ...)
2025-11-10 23:20 ` [PATCH v2 18/30] drm/xe/drm_client: Use scope-based cleanup Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-13 17:45 ` Gustavo Sousa
2025-11-10 23:20 ` [PATCH v2 20/30] drm/xe/huc: Use scope-based forcewake Matt Roper
` (15 subsequent siblings)
34 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based cleanup for forcewake and runtime PM to simplify the
debugfs code slightly.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_gt_debugfs.c | 29 ++++++++---------------------
1 file changed, 8 insertions(+), 21 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_gt_debugfs.c b/drivers/gpu/drm/xe/xe_gt_debugfs.c
index e4fd632f43cf..7c3de6539044 100644
--- a/drivers/gpu/drm/xe/xe_gt_debugfs.c
+++ b/drivers/gpu/drm/xe/xe_gt_debugfs.c
@@ -105,35 +105,24 @@ int xe_gt_debugfs_show_with_rpm(struct seq_file *m, void *data)
struct drm_info_node *node = m->private;
struct xe_gt *gt = node_to_gt(node);
struct xe_device *xe = gt_to_xe(gt);
- int ret;
- xe_pm_runtime_get(xe);
- ret = xe_gt_debugfs_simple_show(m, data);
- xe_pm_runtime_put(xe);
-
- return ret;
+ guard(xe_pm_runtime)(xe);
+ return xe_gt_debugfs_simple_show(m, data);
}
static int hw_engines(struct xe_gt *gt, struct drm_printer *p)
{
struct xe_hw_engine *hwe;
enum xe_hw_engine_id id;
- unsigned int fw_ref;
- int ret = 0;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
- ret = -ETIMEDOUT;
- goto fw_put;
- }
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL))
+ return -ETIMEDOUT;
for_each_hw_engine(hwe, gt, id)
xe_hw_engine_print(hwe, p);
-fw_put:
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
-
- return ret;
+ return 0;
}
static int steering(struct xe_gt *gt, struct drm_printer *p)
@@ -269,9 +258,8 @@ static void force_reset(struct xe_gt *gt)
{
struct xe_device *xe = gt_to_xe(gt);
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
xe_gt_reset_async(gt);
- xe_pm_runtime_put(xe);
}
static ssize_t force_reset_write(struct file *file,
@@ -297,9 +285,8 @@ static void force_reset_sync(struct xe_gt *gt)
{
struct xe_device *xe = gt_to_xe(gt);
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
xe_gt_reset(gt);
- xe_pm_runtime_put(xe);
}
static ssize_t force_reset_sync_write(struct file *file,
--
2.51.1
^ permalink raw reply related	[flat|nested] 74+ messages in thread
* Re: [PATCH v2 19/30] drm/xe/gt_debugfs: Use scope-based cleanup
2025-11-10 23:20 ` [PATCH v2 19/30] drm/xe/gt_debugfs: " Matt Roper
@ 2025-11-13 17:45 ` Gustavo Sousa
0 siblings, 0 replies; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-13 17:45 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper
Quoting Matt Roper (2025-11-10 20:20:37-03:00)
>Use scope-based cleanup for forcewake and runtime PM to simplify the
>debugfs code slightly.
>
>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
>---
> drivers/gpu/drm/xe/xe_gt_debugfs.c | 29 ++++++++---------------------
> 1 file changed, 8 insertions(+), 21 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/xe_gt_debugfs.c b/drivers/gpu/drm/xe/xe_gt_debugfs.c
>index e4fd632f43cf..7c3de6539044 100644
>--- a/drivers/gpu/drm/xe/xe_gt_debugfs.c
>+++ b/drivers/gpu/drm/xe/xe_gt_debugfs.c
>@@ -105,35 +105,24 @@ int xe_gt_debugfs_show_with_rpm(struct seq_file *m, void *data)
> struct drm_info_node *node = m->private;
> struct xe_gt *gt = node_to_gt(node);
> struct xe_device *xe = gt_to_xe(gt);
>- int ret;
>
>- xe_pm_runtime_get(xe);
>- ret = xe_gt_debugfs_simple_show(m, data);
>- xe_pm_runtime_put(xe);
>-
>- return ret;
>+ guard(xe_pm_runtime)(xe);
>+ return xe_gt_debugfs_simple_show(m, data);
> }
>
> static int hw_engines(struct xe_gt *gt, struct drm_printer *p)
> {
> struct xe_hw_engine *hwe;
> enum xe_hw_engine_id id;
>- unsigned int fw_ref;
>- int ret = 0;
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
>- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
>- ret = -ETIMEDOUT;
>- goto fw_put;
>- }
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
>+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL))
>+ return -ETIMEDOUT;
>
> for_each_hw_engine(hwe, gt, id)
> xe_hw_engine_print(hwe, p);
>
>-fw_put:
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
>-
>- return ret;
>+ return 0;
> }
>
> static int steering(struct xe_gt *gt, struct drm_printer *p)
>@@ -269,9 +258,8 @@ static void force_reset(struct xe_gt *gt)
> {
> struct xe_device *xe = gt_to_xe(gt);
>
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
> xe_gt_reset_async(gt);
>- xe_pm_runtime_put(xe);
> }
>
> static ssize_t force_reset_write(struct file *file,
>@@ -297,9 +285,8 @@ static void force_reset_sync(struct xe_gt *gt)
> {
> struct xe_device *xe = gt_to_xe(gt);
>
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
> xe_gt_reset(gt);
>- xe_pm_runtime_put(xe);
> }
>
> static ssize_t force_reset_sync_write(struct file *file,
>--
>2.51.1
>
^ permalink raw reply [flat|nested] 74+ messages in thread
* [PATCH v2 20/30] drm/xe/huc: Use scope-based forcewake
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (18 preceding siblings ...)
2025-11-10 23:20 ` [PATCH v2 19/30] drm/xe/gt_debugfs: " Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-13 17:46 ` Gustavo Sousa
2025-11-10 23:20 ` [PATCH v2 21/30] drm/xe/query: " Matt Roper
` (14 subsequent siblings)
34 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based forcewake in the HuC code for a small simplification and
consistency with other parts of the driver.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_huc.c | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_huc.c b/drivers/gpu/drm/xe/xe_huc.c
index 0a70c8924582..4212162913af 100644
--- a/drivers/gpu/drm/xe/xe_huc.c
+++ b/drivers/gpu/drm/xe/xe_huc.c
@@ -300,19 +300,16 @@ void xe_huc_sanitize(struct xe_huc *huc)
void xe_huc_print_info(struct xe_huc *huc, struct drm_printer *p)
{
struct xe_gt *gt = huc_to_gt(huc);
- unsigned int fw_ref;
xe_uc_fw_print(&huc->fw, p);
if (!xe_uc_fw_is_enabled(&huc->fw))
return;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
+ if (!fw_ref.domains)
return;
drm_printf(p, "\nHuC status: 0x%08x\n",
xe_mmio_read32(&gt->mmio, HUC_KERNEL_LOAD_INFO));
-
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
}
--
2.51.1
^ permalink raw reply related	[flat|nested] 74+ messages in thread
* Re: [PATCH v2 20/30] drm/xe/huc: Use scope-based forcewake
2025-11-10 23:20 ` [PATCH v2 20/30] drm/xe/huc: Use scope-based forcewake Matt Roper
@ 2025-11-13 17:46 ` Gustavo Sousa
0 siblings, 0 replies; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-13 17:46 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper
Quoting Matt Roper (2025-11-10 20:20:38-03:00)
>Use scope-based forcewake in the HuC code for a small simplification and
>consistency with other parts of the driver.
>
>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
>---
> drivers/gpu/drm/xe/xe_huc.c | 7 ++-----
> 1 file changed, 2 insertions(+), 5 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/xe_huc.c b/drivers/gpu/drm/xe/xe_huc.c
>index 0a70c8924582..4212162913af 100644
>--- a/drivers/gpu/drm/xe/xe_huc.c
>+++ b/drivers/gpu/drm/xe/xe_huc.c
>@@ -300,19 +300,16 @@ void xe_huc_sanitize(struct xe_huc *huc)
> void xe_huc_print_info(struct xe_huc *huc, struct drm_printer *p)
> {
> struct xe_gt *gt = huc_to_gt(huc);
>- unsigned int fw_ref;
>
> xe_uc_fw_print(&huc->fw, p);
>
> if (!xe_uc_fw_is_enabled(&huc->fw))
> return;
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>- if (!fw_ref)
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
>+ if (!fw_ref.domains)
> return;
>
> drm_printf(p, "\nHuC status: 0x%08x\n",
> xe_mmio_read32(&gt->mmio, HUC_KERNEL_LOAD_INFO));
>-
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> }
>--
>2.51.1
>
^ permalink raw reply [flat|nested] 74+ messages in thread
* [PATCH v2 21/30] drm/xe/query: Use scope-based forcewake
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (19 preceding siblings ...)
2025-11-10 23:20 ` [PATCH v2 20/30] drm/xe/huc: Use scope-based forcewake Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-13 17:50 ` Gustavo Sousa
2025-11-10 23:20 ` [PATCH v2 22/30] drm/xe/reg_sr: " Matt Roper
` (13 subsequent siblings)
34 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based forcewake handling for consistency with other parts of
the driver.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_query.c | 16 ++++++----------
1 file changed, 6 insertions(+), 10 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_query.c b/drivers/gpu/drm/xe/xe_query.c
index 1c0915e2cc16..a7bf1fd6dd6a 100644
--- a/drivers/gpu/drm/xe/xe_query.c
+++ b/drivers/gpu/drm/xe/xe_query.c
@@ -122,7 +122,6 @@ query_engine_cycles(struct xe_device *xe,
__ktime_func_t cpu_clock;
struct xe_hw_engine *hwe;
struct xe_gt *gt;
- unsigned int fw_ref;
if (IS_SRIOV_VF(xe))
return -EOPNOTSUPP;
@@ -158,17 +157,14 @@ query_engine_cycles(struct xe_device *xe,
if (!hwe)
return -EINVAL;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
- return -EIO;
+ xe_with_force_wake(fw_ref, gt_to_fw(gt), XE_FORCEWAKE_ALL) {
+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL))
+ return -EIO;
+
+ hwe_read_timestamp(hwe, &resp.engine_cycles, &resp.cpu_timestamp,
+ &resp.cpu_delta, cpu_clock);
}
- hwe_read_timestamp(hwe, &resp.engine_cycles, &resp.cpu_timestamp,
- &resp.cpu_delta, cpu_clock);
-
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
-
if (GRAPHICS_VER(xe) >= 20)
resp.width = 64;
else
--
2.51.1
^ permalink raw reply related	[flat|nested] 74+ messages in thread
* Re: [PATCH v2 21/30] drm/xe/query: Use scope-based forcewake
2025-11-10 23:20 ` [PATCH v2 21/30] drm/xe/query: " Matt Roper
@ 2025-11-13 17:50 ` Gustavo Sousa
0 siblings, 0 replies; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-13 17:50 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper
Quoting Matt Roper (2025-11-10 20:20:39-03:00)
>Use scope-based forcewake handling for consistency with other parts of
>the driver.
>
>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
>---
> drivers/gpu/drm/xe/xe_query.c | 16 ++++++----------
> 1 file changed, 6 insertions(+), 10 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/xe_query.c b/drivers/gpu/drm/xe/xe_query.c
>index 1c0915e2cc16..a7bf1fd6dd6a 100644
>--- a/drivers/gpu/drm/xe/xe_query.c
>+++ b/drivers/gpu/drm/xe/xe_query.c
>@@ -122,7 +122,6 @@ query_engine_cycles(struct xe_device *xe,
> __ktime_func_t cpu_clock;
> struct xe_hw_engine *hwe;
> struct xe_gt *gt;
>- unsigned int fw_ref;
>
> if (IS_SRIOV_VF(xe))
> return -EOPNOTSUPP;
>@@ -158,17 +157,14 @@ query_engine_cycles(struct xe_device *xe,
> if (!hwe)
> return -EINVAL;
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
>- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
>- return -EIO;
>+ xe_with_force_wake(fw_ref, gt_to_fw(gt), XE_FORCEWAKE_ALL) {
>+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL))
>+ return -EIO;
>+
>+ hwe_read_timestamp(hwe, &resp.engine_cycles, &resp.cpu_timestamp,
>+ &resp.cpu_delta, cpu_clock);
> }
>
>- hwe_read_timestamp(hwe, &resp.engine_cycles, &resp.cpu_timestamp,
>- &resp.cpu_delta, cpu_clock);
>-
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
>-
> if (GRAPHICS_VER(xe) >= 20)
> resp.width = 64;
> else
>--
>2.51.1
>
^ permalink raw reply [flat|nested] 74+ messages in thread
* [PATCH v2 22/30] drm/xe/reg_sr: Use scope-based forcewake
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (20 preceding siblings ...)
2025-11-10 23:20 ` [PATCH v2 21/30] drm/xe/query: " Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-13 17:51 ` Gustavo Sousa
2025-11-10 23:20 ` [PATCH v2 23/30] drm/xe/vram: " Matt Roper
` (12 subsequent siblings)
34 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based forcewake to slightly simplify the reg_sr code.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_reg_sr.c | 17 +++++------------
1 file changed, 5 insertions(+), 12 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_reg_sr.c b/drivers/gpu/drm/xe/xe_reg_sr.c
index fc8447a838c4..1a465385f909 100644
--- a/drivers/gpu/drm/xe/xe_reg_sr.c
+++ b/drivers/gpu/drm/xe/xe_reg_sr.c
@@ -168,7 +168,6 @@ void xe_reg_sr_apply_mmio(struct xe_reg_sr *sr, struct xe_gt *gt)
{
struct xe_reg_sr_entry *entry;
unsigned long reg;
- unsigned int fw_ref;
if (xa_empty(&sr->xa))
return;
@@ -178,20 +177,14 @@ void xe_reg_sr_apply_mmio(struct xe_reg_sr *sr, struct xe_gt *gt)
xe_gt_dbg(gt, "Applying %s save-restore MMIOs\n", sr->name);
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
- goto err_force_wake;
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL)) {
+ xe_gt_err(gt, "Failed to apply, err=-ETIMEDOUT\n");
+ return;
+ }
xa_for_each(&sr->xa, reg, entry)
apply_one_mmio(gt, entry);
-
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
-
- return;
-
-err_force_wake:
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
- xe_gt_err(gt, "Failed to apply, err=-ETIMEDOUT\n");
}
/**
--
2.51.1
^ permalink raw reply related	[flat|nested] 74+ messages in thread
* Re: [PATCH v2 22/30] drm/xe/reg_sr: Use scope-based forcewake
2025-11-10 23:20 ` [PATCH v2 22/30] drm/xe/reg_sr: " Matt Roper
@ 2025-11-13 17:51 ` Gustavo Sousa
0 siblings, 0 replies; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-13 17:51 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper
Quoting Matt Roper (2025-11-10 20:20:40-03:00)
>Use scope-based forcewake to slightly simplify the reg_sr code.
>
>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
>---
> drivers/gpu/drm/xe/xe_reg_sr.c | 17 +++++------------
> 1 file changed, 5 insertions(+), 12 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/xe_reg_sr.c b/drivers/gpu/drm/xe/xe_reg_sr.c
>index fc8447a838c4..1a465385f909 100644
>--- a/drivers/gpu/drm/xe/xe_reg_sr.c
>+++ b/drivers/gpu/drm/xe/xe_reg_sr.c
>@@ -168,7 +168,6 @@ void xe_reg_sr_apply_mmio(struct xe_reg_sr *sr, struct xe_gt *gt)
> {
> struct xe_reg_sr_entry *entry;
> unsigned long reg;
>- unsigned int fw_ref;
>
> if (xa_empty(&sr->xa))
> return;
>@@ -178,20 +177,14 @@ void xe_reg_sr_apply_mmio(struct xe_reg_sr *sr, struct xe_gt *gt)
>
> xe_gt_dbg(gt, "Applying %s save-restore MMIOs\n", sr->name);
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
>- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
>- goto err_force_wake;
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
>+ if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL)) {
>+ xe_gt_err(gt, "Failed to apply, err=-ETIMEDOUT\n");
>+ return;
>+ }
>
> xa_for_each(&sr->xa, reg, entry)
> apply_one_mmio(gt, entry);
>-
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
>-
>- return;
>-
>-err_force_wake:
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
>- xe_gt_err(gt, "Failed to apply, err=-ETIMEDOUT\n");
> }
>
> /**
>--
>2.51.1
>
^ permalink raw reply [flat|nested] 74+ messages in thread
* [PATCH v2 23/30] drm/xe/vram: Use scope-based forcewake
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (21 preceding siblings ...)
2025-11-10 23:20 ` [PATCH v2 22/30] drm/xe/reg_sr: " Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-10 23:57 ` [PATCH v2.1 " Matt Roper
2025-11-10 23:20 ` [PATCH v2 24/30] drm/xe/bo: Use scope-based runtime PM Matt Roper
` (11 subsequent siblings)
34 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Switch VRAM code to use scope-based forcewake for consistency with other
parts of the driver.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_vram.c | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_vram.c b/drivers/gpu/drm/xe/xe_vram.c
index b62a96f8ef9e..11d18741313b 100644
--- a/drivers/gpu/drm/xe/xe_vram.c
+++ b/drivers/gpu/drm/xe/xe_vram.c
@@ -245,7 +245,6 @@ static int tile_vram_size(struct xe_tile *tile, u64 *vram_size,
{
struct xe_device *xe = tile_to_xe(tile);
struct xe_gt *gt = tile->primary_gt;
- unsigned int fw_ref;
u64 offset;
u32 reg;
@@ -265,8 +264,8 @@ static int tile_vram_size(struct xe_tile *tile, u64 *vram_size,
return 0;
}
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
+ if (!fw_ref.domains)
return -ETIMEDOUT;
/* actual size */
@@ -289,8 +288,6 @@ static int tile_vram_size(struct xe_tile *tile, u64 *vram_size,
/* remove the tile offset so we have just the available size */
*vram_size = offset - *tile_offset;
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
-
return 0;
}
--
2.51.1
^ permalink raw reply related	[flat|nested] 74+ messages in thread
* [PATCH v2.1 23/30] drm/xe/vram: Use scope-based forcewake
2025-11-10 23:20 ` [PATCH v2 23/30] drm/xe/vram: " Matt Roper
@ 2025-11-10 23:57 ` Matt Roper
2025-11-13 17:52 ` Gustavo Sousa
0 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:57 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Switch VRAM code to use scope-based forcewake for consistency with other
parts of the driver.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
v2.1:
- Rebased again to resolve conflict with latest drm-tip
drivers/gpu/drm/xe/xe_vram.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_vram.c b/drivers/gpu/drm/xe/xe_vram.c
index 0e10da790cc5..0a645e76e5fa 100644
--- a/drivers/gpu/drm/xe/xe_vram.c
+++ b/drivers/gpu/drm/xe/xe_vram.c
@@ -186,12 +186,11 @@ static int determine_lmem_bar_size(struct xe_device *xe, struct xe_vram_region *
static int get_flat_ccs_offset(struct xe_gt *gt, u64 tile_size, u64 *poffset)
{
struct xe_device *xe = gt_to_xe(gt);
- unsigned int fw_ref;
u64 offset;
u32 reg;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
+ if (!fw_ref.domains)
return -ETIMEDOUT;
if (GRAPHICS_VER(xe) >= 20) {
@@ -223,7 +222,6 @@ static int get_flat_ccs_offset(struct xe_gt *gt, u64 tile_size, u64 *poffset)
offset = (u64)REG_FIELD_GET(XEHP_FLAT_CCS_PTR, reg) * SZ_64K;
}
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
*poffset = offset;
return 0;
--
2.51.1
^ permalink raw reply related	[flat|nested] 74+ messages in thread
* Re: [PATCH v2.1 23/30] drm/xe/vram: Use scope-based forcewake
2025-11-10 23:57 ` [PATCH v2.1 " Matt Roper
@ 2025-11-13 17:52 ` Gustavo Sousa
0 siblings, 0 replies; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-13 17:52 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper
Quoting Matt Roper (2025-11-10 20:57:16-03:00)
>Switch VRAM code to use scope-based forcewake for consistency with other
>parts of the driver.
>
>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
>---
>v2.1:
> - Rebased again to resolve conflict with latest drm-tip
>
> drivers/gpu/drm/xe/xe_vram.c | 6 ++----
> 1 file changed, 2 insertions(+), 4 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/xe_vram.c b/drivers/gpu/drm/xe/xe_vram.c
>index 0e10da790cc5..0a645e76e5fa 100644
>--- a/drivers/gpu/drm/xe/xe_vram.c
>+++ b/drivers/gpu/drm/xe/xe_vram.c
>@@ -186,12 +186,11 @@ static int determine_lmem_bar_size(struct xe_device *xe, struct xe_vram_region *
> static int get_flat_ccs_offset(struct xe_gt *gt, u64 tile_size, u64 *poffset)
> {
> struct xe_device *xe = gt_to_xe(gt);
>- unsigned int fw_ref;
> u64 offset;
> u32 reg;
>
>- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>- if (!fw_ref)
>+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
>+ if (!fw_ref.domains)
> return -ETIMEDOUT;
>
> if (GRAPHICS_VER(xe) >= 20) {
>@@ -223,7 +222,6 @@ static int get_flat_ccs_offset(struct xe_gt *gt, u64 tile_size, u64 *poffset)
> offset = (u64)REG_FIELD_GET(XEHP_FLAT_CCS_PTR, reg) * SZ_64K;
> }
>
>- xe_force_wake_put(gt_to_fw(gt), fw_ref);
> *poffset = offset;
>
> return 0;
>--
>2.51.1
>
^ permalink raw reply [flat|nested] 74+ messages in thread
* [PATCH v2 24/30] drm/xe/bo: Use scope-based runtime PM
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (22 preceding siblings ...)
2025-11-10 23:20 ` [PATCH v2 23/30] drm/xe/vram: " Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-13 17:54 ` Gustavo Sousa
2025-11-10 23:20 ` [PATCH v2 25/30] drm/xe/ggtt: Use scope-based runtime pm Matt Roper
` (10 subsequent siblings)
34 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based runtime power management in the BO code for consistency
with other parts of the driver.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_bo.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index b0bd31d14bb9..03d81664706a 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -2035,9 +2035,8 @@ static int xe_bo_vm_access(struct vm_area_struct *vma, unsigned long addr,
struct xe_device *xe = xe_bo_device(bo);
int ret;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ret = ttm_bo_vm_access(vma, addr, buf, len, write);
- xe_pm_runtime_put(xe);
return ret;
}
--
2.51.1
^ permalink raw reply related	[flat|nested] 74+ messages in thread
* Re: [PATCH v2 24/30] drm/xe/bo: Use scope-based runtime PM
2025-11-10 23:20 ` [PATCH v2 24/30] drm/xe/bo: Use scope-based runtime PM Matt Roper
@ 2025-11-13 17:54 ` Gustavo Sousa
0 siblings, 0 replies; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-13 17:54 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper
Quoting Matt Roper (2025-11-10 20:20:42-03:00)
>Use scope-based runtime power management in the BO code for consistency
>with other parts of the driver.
>
>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
>---
> drivers/gpu/drm/xe/xe_bo.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
>index b0bd31d14bb9..03d81664706a 100644
>--- a/drivers/gpu/drm/xe/xe_bo.c
>+++ b/drivers/gpu/drm/xe/xe_bo.c
>@@ -2035,9 +2035,8 @@ static int xe_bo_vm_access(struct vm_area_struct *vma, unsigned long addr,
> struct xe_device *xe = xe_bo_device(bo);
> int ret;
>
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
> ret = ttm_bo_vm_access(vma, addr, buf, len, write);
>- xe_pm_runtime_put(xe);
We can drop the ret variable and return ttm_bo_vm_access(vma, addr, buf,
len, write) directly. With that addressed,
Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
>
> return ret;
> }
>--
>2.51.1
>
^ permalink raw reply [flat|nested] 74+ messages in thread
* [PATCH v2 25/30] drm/xe/ggtt: Use scope-based runtime pm
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (23 preceding siblings ...)
2025-11-10 23:20 ` [PATCH v2 24/30] drm/xe/bo: Use scope-based runtime PM Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-13 17:55 ` Gustavo Sousa
2025-11-10 23:20 ` [PATCH v2 26/30] drm/xe/hwmon: Use scope-based runtime PM Matt Roper
` (9 subsequent siblings)
34 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Switch the GGTT code to scope-based runtime PM for consistency with
other parts of the driver.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_ggtt.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_ggtt.c b/drivers/gpu/drm/xe/xe_ggtt.c
index 20d226d90c50..5e1cd18ec611 100644
--- a/drivers/gpu/drm/xe/xe_ggtt.c
+++ b/drivers/gpu/drm/xe/xe_ggtt.c
@@ -385,9 +385,8 @@ static void ggtt_node_remove_work_func(struct work_struct *work)
delayed_removal_work);
struct xe_device *xe = tile_to_xe(node->ggtt->tile);
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ggtt_node_remove(node);
- xe_pm_runtime_put(xe);
}
/**
--
2.51.1
^ permalink raw reply related [flat|nested] 74+ messages in thread
* [PATCH v2 25/30] drm/xe/ggtt: Use scope-based runtime pm
2025-11-10 23:20 ` [PATCH v2 25/30] drm/xe/ggtt: Use scope-based runtime pm Matt Roper
@ 2025-11-13 17:55 ` Gustavo Sousa
0 siblings, 0 replies; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-13 17:55 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper
Quoting Matt Roper (2025-11-10 20:20:43-03:00)
>Switch the GGTT code to scope-based runtime PM for consistency with
>other parts of the driver.
>
>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
>---
> drivers/gpu/drm/xe/xe_ggtt.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/xe_ggtt.c b/drivers/gpu/drm/xe/xe_ggtt.c
>index 20d226d90c50..5e1cd18ec611 100644
>--- a/drivers/gpu/drm/xe/xe_ggtt.c
>+++ b/drivers/gpu/drm/xe/xe_ggtt.c
>@@ -385,9 +385,8 @@ static void ggtt_node_remove_work_func(struct work_struct *work)
> delayed_removal_work);
> struct xe_device *xe = tile_to_xe(node->ggtt->tile);
>
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
> ggtt_node_remove(node);
>- xe_pm_runtime_put(xe);
> }
>
> /**
>--
>2.51.1
>
^ permalink raw reply [flat|nested] 74+ messages in thread
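[Editorial note: for readers unfamiliar with `guard(xe_pm_runtime)(xe)`, a loose userspace model of how such a guard class can be defined is sketched below. All names here (`DEFINE_GUARD`, `fake_pm`, `pm_get`, `pm_put`) are illustrative; the kernel's actual implementation in include/linux/cleanup.h differs in detail:]

```c
#include <assert.h>

static int pm_refcount;
static void pm_get(int *dev) { (void)dev; pm_refcount++; }
static void pm_put(int *dev) { (void)dev; pm_refcount--; }

/* Loose model of DEFINE_GUARD()/guard(): the constructor acquires the
 * resource and stashes the argument; a cleanup attribute releases it
 * when the guard variable goes out of scope. */
#define DEFINE_GUARD(name, type, acquire, release)                       \
	typedef type guard_##name##_t;                                   \
	static guard_##name##_t guard_##name##_ctor(type p)              \
	{ acquire(p); return p; }                                        \
	static void guard_##name##_dtor(guard_##name##_t *p)             \
	{ release(*p); }

#define guard(name)                                                      \
	guard_##name##_t __attribute__((cleanup(guard_##name##_dtor)))   \
	__guard = guard_##name##_ctor

DEFINE_GUARD(fake_pm, int *, pm_get, pm_put)

/* Mirrors the shape of ggtt_node_remove_work_func(): a void function
 * whose whole body runs with the reference held. */
static void worker(int *dev)
{
	guard(fake_pm)(dev);
	assert(pm_refcount == 1);  /* body executes with domain awake */
}
```

This shows why the converted functions need no explicit put on any exit path, including early returns.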
* [PATCH v2 26/30] drm/xe/hwmon: Use scope-based runtime PM
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (24 preceding siblings ...)
2025-11-10 23:20 ` [PATCH v2 25/30] drm/xe/ggtt: Use scope-based runtime pm Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-13 18:01 ` Gustavo Sousa
2025-11-10 23:20 ` [PATCH v2 27/30] drm/xe/sriov: " Matt Roper
` (8 subsequent siblings)
34 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based runtime power management in the hwmon code for
consistency with other parts of the driver.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_hwmon.c | 16 ++++------------
1 file changed, 4 insertions(+), 12 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_hwmon.c b/drivers/gpu/drm/xe/xe_hwmon.c
index 97879daeefc1..5ad351fad6e2 100644
--- a/drivers/gpu/drm/xe/xe_hwmon.c
+++ b/drivers/gpu/drm/xe/xe_hwmon.c
@@ -502,7 +502,7 @@ xe_hwmon_power_max_interval_show(struct device *dev, struct device_attribute *at
int ret = 0;
- xe_pm_runtime_get(hwmon->xe);
+ guard(xe_pm_runtime)(hwmon->xe);
mutex_lock(&hwmon->hwmon_lock);
@@ -521,8 +521,6 @@ xe_hwmon_power_max_interval_show(struct device *dev, struct device_attribute *at
mutex_unlock(&hwmon->hwmon_lock);
- xe_pm_runtime_put(hwmon->xe);
-
x = REG_FIELD_GET(PWR_LIM_TIME_X, reg_val);
y = REG_FIELD_GET(PWR_LIM_TIME_Y, reg_val);
@@ -604,7 +602,7 @@ xe_hwmon_power_max_interval_store(struct device *dev, struct device_attribute *a
rxy = REG_FIELD_PREP(PWR_LIM_TIME_X, x) |
REG_FIELD_PREP(PWR_LIM_TIME_Y, y);
- xe_pm_runtime_get(hwmon->xe);
+ guard(xe_pm_runtime)(hwmon->xe);
mutex_lock(&hwmon->hwmon_lock);
@@ -616,8 +614,6 @@ xe_hwmon_power_max_interval_store(struct device *dev, struct device_attribute *a
mutex_unlock(&hwmon->hwmon_lock);
- xe_pm_runtime_put(hwmon->xe);
-
return count;
}
@@ -1126,7 +1122,7 @@ xe_hwmon_read(struct device *dev, enum hwmon_sensor_types type, u32 attr,
struct xe_hwmon *hwmon = dev_get_drvdata(dev);
int ret;
- xe_pm_runtime_get(hwmon->xe);
+ guard(xe_pm_runtime)(hwmon->xe);
switch (type) {
case hwmon_temp:
@@ -1152,8 +1148,6 @@ xe_hwmon_read(struct device *dev, enum hwmon_sensor_types type, u32 attr,
break;
}
- xe_pm_runtime_put(hwmon->xe);
-
return ret;
}
@@ -1164,7 +1158,7 @@ xe_hwmon_write(struct device *dev, enum hwmon_sensor_types type, u32 attr,
struct xe_hwmon *hwmon = dev_get_drvdata(dev);
int ret;
- xe_pm_runtime_get(hwmon->xe);
+ guard(xe_pm_runtime)(hwmon->xe);
switch (type) {
case hwmon_power:
@@ -1178,8 +1172,6 @@ xe_hwmon_write(struct device *dev, enum hwmon_sensor_types type, u32 attr,
break;
}
- xe_pm_runtime_put(hwmon->xe);
-
return ret;
}
--
2.51.1
^ permalink raw reply related [flat|nested] 74+ messages in thread
* Re: [PATCH v2 26/30] drm/xe/hwmon: Use scope-based runtime PM
2025-11-10 23:20 ` [PATCH v2 26/30] drm/xe/hwmon: Use scope-based runtime PM Matt Roper
@ 2025-11-13 18:01 ` Gustavo Sousa
2025-11-13 18:05 ` Gustavo Sousa
0 siblings, 1 reply; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-13 18:01 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper
Quoting Matt Roper (2025-11-10 20:20:44-03:00)
>Use scope-based runtime power management in the hwmon code for
>consistency with other parts of the driver.
>
>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
>---
> drivers/gpu/drm/xe/xe_hwmon.c | 16 ++++------------
> 1 file changed, 4 insertions(+), 12 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/xe_hwmon.c b/drivers/gpu/drm/xe/xe_hwmon.c
>index 97879daeefc1..5ad351fad6e2 100644
>--- a/drivers/gpu/drm/xe/xe_hwmon.c
>+++ b/drivers/gpu/drm/xe/xe_hwmon.c
>@@ -502,7 +502,7 @@ xe_hwmon_power_max_interval_show(struct device *dev, struct device_attribute *at
>
> int ret = 0;
>
>- xe_pm_runtime_get(hwmon->xe);
>+ guard(xe_pm_runtime)(hwmon->xe);
>
> mutex_lock(&hwmon->hwmon_lock);
>
>@@ -521,8 +521,6 @@ xe_hwmon_power_max_interval_show(struct device *dev, struct device_attribute *at
>
> mutex_unlock(&hwmon->hwmon_lock);
>
>- xe_pm_runtime_put(hwmon->xe);
>-
> x = REG_FIELD_GET(PWR_LIM_TIME_X, reg_val);
> y = REG_FIELD_GET(PWR_LIM_TIME_Y, reg_val);
>
>@@ -604,7 +602,7 @@ xe_hwmon_power_max_interval_store(struct device *dev, struct device_attribute *a
> rxy = REG_FIELD_PREP(PWR_LIM_TIME_X, x) |
> REG_FIELD_PREP(PWR_LIM_TIME_Y, y);
>
>- xe_pm_runtime_get(hwmon->xe);
>+ guard(xe_pm_runtime)(hwmon->xe);
>
> mutex_lock(&hwmon->hwmon_lock);
>
>@@ -616,8 +614,6 @@ xe_hwmon_power_max_interval_store(struct device *dev, struct device_attribute *a
>
> mutex_unlock(&hwmon->hwmon_lock);
>
>- xe_pm_runtime_put(hwmon->xe);
>-
> return count;
> }
>
>@@ -1126,7 +1122,7 @@ xe_hwmon_read(struct device *dev, enum hwmon_sensor_types type, u32 attr,
> struct xe_hwmon *hwmon = dev_get_drvdata(dev);
> int ret;
>
>- xe_pm_runtime_get(hwmon->xe);
>+ guard(xe_pm_runtime)(hwmon->xe);
>
> switch (type) {
> case hwmon_temp:
>@@ -1152,8 +1148,6 @@ xe_hwmon_read(struct device *dev, enum hwmon_sensor_types type, u32 attr,
> break;
> }
>
>- xe_pm_runtime_put(hwmon->xe);
>-
> return ret;
> }
>
>@@ -1164,7 +1158,7 @@ xe_hwmon_write(struct device *dev, enum hwmon_sensor_types type, u32 attr,
> struct xe_hwmon *hwmon = dev_get_drvdata(dev);
> int ret;
>
>- xe_pm_runtime_get(hwmon->xe);
>+ guard(xe_pm_runtime)(hwmon->xe);
>
> switch (type) {
> case hwmon_power:
>@@ -1178,8 +1172,6 @@ xe_hwmon_write(struct device *dev, enum hwmon_sensor_types type, u32 attr,
> break;
> }
>
>- xe_pm_runtime_put(hwmon->xe);
>-
> return ret;
> }
>
>--
>2.51.1
>
^ permalink raw reply [flat|nested] 74+ messages in thread
* Re: [PATCH v2 26/30] drm/xe/hwmon: Use scope-based runtime PM
2025-11-13 18:01 ` Gustavo Sousa
@ 2025-11-13 18:05 ` Gustavo Sousa
0 siblings, 0 replies; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-13 18:05 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper
Quoting Gustavo Sousa (2025-11-13 15:01:18-03:00)
>Quoting Matt Roper (2025-11-10 20:20:44-03:00)
>>Use scope-based runtime power management in the hwmon code for
>>consistency with other parts of the driver.
>>
>>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
>
>Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
Ah, I just realized that functions xe_hwmon_read() and xe_hwmon_write()
can be simplified by dropping the variable ret and returning directly.
--
Gustavo Sousa
>
>>---
>> drivers/gpu/drm/xe/xe_hwmon.c | 16 ++++------------
>> 1 file changed, 4 insertions(+), 12 deletions(-)
>>
>>diff --git a/drivers/gpu/drm/xe/xe_hwmon.c b/drivers/gpu/drm/xe/xe_hwmon.c
>>index 97879daeefc1..5ad351fad6e2 100644
>>--- a/drivers/gpu/drm/xe/xe_hwmon.c
>>+++ b/drivers/gpu/drm/xe/xe_hwmon.c
>>@@ -502,7 +502,7 @@ xe_hwmon_power_max_interval_show(struct device *dev, struct device_attribute *at
>>
>> int ret = 0;
>>
>>- xe_pm_runtime_get(hwmon->xe);
>>+ guard(xe_pm_runtime)(hwmon->xe);
>>
>> mutex_lock(&hwmon->hwmon_lock);
>>
>>@@ -521,8 +521,6 @@ xe_hwmon_power_max_interval_show(struct device *dev, struct device_attribute *at
>>
>> mutex_unlock(&hwmon->hwmon_lock);
>>
>>- xe_pm_runtime_put(hwmon->xe);
>>-
>> x = REG_FIELD_GET(PWR_LIM_TIME_X, reg_val);
>> y = REG_FIELD_GET(PWR_LIM_TIME_Y, reg_val);
>>
>>@@ -604,7 +602,7 @@ xe_hwmon_power_max_interval_store(struct device *dev, struct device_attribute *a
>> rxy = REG_FIELD_PREP(PWR_LIM_TIME_X, x) |
>> REG_FIELD_PREP(PWR_LIM_TIME_Y, y);
>>
>>- xe_pm_runtime_get(hwmon->xe);
>>+ guard(xe_pm_runtime)(hwmon->xe);
>>
>> mutex_lock(&hwmon->hwmon_lock);
>>
>>@@ -616,8 +614,6 @@ xe_hwmon_power_max_interval_store(struct device *dev, struct device_attribute *a
>>
>> mutex_unlock(&hwmon->hwmon_lock);
>>
>>- xe_pm_runtime_put(hwmon->xe);
>>-
>> return count;
>> }
>>
>>@@ -1126,7 +1122,7 @@ xe_hwmon_read(struct device *dev, enum hwmon_sensor_types type, u32 attr,
>> struct xe_hwmon *hwmon = dev_get_drvdata(dev);
>> int ret;
>>
>>- xe_pm_runtime_get(hwmon->xe);
>>+ guard(xe_pm_runtime)(hwmon->xe);
>>
>> switch (type) {
>> case hwmon_temp:
>>@@ -1152,8 +1148,6 @@ xe_hwmon_read(struct device *dev, enum hwmon_sensor_types type, u32 attr,
>> break;
>> }
>>
>>- xe_pm_runtime_put(hwmon->xe);
>>-
>> return ret;
>> }
>>
>>@@ -1164,7 +1158,7 @@ xe_hwmon_write(struct device *dev, enum hwmon_sensor_types type, u32 attr,
>> struct xe_hwmon *hwmon = dev_get_drvdata(dev);
>> int ret;
>>
>>- xe_pm_runtime_get(hwmon->xe);
>>+ guard(xe_pm_runtime)(hwmon->xe);
>>
>> switch (type) {
>> case hwmon_power:
>>@@ -1178,8 +1172,6 @@ xe_hwmon_write(struct device *dev, enum hwmon_sensor_types type, u32 attr,
>> break;
>> }
>>
>>- xe_pm_runtime_put(hwmon->xe);
>>-
>> return ret;
>> }
>>
>>--
>>2.51.1
>>
^ permalink raw reply [flat|nested] 74+ messages in thread
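[Editorial note: the hwmon patch above keeps the explicit mutex_lock()/mutex_unlock() while converting only the runtime-PM reference to a guard, so the PM reference is released at function exit, after the unlock. If both resources were scope-based, C's cleanup attribute would still give the right nesting, since cleanups fire in reverse declaration order. A small standalone sketch of that ordering, with hypothetical `drop_pm`/`drop_lock` helpers:]

```c
#include <assert.h>
#include <string.h>

static char order[16];

static void push(char c)
{
	size_t n = strlen(order);
	order[n] = c;
	order[n + 1] = '\0';
}

static void drop_pm(int *t)   { (void)t; push('P'); } /* runtime-PM put */
static void drop_lock(int *t) { (void)t; push('L'); } /* mutex unlock   */

/* Acquire PM then lock, as in the hwmon code; cleanups run in reverse
 * declaration order, so the lock drops before the PM reference. */
static void hwmon_op_sketch(void)
{
	int pm   __attribute__((cleanup(drop_pm)))   = 0;
	push('p');
	int lock __attribute__((cleanup(drop_lock))) = 0;
	push('l');
	/* ... register access under lock with the domain awake ... */
}
```

The recorded order is acquire-pm, acquire-lock, release-lock, release-pm, matching the manual get/lock/unlock/put sequence the patch replaces.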
* [PATCH v2 27/30] drm/xe/sriov: Use scope-based runtime PM
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (25 preceding siblings ...)
2025-11-10 23:20 ` [PATCH v2 26/30] drm/xe/hwmon: Use scope-based runtime PM Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-13 18:09 ` Gustavo Sousa
2025-11-10 23:20 ` [PATCH v2 28/30] drm/xe/tests: " Matt Roper
` (7 subsequent siblings)
34 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based runtime power management in the SRIOV code for
consistency with other parts of the driver.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_pci_sriov.c | 3 +--
drivers/gpu/drm/xe/xe_sriov_pf_debugfs.c | 6 ++----
drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c | 6 ++----
drivers/gpu/drm/xe/xe_sriov_vf_ccs.c | 5 +----
drivers/gpu/drm/xe/xe_tile_sriov_pf_debugfs.c | 3 +--
5 files changed, 7 insertions(+), 16 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_pci_sriov.c b/drivers/gpu/drm/xe/xe_pci_sriov.c
index d0fcde66a774..4b16748fe2ed 100644
--- a/drivers/gpu/drm/xe/xe_pci_sriov.c
+++ b/drivers/gpu/drm/xe/xe_pci_sriov.c
@@ -212,12 +212,11 @@ int xe_pci_sriov_configure(struct pci_dev *pdev, int num_vfs)
if (num_vfs && pci_num_vf(pdev))
return -EBUSY;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
if (num_vfs > 0)
ret = pf_enable_vfs(xe, num_vfs);
else
ret = pf_disable_vfs(xe);
- xe_pm_runtime_put(xe);
return ret;
}
diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_debugfs.c b/drivers/gpu/drm/xe/xe_sriov_pf_debugfs.c
index a81aa05c5532..21eafe333cb5 100644
--- a/drivers/gpu/drm/xe/xe_sriov_pf_debugfs.c
+++ b/drivers/gpu/drm/xe/xe_sriov_pf_debugfs.c
@@ -69,9 +69,8 @@ static ssize_t from_file_write_to_xe_call(struct file *file, const char __user *
if (ret < 0)
return ret;
if (yes) {
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ret = call(xe);
- xe_pm_runtime_put(xe);
}
if (ret < 0)
return ret;
@@ -157,9 +156,8 @@ static ssize_t from_file_write_to_vf_call(struct file *file, const char __user *
if (ret < 0)
return ret;
if (yes) {
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ret = call(xe, vfid);
- xe_pm_runtime_put(xe);
}
if (ret < 0)
return ret;
diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
index c0b767ac735c..f0777976335c 100644
--- a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
+++ b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
@@ -394,9 +394,8 @@ static ssize_t xe_sriov_dev_attr_store(struct kobject *kobj, struct attribute *a
if (!vattr->store)
return -EPERM;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ret = xe_sriov_pf_wait_ready(xe) ?: vattr->store(xe, buf, count);
- xe_pm_runtime_put(xe);
return ret;
}
@@ -430,9 +429,8 @@ static ssize_t xe_sriov_vf_attr_store(struct kobject *kobj, struct attribute *at
if (!vattr->store)
return -EPERM;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ret = xe_sriov_pf_wait_ready(xe) ?: vattr->store(xe, vfid, buf, count);
- xe_pm_runtime_get(xe);
return ret;
}
diff --git a/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
index 797a4b866226..e1cdc46ad710 100644
--- a/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
+++ b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
@@ -463,8 +463,7 @@ void xe_sriov_vf_ccs_print(struct xe_device *xe, struct drm_printer *p)
if (!IS_VF_CCS_READY(xe))
return;
- xe_pm_runtime_get(xe);
-
+ guard(xe_pm_runtime)(xe);
for_each_ccs_rw_ctx(ctx_id) {
bb_pool = xe->sriov.vf.ccs.contexts[ctx_id].mem.ccs_bb_pool;
if (!bb_pool)
@@ -475,6 +474,4 @@ void xe_sriov_vf_ccs_print(struct xe_device *xe, struct drm_printer *p)
drm_suballoc_dump_debug_info(&bb_pool->base, p, xe_sa_manager_gpu_addr(bb_pool));
drm_puts(p, "\n");
}
-
- xe_pm_runtime_put(xe);
}
diff --git a/drivers/gpu/drm/xe/xe_tile_sriov_pf_debugfs.c b/drivers/gpu/drm/xe/xe_tile_sriov_pf_debugfs.c
index f3f478f14ff5..7f97db2f89bb 100644
--- a/drivers/gpu/drm/xe/xe_tile_sriov_pf_debugfs.c
+++ b/drivers/gpu/drm/xe/xe_tile_sriov_pf_debugfs.c
@@ -141,12 +141,11 @@ static int NAME##_set(void *data, u64 val) \
if (val > (TYPE)~0ull) \
return -EOVERFLOW; \
\
- xe_pm_runtime_get(xe); \
+ guard(xe_pm_runtime)(xe); \
err = xe_sriov_pf_wait_ready(xe) ?: \
xe_gt_sriov_pf_config_set_##CONFIG(gt, vfid, val); \
if (!err) \
xe_sriov_pf_provision_set_custom_mode(xe); \
- xe_pm_runtime_put(xe); \
\
return err; \
} \
--
2.51.1
^ permalink raw reply related [flat|nested] 74+ messages in thread
* Re: [PATCH v2 27/30] drm/xe/sriov: Use scope-based runtime PM
2025-11-10 23:20 ` [PATCH v2 27/30] drm/xe/sriov: " Matt Roper
@ 2025-11-13 18:09 ` Gustavo Sousa
0 siblings, 0 replies; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-13 18:09 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper
Quoting Matt Roper (2025-11-10 20:20:45-03:00)
>Use scope-based runtime power management in the SRIOV code for
>consistency with other parts of the driver.
>
>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
I think we can drop variable ret in functions:
* xe_pci_sriov_configure()
* xe_sriov_dev_attr_store()
* xe_sriov_vf_attr_store()
With that,
Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
>---
> drivers/gpu/drm/xe/xe_pci_sriov.c | 3 +--
> drivers/gpu/drm/xe/xe_sriov_pf_debugfs.c | 6 ++----
> drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c | 6 ++----
> drivers/gpu/drm/xe/xe_sriov_vf_ccs.c | 5 +----
> drivers/gpu/drm/xe/xe_tile_sriov_pf_debugfs.c | 3 +--
> 5 files changed, 7 insertions(+), 16 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/xe_pci_sriov.c b/drivers/gpu/drm/xe/xe_pci_sriov.c
>index d0fcde66a774..4b16748fe2ed 100644
>--- a/drivers/gpu/drm/xe/xe_pci_sriov.c
>+++ b/drivers/gpu/drm/xe/xe_pci_sriov.c
>@@ -212,12 +212,11 @@ int xe_pci_sriov_configure(struct pci_dev *pdev, int num_vfs)
> if (num_vfs && pci_num_vf(pdev))
> return -EBUSY;
>
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
> if (num_vfs > 0)
> ret = pf_enable_vfs(xe, num_vfs);
> else
> ret = pf_disable_vfs(xe);
>- xe_pm_runtime_put(xe);
>
> return ret;
> }
>diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_debugfs.c b/drivers/gpu/drm/xe/xe_sriov_pf_debugfs.c
>index a81aa05c5532..21eafe333cb5 100644
>--- a/drivers/gpu/drm/xe/xe_sriov_pf_debugfs.c
>+++ b/drivers/gpu/drm/xe/xe_sriov_pf_debugfs.c
>@@ -69,9 +69,8 @@ static ssize_t from_file_write_to_xe_call(struct file *file, const char __user *
> if (ret < 0)
> return ret;
> if (yes) {
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
> ret = call(xe);
>- xe_pm_runtime_put(xe);
> }
> if (ret < 0)
> return ret;
>@@ -157,9 +156,8 @@ static ssize_t from_file_write_to_vf_call(struct file *file, const char __user *
> if (ret < 0)
> return ret;
> if (yes) {
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
> ret = call(xe, vfid);
>- xe_pm_runtime_put(xe);
> }
> if (ret < 0)
> return ret;
>diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
>index c0b767ac735c..f0777976335c 100644
>--- a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
>+++ b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
>@@ -394,9 +394,8 @@ static ssize_t xe_sriov_dev_attr_store(struct kobject *kobj, struct attribute *a
> if (!vattr->store)
> return -EPERM;
>
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
> ret = xe_sriov_pf_wait_ready(xe) ?: vattr->store(xe, buf, count);
>- xe_pm_runtime_put(xe);
>
> return ret;
> }
>@@ -430,9 +429,8 @@ static ssize_t xe_sriov_vf_attr_store(struct kobject *kobj, struct attribute *at
> if (!vattr->store)
> return -EPERM;
>
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
> ret = xe_sriov_pf_wait_ready(xe) ?: vattr->store(xe, vfid, buf, count);
>- xe_pm_runtime_get(xe);
>
> return ret;
> }
>diff --git a/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
>index 797a4b866226..e1cdc46ad710 100644
>--- a/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
>+++ b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
>@@ -463,8 +463,7 @@ void xe_sriov_vf_ccs_print(struct xe_device *xe, struct drm_printer *p)
> if (!IS_VF_CCS_READY(xe))
> return;
>
>- xe_pm_runtime_get(xe);
>-
>+ guard(xe_pm_runtime)(xe);
> for_each_ccs_rw_ctx(ctx_id) {
> bb_pool = xe->sriov.vf.ccs.contexts[ctx_id].mem.ccs_bb_pool;
> if (!bb_pool)
>@@ -475,6 +474,4 @@ void xe_sriov_vf_ccs_print(struct xe_device *xe, struct drm_printer *p)
> drm_suballoc_dump_debug_info(&bb_pool->base, p, xe_sa_manager_gpu_addr(bb_pool));
> drm_puts(p, "\n");
> }
>-
>- xe_pm_runtime_put(xe);
> }
>diff --git a/drivers/gpu/drm/xe/xe_tile_sriov_pf_debugfs.c b/drivers/gpu/drm/xe/xe_tile_sriov_pf_debugfs.c
>index f3f478f14ff5..7f97db2f89bb 100644
>--- a/drivers/gpu/drm/xe/xe_tile_sriov_pf_debugfs.c
>+++ b/drivers/gpu/drm/xe/xe_tile_sriov_pf_debugfs.c
>@@ -141,12 +141,11 @@ static int NAME##_set(void *data, u64 val) \
> if (val > (TYPE)~0ull) \
> return -EOVERFLOW; \
> \
>- xe_pm_runtime_get(xe); \
>+ guard(xe_pm_runtime)(xe); \
> err = xe_sriov_pf_wait_ready(xe) ?: \
> xe_gt_sriov_pf_config_set_##CONFIG(gt, vfid, val); \
> if (!err) \
> xe_sriov_pf_provision_set_custom_mode(xe); \
>- xe_pm_runtime_put(xe); \
> \
> return err; \
> } \
>--
>2.51.1
>
^ permalink raw reply [flat|nested] 74+ messages in thread
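[Editorial note: several hunks in the SRIOV patch use the GNU C conditional with an omitted middle operand, e.g. `xe_sriov_pf_wait_ready(xe) ?: vattr->store(...)`: the first expression is evaluated once, and yielded if nonzero; otherwise the second is evaluated and yielded. A standalone demonstration with a hypothetical `store_attr()` helper:]

```c
#include <assert.h>

static int calls;

/* Stand-in for the attribute store callback. */
static int store_attr(void)
{
	calls++;
	return 5;
}

/* Mirrors the "wait_ready ?: store" shape: a nonzero (error) result
 * from the first step short-circuits past the store entirely. */
static int attr_store_sketch(int wait_ready_err)
{
	return wait_ready_err ?: store_attr();
}
```

Combined with a scope-based guard, this collapses the get/call/put sequence to a single return expression.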
* [PATCH v2 28/30] drm/xe/tests: Use scope-based runtime PM
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (26 preceding siblings ...)
2025-11-10 23:20 ` [PATCH v2 27/30] drm/xe/sriov: " Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-13 18:15 ` Gustavo Sousa
2025-11-10 23:20 ` [PATCH v2 29/30] drm/xe/sysfs: Use scope-based runtime power management Matt Roper
` (6 subsequent siblings)
34 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based handling of runtime PM in the kunit tests for
consistency with other parts of the driver.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/tests/xe_bo.c | 10 ++--------
drivers/gpu/drm/xe/tests/xe_dma_buf.c | 3 +--
drivers/gpu/drm/xe/tests/xe_migrate.c | 10 ++--------
drivers/gpu/drm/xe/tests/xe_mocs.c | 10 ++--------
4 files changed, 7 insertions(+), 26 deletions(-)
diff --git a/drivers/gpu/drm/xe/tests/xe_bo.c b/drivers/gpu/drm/xe/tests/xe_bo.c
index 2294cf89f3e1..2278e589a493 100644
--- a/drivers/gpu/drm/xe/tests/xe_bo.c
+++ b/drivers/gpu/drm/xe/tests/xe_bo.c
@@ -185,8 +185,7 @@ static int ccs_test_run_device(struct xe_device *xe)
return 0;
}
- xe_pm_runtime_get(xe);
-
+ guard(xe_pm_runtime)(xe);
for_each_tile(tile, xe, id) {
/* For igfx run only for primary tile */
if (!IS_DGFX(xe) && id > 0)
@@ -194,8 +193,6 @@ static int ccs_test_run_device(struct xe_device *xe)
ccs_test_run_tile(xe, tile, test);
}
- xe_pm_runtime_put(xe);
-
return 0;
}
@@ -356,13 +353,10 @@ static int evict_test_run_device(struct xe_device *xe)
return 0;
}
- xe_pm_runtime_get(xe);
-
+ guard(xe_pm_runtime)(xe);
for_each_tile(tile, xe, id)
evict_test_run_tile(xe, tile, test);
- xe_pm_runtime_put(xe);
-
return 0;
}
diff --git a/drivers/gpu/drm/xe/tests/xe_dma_buf.c b/drivers/gpu/drm/xe/tests/xe_dma_buf.c
index 5df98de5ba3c..954b6b911ea0 100644
--- a/drivers/gpu/drm/xe/tests/xe_dma_buf.c
+++ b/drivers/gpu/drm/xe/tests/xe_dma_buf.c
@@ -266,7 +266,7 @@ static int dma_buf_run_device(struct xe_device *xe)
const struct dma_buf_test_params *params;
struct kunit *test = kunit_get_current_test();
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
for (params = test_params; params->mem_mask; ++params) {
struct dma_buf_test_params p = *params;
@@ -274,7 +274,6 @@ static int dma_buf_run_device(struct xe_device *xe)
test->priv = &p;
xe_test_dmabuf_import_same_driver(xe);
}
- xe_pm_runtime_put(xe);
/* A non-zero return would halt iteration over driver devices */
return 0;
diff --git a/drivers/gpu/drm/xe/tests/xe_migrate.c b/drivers/gpu/drm/xe/tests/xe_migrate.c
index 5904d658d1f2..34e2f0f4631f 100644
--- a/drivers/gpu/drm/xe/tests/xe_migrate.c
+++ b/drivers/gpu/drm/xe/tests/xe_migrate.c
@@ -344,8 +344,7 @@ static int migrate_test_run_device(struct xe_device *xe)
struct xe_tile *tile;
int id;
- xe_pm_runtime_get(xe);
-
+ guard(xe_pm_runtime)(xe);
for_each_tile(tile, xe, id) {
struct xe_migrate *m = tile->migrate;
struct drm_exec *exec = XE_VALIDATION_OPT_OUT;
@@ -356,8 +355,6 @@ static int migrate_test_run_device(struct xe_device *xe)
xe_vm_unlock(m->q->vm);
}
- xe_pm_runtime_put(xe);
-
return 0;
}
@@ -759,13 +756,10 @@ static int validate_ccs_test_run_device(struct xe_device *xe)
return 0;
}
- xe_pm_runtime_get(xe);
-
+ guard(xe_pm_runtime)(xe);
for_each_tile(tile, xe, id)
validate_ccs_test_run_tile(xe, tile, test);
- xe_pm_runtime_put(xe);
-
return 0;
}
diff --git a/drivers/gpu/drm/xe/tests/xe_mocs.c b/drivers/gpu/drm/xe/tests/xe_mocs.c
index 53a0c9c49f85..28cebbe4baed 100644
--- a/drivers/gpu/drm/xe/tests/xe_mocs.c
+++ b/drivers/gpu/drm/xe/tests/xe_mocs.c
@@ -115,8 +115,7 @@ static int mocs_kernel_test_run_device(struct xe_device *xe)
unsigned int flags;
int id;
- xe_pm_runtime_get(xe);
-
+ guard(xe_pm_runtime)(xe);
for_each_gt(gt, xe, id) {
flags = live_mocs_init(&mocs, gt);
if (flags & HAS_GLOBAL_MOCS)
@@ -125,8 +124,6 @@ static int mocs_kernel_test_run_device(struct xe_device *xe)
read_l3cc_table(gt, &mocs.table);
}
- xe_pm_runtime_put(xe);
-
return 0;
}
@@ -150,8 +147,7 @@ static int mocs_reset_test_run_device(struct xe_device *xe)
int id;
struct kunit *test = kunit_get_current_test();
- xe_pm_runtime_get(xe);
-
+ guard(xe_pm_runtime)(xe);
for_each_gt(gt, xe, id) {
flags = live_mocs_init(&mocs, gt);
kunit_info(test, "mocs_reset_test before reset\n");
@@ -169,8 +165,6 @@ static int mocs_reset_test_run_device(struct xe_device *xe)
read_l3cc_table(gt, &mocs.table);
}
- xe_pm_runtime_put(xe);
-
return 0;
}
--
2.51.1
^ permalink raw reply related [flat|nested] 74+ messages in thread
* Re: [PATCH v2 28/30] drm/xe/tests: Use scope-based runtime PM
2025-11-10 23:20 ` [PATCH v2 28/30] drm/xe/tests: " Matt Roper
@ 2025-11-13 18:15 ` Gustavo Sousa
0 siblings, 0 replies; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-13 18:15 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper
Quoting Matt Roper (2025-11-10 20:20:46-03:00)
>Use scope-based handling of runtime PM in the kunit tests for
>consistency with other parts of the driver.
>
>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
>---
> drivers/gpu/drm/xe/tests/xe_bo.c | 10 ++--------
> drivers/gpu/drm/xe/tests/xe_dma_buf.c | 3 +--
> drivers/gpu/drm/xe/tests/xe_migrate.c | 10 ++--------
> drivers/gpu/drm/xe/tests/xe_mocs.c | 10 ++--------
> 4 files changed, 7 insertions(+), 26 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/tests/xe_bo.c b/drivers/gpu/drm/xe/tests/xe_bo.c
>index 2294cf89f3e1..2278e589a493 100644
>--- a/drivers/gpu/drm/xe/tests/xe_bo.c
>+++ b/drivers/gpu/drm/xe/tests/xe_bo.c
>@@ -185,8 +185,7 @@ static int ccs_test_run_device(struct xe_device *xe)
> return 0;
> }
>
>- xe_pm_runtime_get(xe);
>-
>+ guard(xe_pm_runtime)(xe);
> for_each_tile(tile, xe, id) {
> /* For igfx run only for primary tile */
> if (!IS_DGFX(xe) && id > 0)
>@@ -194,8 +193,6 @@ static int ccs_test_run_device(struct xe_device *xe)
> ccs_test_run_tile(xe, tile, test);
> }
>
>- xe_pm_runtime_put(xe);
>-
> return 0;
> }
>
>@@ -356,13 +353,10 @@ static int evict_test_run_device(struct xe_device *xe)
> return 0;
> }
>
>- xe_pm_runtime_get(xe);
>-
>+ guard(xe_pm_runtime)(xe);
> for_each_tile(tile, xe, id)
> evict_test_run_tile(xe, tile, test);
>
>- xe_pm_runtime_put(xe);
>-
> return 0;
> }
>
>diff --git a/drivers/gpu/drm/xe/tests/xe_dma_buf.c b/drivers/gpu/drm/xe/tests/xe_dma_buf.c
>index 5df98de5ba3c..954b6b911ea0 100644
>--- a/drivers/gpu/drm/xe/tests/xe_dma_buf.c
>+++ b/drivers/gpu/drm/xe/tests/xe_dma_buf.c
>@@ -266,7 +266,7 @@ static int dma_buf_run_device(struct xe_device *xe)
> const struct dma_buf_test_params *params;
> struct kunit *test = kunit_get_current_test();
>
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
> for (params = test_params; params->mem_mask; ++params) {
> struct dma_buf_test_params p = *params;
>
>@@ -274,7 +274,6 @@ static int dma_buf_run_device(struct xe_device *xe)
> test->priv = &p;
> xe_test_dmabuf_import_same_driver(xe);
> }
>- xe_pm_runtime_put(xe);
>
> /* A non-zero return would halt iteration over driver devices */
> return 0;
>diff --git a/drivers/gpu/drm/xe/tests/xe_migrate.c b/drivers/gpu/drm/xe/tests/xe_migrate.c
>index 5904d658d1f2..34e2f0f4631f 100644
>--- a/drivers/gpu/drm/xe/tests/xe_migrate.c
>+++ b/drivers/gpu/drm/xe/tests/xe_migrate.c
>@@ -344,8 +344,7 @@ static int migrate_test_run_device(struct xe_device *xe)
> struct xe_tile *tile;
> int id;
>
>- xe_pm_runtime_get(xe);
>-
>+ guard(xe_pm_runtime)(xe);
> for_each_tile(tile, xe, id) {
> struct xe_migrate *m = tile->migrate;
> struct drm_exec *exec = XE_VALIDATION_OPT_OUT;
>@@ -356,8 +355,6 @@ static int migrate_test_run_device(struct xe_device *xe)
> xe_vm_unlock(m->q->vm);
> }
>
>- xe_pm_runtime_put(xe);
>-
> return 0;
> }
>
>@@ -759,13 +756,10 @@ static int validate_ccs_test_run_device(struct xe_device *xe)
> return 0;
> }
>
>- xe_pm_runtime_get(xe);
>-
>+ guard(xe_pm_runtime)(xe);
> for_each_tile(tile, xe, id)
> validate_ccs_test_run_tile(xe, tile, test);
>
>- xe_pm_runtime_put(xe);
>-
> return 0;
> }
>
>diff --git a/drivers/gpu/drm/xe/tests/xe_mocs.c b/drivers/gpu/drm/xe/tests/xe_mocs.c
>index 53a0c9c49f85..28cebbe4baed 100644
>--- a/drivers/gpu/drm/xe/tests/xe_mocs.c
>+++ b/drivers/gpu/drm/xe/tests/xe_mocs.c
>@@ -115,8 +115,7 @@ static int mocs_kernel_test_run_device(struct xe_device *xe)
> unsigned int flags;
> int id;
>
>- xe_pm_runtime_get(xe);
>-
>+ guard(xe_pm_runtime)(xe);
> for_each_gt(gt, xe, id) {
> flags = live_mocs_init(&mocs, gt);
> if (flags & HAS_GLOBAL_MOCS)
>@@ -125,8 +124,6 @@ static int mocs_kernel_test_run_device(struct xe_device *xe)
> read_l3cc_table(gt, &mocs.table);
> }
>
>- xe_pm_runtime_put(xe);
>-
> return 0;
> }
>
>@@ -150,8 +147,7 @@ static int mocs_reset_test_run_device(struct xe_device *xe)
> int id;
> struct kunit *test = kunit_get_current_test();
>
>- xe_pm_runtime_get(xe);
>-
>+ guard(xe_pm_runtime)(xe);
> for_each_gt(gt, xe, id) {
> flags = live_mocs_init(&mocs, gt);
> kunit_info(test, "mocs_reset_test before reset\n");
>@@ -169,8 +165,6 @@ static int mocs_reset_test_run_device(struct xe_device *xe)
> read_l3cc_table(gt, &mocs.table);
> }
>
>- xe_pm_runtime_put(xe);
>-
> return 0;
> }
>
>--
>2.51.1
>
^ permalink raw reply [flat|nested] 74+ messages in thread
* [PATCH v2 29/30] drm/xe/sysfs: Use scope-based runtime power management
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (27 preceding siblings ...)
2025-11-10 23:20 ` [PATCH v2 28/30] drm/xe/tests: " Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-13 18:25 ` Gustavo Sousa
2025-11-10 23:20 ` [PATCH v2 30/30] drm/xe/debugfs: Use scope-based runtime PM Matt Roper
` (5 subsequent siblings)
34 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Switch sysfs to use scope-based runtime power management to slightly
simplify the code.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_device_sysfs.c | 33 ++++++++-----------
drivers/gpu/drm/xe/xe_gt_freq.c | 27 +++++----------
drivers/gpu/drm/xe/xe_gt_throttle.c | 3 +-
drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c | 6 ++--
4 files changed, 25 insertions(+), 44 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_device_sysfs.c b/drivers/gpu/drm/xe/xe_device_sysfs.c
index ec9c06b06fb5..a73e0e957cb0 100644
--- a/drivers/gpu/drm/xe/xe_device_sysfs.c
+++ b/drivers/gpu/drm/xe/xe_device_sysfs.c
@@ -57,9 +57,8 @@ vram_d3cold_threshold_store(struct device *dev, struct device_attribute *attr,
drm_dbg(&xe->drm, "vram_d3cold_threshold: %u\n", vram_d3cold_threshold);
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ret = xe_pm_set_vram_threshold(xe, vram_d3cold_threshold);
- xe_pm_runtime_put(xe);
return ret ?: count;
}
@@ -84,33 +83,31 @@ lb_fan_control_version_show(struct device *dev, struct device_attribute *attr, c
u16 major = 0, minor = 0, hotfix = 0, build = 0;
int ret;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ret = xe_pcode_read(root, PCODE_MBOX(PCODE_LATE_BINDING, GET_CAPABILITY_STATUS, 0),
&cap, NULL);
if (ret)
- goto out;
+ return ret;
if (REG_FIELD_GET(V1_FAN_PROVISIONED, cap)) {
ret = xe_pcode_read(root, PCODE_MBOX(PCODE_LATE_BINDING, GET_VERSION_LOW, 0),
&ver_low, NULL);
if (ret)
- goto out;
+ return ret;
ret = xe_pcode_read(root, PCODE_MBOX(PCODE_LATE_BINDING, GET_VERSION_HIGH, 0),
&ver_high, NULL);
if (ret)
- goto out;
+ return ret;
major = REG_FIELD_GET(MAJOR_VERSION_MASK, ver_low);
minor = REG_FIELD_GET(MINOR_VERSION_MASK, ver_low);
hotfix = REG_FIELD_GET(HOTFIX_VERSION_MASK, ver_high);
build = REG_FIELD_GET(BUILD_VERSION_MASK, ver_high);
}
-out:
- xe_pm_runtime_put(xe);
- return ret ?: sysfs_emit(buf, "%u.%u.%u.%u\n", major, minor, hotfix, build);
+ return sysfs_emit(buf, "%u.%u.%u.%u\n", major, minor, hotfix, build);
}
static DEVICE_ATTR_ADMIN_RO(lb_fan_control_version);
@@ -123,33 +120,31 @@ lb_voltage_regulator_version_show(struct device *dev, struct device_attribute *a
u16 major = 0, minor = 0, hotfix = 0, build = 0;
int ret;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ret = xe_pcode_read(root, PCODE_MBOX(PCODE_LATE_BINDING, GET_CAPABILITY_STATUS, 0),
&cap, NULL);
if (ret)
- goto out;
+ return ret;
if (REG_FIELD_GET(VR_PARAMS_PROVISIONED, cap)) {
ret = xe_pcode_read(root, PCODE_MBOX(PCODE_LATE_BINDING, GET_VERSION_LOW, 0),
&ver_low, NULL);
if (ret)
- goto out;
+ return ret;
ret = xe_pcode_read(root, PCODE_MBOX(PCODE_LATE_BINDING, GET_VERSION_HIGH, 0),
&ver_high, NULL);
if (ret)
- goto out;
+ return ret;
major = REG_FIELD_GET(MAJOR_VERSION_MASK, ver_low);
minor = REG_FIELD_GET(MINOR_VERSION_MASK, ver_low);
hotfix = REG_FIELD_GET(HOTFIX_VERSION_MASK, ver_high);
build = REG_FIELD_GET(BUILD_VERSION_MASK, ver_high);
}
-out:
- xe_pm_runtime_put(xe);
- return ret ?: sysfs_emit(buf, "%u.%u.%u.%u\n", major, minor, hotfix, build);
+ return sysfs_emit(buf, "%u.%u.%u.%u\n", major, minor, hotfix, build);
}
static DEVICE_ATTR_ADMIN_RO(lb_voltage_regulator_version);
@@ -233,9 +228,8 @@ auto_link_downgrade_capable_show(struct device *dev, struct device_attribute *at
struct xe_device *xe = pdev_to_xe_device(pdev);
u32 cap, val;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
val = xe_mmio_read32(xe_root_tile_mmio(xe), BMG_PCIE_CAP);
- xe_pm_runtime_put(xe);
cap = REG_FIELD_GET(LINK_DOWNGRADE, val);
return sysfs_emit(buf, "%u\n", cap == DOWNGRADE_CAPABLE);
@@ -251,11 +245,10 @@ auto_link_downgrade_status_show(struct device *dev, struct device_attribute *att
u32 val = 0;
int ret;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ret = xe_pcode_read(xe_device_get_root_tile(xe),
PCODE_MBOX(DGFX_PCODE_STATUS, DGFX_GET_INIT_STATUS, 0),
&val, NULL);
- xe_pm_runtime_put(xe);
return ret ?: sysfs_emit(buf, "%u\n", REG_FIELD_GET(DGFX_LINK_DOWNGRADE_STATUS, val));
}
diff --git a/drivers/gpu/drm/xe/xe_gt_freq.c b/drivers/gpu/drm/xe/xe_gt_freq.c
index 849ea6c86e8e..6284a4daf00a 100644
--- a/drivers/gpu/drm/xe/xe_gt_freq.c
+++ b/drivers/gpu/drm/xe/xe_gt_freq.c
@@ -70,9 +70,8 @@ static ssize_t act_freq_show(struct kobject *kobj,
struct xe_guc_pc *pc = dev_to_pc(dev);
u32 freq;
- xe_pm_runtime_get(dev_to_xe(dev));
+ guard(xe_pm_runtime)(dev_to_xe(dev));
freq = xe_guc_pc_get_act_freq(pc);
- xe_pm_runtime_put(dev_to_xe(dev));
return sysfs_emit(buf, "%d\n", freq);
}
@@ -86,9 +85,8 @@ static ssize_t cur_freq_show(struct kobject *kobj,
u32 freq;
ssize_t ret;
- xe_pm_runtime_get(dev_to_xe(dev));
+ guard(xe_pm_runtime)(dev_to_xe(dev));
ret = xe_guc_pc_get_cur_freq(pc, &freq);
- xe_pm_runtime_put(dev_to_xe(dev));
if (ret)
return ret;
@@ -113,9 +111,8 @@ static ssize_t rpe_freq_show(struct kobject *kobj,
struct xe_guc_pc *pc = dev_to_pc(dev);
u32 freq;
- xe_pm_runtime_get(dev_to_xe(dev));
+ guard(xe_pm_runtime)(dev_to_xe(dev));
freq = xe_guc_pc_get_rpe_freq(pc);
- xe_pm_runtime_put(dev_to_xe(dev));
return sysfs_emit(buf, "%d\n", freq);
}
@@ -128,9 +125,8 @@ static ssize_t rpa_freq_show(struct kobject *kobj,
struct xe_guc_pc *pc = dev_to_pc(dev);
u32 freq;
- xe_pm_runtime_get(dev_to_xe(dev));
+ guard(xe_pm_runtime)(dev_to_xe(dev));
freq = xe_guc_pc_get_rpa_freq(pc);
- xe_pm_runtime_put(dev_to_xe(dev));
return sysfs_emit(buf, "%d\n", freq);
}
@@ -154,9 +150,8 @@ static ssize_t min_freq_show(struct kobject *kobj,
u32 freq;
ssize_t ret;
- xe_pm_runtime_get(dev_to_xe(dev));
+ guard(xe_pm_runtime)(dev_to_xe(dev));
ret = xe_guc_pc_get_min_freq(pc, &freq);
- xe_pm_runtime_put(dev_to_xe(dev));
if (ret)
return ret;
@@ -175,9 +170,8 @@ static ssize_t min_freq_store(struct kobject *kobj,
if (ret)
return ret;
- xe_pm_runtime_get(dev_to_xe(dev));
+ guard(xe_pm_runtime)(dev_to_xe(dev));
ret = xe_guc_pc_set_min_freq(pc, freq);
- xe_pm_runtime_put(dev_to_xe(dev));
if (ret)
return ret;
@@ -193,9 +187,8 @@ static ssize_t max_freq_show(struct kobject *kobj,
u32 freq;
ssize_t ret;
- xe_pm_runtime_get(dev_to_xe(dev));
+ guard(xe_pm_runtime)(dev_to_xe(dev));
ret = xe_guc_pc_get_max_freq(pc, &freq);
- xe_pm_runtime_put(dev_to_xe(dev));
if (ret)
return ret;
@@ -214,9 +207,8 @@ static ssize_t max_freq_store(struct kobject *kobj,
if (ret)
return ret;
- xe_pm_runtime_get(dev_to_xe(dev));
+ guard(xe_pm_runtime)(dev_to_xe(dev));
ret = xe_guc_pc_set_max_freq(pc, freq);
- xe_pm_runtime_put(dev_to_xe(dev));
if (ret)
return ret;
@@ -243,9 +235,8 @@ static ssize_t power_profile_store(struct kobject *kobj,
struct xe_guc_pc *pc = dev_to_pc(dev);
int err;
- xe_pm_runtime_get(dev_to_xe(dev));
+ guard(xe_pm_runtime)(dev_to_xe(dev));
err = xe_guc_pc_set_power_profile(pc, buff);
- xe_pm_runtime_put(dev_to_xe(dev));
return err ?: count;
}
diff --git a/drivers/gpu/drm/xe/xe_gt_throttle.c b/drivers/gpu/drm/xe/xe_gt_throttle.c
index 82c5fbcdfbe3..0ee288389e71 100644
--- a/drivers/gpu/drm/xe/xe_gt_throttle.c
+++ b/drivers/gpu/drm/xe/xe_gt_throttle.c
@@ -97,9 +97,8 @@ u32 xe_gt_throttle_get_limit_reasons(struct xe_gt *gt)
else
mask = GT0_PERF_LIMIT_REASONS_MASK;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
val = xe_mmio_read32(&gt->mmio, reg) & mask;
- xe_pm_runtime_put(xe);
return val;
}
diff --git a/drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c b/drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c
index 640950172088..1d3511d0d025 100644
--- a/drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c
+++ b/drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c
@@ -47,9 +47,8 @@ static ssize_t xe_hw_engine_class_sysfs_attr_show(struct kobject *kobj,
kattr = container_of(attr, struct kobj_attribute, attr);
if (kattr->show) {
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ret = kattr->show(kobj, kattr, buf);
- xe_pm_runtime_put(xe);
}
return ret;
@@ -66,9 +65,8 @@ static ssize_t xe_hw_engine_class_sysfs_attr_store(struct kobject *kobj,
kattr = container_of(attr, struct kobj_attribute, attr);
if (kattr->store) {
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ret = kattr->store(kobj, kattr, buf, count);
- xe_pm_runtime_put(xe);
}
return ret;
--
2.51.1
^ permalink raw reply related [flat|nested] 74+ messages in thread

* Re: [PATCH v2 29/30] drm/xe/sysfs: Use scope-based runtime power management
2025-11-10 23:20 ` [PATCH v2 29/30] drm/xe/sysfs: Use scope-based runtime power management Matt Roper
@ 2025-11-13 18:25 ` Gustavo Sousa
0 siblings, 0 replies; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-13 18:25 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper
Quoting Matt Roper (2025-11-10 20:20:47-03:00)
>Switch sysfs to use scope-based runtime power management to slightly
>simplify the code.
>
>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
>---
> drivers/gpu/drm/xe/xe_device_sysfs.c | 33 ++++++++-----------
> drivers/gpu/drm/xe/xe_gt_freq.c | 27 +++++----------
> drivers/gpu/drm/xe/xe_gt_throttle.c | 3 +-
> drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c | 6 ++--
> 4 files changed, 25 insertions(+), 44 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/xe_device_sysfs.c b/drivers/gpu/drm/xe/xe_device_sysfs.c
>index ec9c06b06fb5..a73e0e957cb0 100644
>--- a/drivers/gpu/drm/xe/xe_device_sysfs.c
>+++ b/drivers/gpu/drm/xe/xe_device_sysfs.c
>@@ -57,9 +57,8 @@ vram_d3cold_threshold_store(struct device *dev, struct device_attribute *attr,
>
> drm_dbg(&xe->drm, "vram_d3cold_threshold: %u\n", vram_d3cold_threshold);
>
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
> ret = xe_pm_set_vram_threshold(xe, vram_d3cold_threshold);
>- xe_pm_runtime_put(xe);
>
> return ret ?: count;
> }
>@@ -84,33 +83,31 @@ lb_fan_control_version_show(struct device *dev, struct device_attribute *attr, c
> u16 major = 0, minor = 0, hotfix = 0, build = 0;
> int ret;
>
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
>
> ret = xe_pcode_read(root, PCODE_MBOX(PCODE_LATE_BINDING, GET_CAPABILITY_STATUS, 0),
> &cap, NULL);
> if (ret)
>- goto out;
>+ return ret;
>
> if (REG_FIELD_GET(V1_FAN_PROVISIONED, cap)) {
> ret = xe_pcode_read(root, PCODE_MBOX(PCODE_LATE_BINDING, GET_VERSION_LOW, 0),
> &ver_low, NULL);
> if (ret)
>- goto out;
>+ return ret;
>
> ret = xe_pcode_read(root, PCODE_MBOX(PCODE_LATE_BINDING, GET_VERSION_HIGH, 0),
> &ver_high, NULL);
> if (ret)
>- goto out;
>+ return ret;
>
> major = REG_FIELD_GET(MAJOR_VERSION_MASK, ver_low);
> minor = REG_FIELD_GET(MINOR_VERSION_MASK, ver_low);
> hotfix = REG_FIELD_GET(HOTFIX_VERSION_MASK, ver_high);
> build = REG_FIELD_GET(BUILD_VERSION_MASK, ver_high);
> }
>-out:
>- xe_pm_runtime_put(xe);
>
>- return ret ?: sysfs_emit(buf, "%u.%u.%u.%u\n", major, minor, hotfix, build);
>+ return sysfs_emit(buf, "%u.%u.%u.%u\n", major, minor, hotfix, build);
> }
> static DEVICE_ATTR_ADMIN_RO(lb_fan_control_version);
>
>@@ -123,33 +120,31 @@ lb_voltage_regulator_version_show(struct device *dev, struct device_attribute *a
> u16 major = 0, minor = 0, hotfix = 0, build = 0;
> int ret;
>
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
>
> ret = xe_pcode_read(root, PCODE_MBOX(PCODE_LATE_BINDING, GET_CAPABILITY_STATUS, 0),
> &cap, NULL);
> if (ret)
>- goto out;
>+ return ret;
>
> if (REG_FIELD_GET(VR_PARAMS_PROVISIONED, cap)) {
> ret = xe_pcode_read(root, PCODE_MBOX(PCODE_LATE_BINDING, GET_VERSION_LOW, 0),
> &ver_low, NULL);
> if (ret)
>- goto out;
>+ return ret;
>
> ret = xe_pcode_read(root, PCODE_MBOX(PCODE_LATE_BINDING, GET_VERSION_HIGH, 0),
> &ver_high, NULL);
> if (ret)
>- goto out;
>+ return ret;
>
> major = REG_FIELD_GET(MAJOR_VERSION_MASK, ver_low);
> minor = REG_FIELD_GET(MINOR_VERSION_MASK, ver_low);
> hotfix = REG_FIELD_GET(HOTFIX_VERSION_MASK, ver_high);
> build = REG_FIELD_GET(BUILD_VERSION_MASK, ver_high);
> }
>-out:
>- xe_pm_runtime_put(xe);
>
>- return ret ?: sysfs_emit(buf, "%u.%u.%u.%u\n", major, minor, hotfix, build);
>+ return sysfs_emit(buf, "%u.%u.%u.%u\n", major, minor, hotfix, build);
> }
> static DEVICE_ATTR_ADMIN_RO(lb_voltage_regulator_version);
>
>@@ -233,9 +228,8 @@ auto_link_downgrade_capable_show(struct device *dev, struct device_attribute *at
> struct xe_device *xe = pdev_to_xe_device(pdev);
> u32 cap, val;
>
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
> val = xe_mmio_read32(xe_root_tile_mmio(xe), BMG_PCIE_CAP);
>- xe_pm_runtime_put(xe);
>
> cap = REG_FIELD_GET(LINK_DOWNGRADE, val);
> return sysfs_emit(buf, "%u\n", cap == DOWNGRADE_CAPABLE);
>@@ -251,11 +245,10 @@ auto_link_downgrade_status_show(struct device *dev, struct device_attribute *att
> u32 val = 0;
> int ret;
>
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
> ret = xe_pcode_read(xe_device_get_root_tile(xe),
> PCODE_MBOX(DGFX_PCODE_STATUS, DGFX_GET_INIT_STATUS, 0),
> &val, NULL);
>- xe_pm_runtime_put(xe);
>
> return ret ?: sysfs_emit(buf, "%u\n", REG_FIELD_GET(DGFX_LINK_DOWNGRADE_STATUS, val));
> }
>diff --git a/drivers/gpu/drm/xe/xe_gt_freq.c b/drivers/gpu/drm/xe/xe_gt_freq.c
>index 849ea6c86e8e..6284a4daf00a 100644
>--- a/drivers/gpu/drm/xe/xe_gt_freq.c
>+++ b/drivers/gpu/drm/xe/xe_gt_freq.c
>@@ -70,9 +70,8 @@ static ssize_t act_freq_show(struct kobject *kobj,
> struct xe_guc_pc *pc = dev_to_pc(dev);
> u32 freq;
>
>- xe_pm_runtime_get(dev_to_xe(dev));
>+ guard(xe_pm_runtime)(dev_to_xe(dev));
> freq = xe_guc_pc_get_act_freq(pc);
>- xe_pm_runtime_put(dev_to_xe(dev));
>
> return sysfs_emit(buf, "%d\n", freq);
> }
>@@ -86,9 +85,8 @@ static ssize_t cur_freq_show(struct kobject *kobj,
> u32 freq;
> ssize_t ret;
>
>- xe_pm_runtime_get(dev_to_xe(dev));
>+ guard(xe_pm_runtime)(dev_to_xe(dev));
> ret = xe_guc_pc_get_cur_freq(pc, &freq);
>- xe_pm_runtime_put(dev_to_xe(dev));
> if (ret)
> return ret;
>
>@@ -113,9 +111,8 @@ static ssize_t rpe_freq_show(struct kobject *kobj,
> struct xe_guc_pc *pc = dev_to_pc(dev);
> u32 freq;
>
>- xe_pm_runtime_get(dev_to_xe(dev));
>+ guard(xe_pm_runtime)(dev_to_xe(dev));
> freq = xe_guc_pc_get_rpe_freq(pc);
>- xe_pm_runtime_put(dev_to_xe(dev));
>
> return sysfs_emit(buf, "%d\n", freq);
> }
>@@ -128,9 +125,8 @@ static ssize_t rpa_freq_show(struct kobject *kobj,
> struct xe_guc_pc *pc = dev_to_pc(dev);
> u32 freq;
>
>- xe_pm_runtime_get(dev_to_xe(dev));
>+ guard(xe_pm_runtime)(dev_to_xe(dev));
> freq = xe_guc_pc_get_rpa_freq(pc);
>- xe_pm_runtime_put(dev_to_xe(dev));
>
> return sysfs_emit(buf, "%d\n", freq);
> }
>@@ -154,9 +150,8 @@ static ssize_t min_freq_show(struct kobject *kobj,
> u32 freq;
> ssize_t ret;
>
>- xe_pm_runtime_get(dev_to_xe(dev));
>+ guard(xe_pm_runtime)(dev_to_xe(dev));
> ret = xe_guc_pc_get_min_freq(pc, &freq);
>- xe_pm_runtime_put(dev_to_xe(dev));
> if (ret)
> return ret;
>
>@@ -175,9 +170,8 @@ static ssize_t min_freq_store(struct kobject *kobj,
> if (ret)
> return ret;
>
>- xe_pm_runtime_get(dev_to_xe(dev));
>+ guard(xe_pm_runtime)(dev_to_xe(dev));
> ret = xe_guc_pc_set_min_freq(pc, freq);
>- xe_pm_runtime_put(dev_to_xe(dev));
> if (ret)
> return ret;
>
>@@ -193,9 +187,8 @@ static ssize_t max_freq_show(struct kobject *kobj,
> u32 freq;
> ssize_t ret;
>
>- xe_pm_runtime_get(dev_to_xe(dev));
>+ guard(xe_pm_runtime)(dev_to_xe(dev));
> ret = xe_guc_pc_get_max_freq(pc, &freq);
>- xe_pm_runtime_put(dev_to_xe(dev));
> if (ret)
> return ret;
>
>@@ -214,9 +207,8 @@ static ssize_t max_freq_store(struct kobject *kobj,
> if (ret)
> return ret;
>
>- xe_pm_runtime_get(dev_to_xe(dev));
>+ guard(xe_pm_runtime)(dev_to_xe(dev));
> ret = xe_guc_pc_set_max_freq(pc, freq);
>- xe_pm_runtime_put(dev_to_xe(dev));
> if (ret)
> return ret;
>
>@@ -243,9 +235,8 @@ static ssize_t power_profile_store(struct kobject *kobj,
> struct xe_guc_pc *pc = dev_to_pc(dev);
> int err;
>
>- xe_pm_runtime_get(dev_to_xe(dev));
>+ guard(xe_pm_runtime)(dev_to_xe(dev));
> err = xe_guc_pc_set_power_profile(pc, buff);
>- xe_pm_runtime_put(dev_to_xe(dev));
>
> return err ?: count;
> }
>diff --git a/drivers/gpu/drm/xe/xe_gt_throttle.c b/drivers/gpu/drm/xe/xe_gt_throttle.c
>index 82c5fbcdfbe3..0ee288389e71 100644
>--- a/drivers/gpu/drm/xe/xe_gt_throttle.c
>+++ b/drivers/gpu/drm/xe/xe_gt_throttle.c
>@@ -97,9 +97,8 @@ u32 xe_gt_throttle_get_limit_reasons(struct xe_gt *gt)
> else
> mask = GT0_PERF_LIMIT_REASONS_MASK;
>
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
> val = xe_mmio_read32(&gt->mmio, reg) & mask;
>- xe_pm_runtime_put(xe);
>
> return val;
We can drop the variable val and return directly.
> }
>diff --git a/drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c b/drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c
>index 640950172088..1d3511d0d025 100644
>--- a/drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c
>+++ b/drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c
>@@ -47,9 +47,8 @@ static ssize_t xe_hw_engine_class_sysfs_attr_show(struct kobject *kobj,
>
> kattr = container_of(attr, struct kobj_attribute, attr);
> if (kattr->show) {
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
> ret = kattr->show(kobj, kattr, buf);
>- xe_pm_runtime_put(xe);
> }
>
> return ret;
I think we can drop the variable ret by returning directly in the
two places.
>@@ -66,9 +65,8 @@ static ssize_t xe_hw_engine_class_sysfs_attr_store(struct kobject *kobj,
>
> kattr = container_of(attr, struct kobj_attribute, attr);
> if (kattr->store) {
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
> ret = kattr->store(kobj, kattr, buf, count);
>- xe_pm_runtime_put(xe);
> }
>
> return ret;
Here as well.
Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
>--
>2.51.1
>
^ permalink raw reply [flat|nested] 74+ messages in thread
* [PATCH v2 30/30] drm/xe/debugfs: Use scope-based runtime PM
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (28 preceding siblings ...)
2025-11-10 23:20 ` [PATCH v2 29/30] drm/xe/sysfs: Use scope-based runtime power management Matt Roper
@ 2025-11-10 23:20 ` Matt Roper
2025-11-13 18:30 ` Gustavo Sousa
2025-11-11 0:20 ` ✓ CI.KUnit: success for Scope-based forcewake and runtime PM (rev3) Patchwork
` (4 subsequent siblings)
34 siblings, 1 reply; 74+ messages in thread
From: Matt Roper @ 2025-11-10 23:20 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Switch the debugfs code to use scope-based runtime PM where possible,
for consistency with other parts of the driver.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_debugfs.c | 16 +++++-----------
drivers/gpu/drm/xe/xe_gsc_debugfs.c | 3 +--
drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c | 12 ++++--------
drivers/gpu/drm/xe/xe_guc_debugfs.c | 3 +--
drivers/gpu/drm/xe/xe_huc_debugfs.c | 3 +--
drivers/gpu/drm/xe/xe_tile_debugfs.c | 3 +--
6 files changed, 13 insertions(+), 27 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_debugfs.c b/drivers/gpu/drm/xe/xe_debugfs.c
index e91da9589c5f..1d5a2a43a9d7 100644
--- a/drivers/gpu/drm/xe/xe_debugfs.c
+++ b/drivers/gpu/drm/xe/xe_debugfs.c
@@ -68,7 +68,7 @@ static int info(struct seq_file *m, void *data)
struct xe_gt *gt;
u8 id;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
drm_printf(&p, "graphics_verx100 %d\n", xe->info.graphics_verx100);
drm_printf(&p, "media_verx100 %d\n", xe->info.media_verx100);
@@ -95,7 +95,6 @@ static int info(struct seq_file *m, void *data)
gt->info.engine_mask);
}
- xe_pm_runtime_put(xe);
return 0;
}
@@ -110,9 +109,8 @@ static int sriov_info(struct seq_file *m, void *data)
static int workarounds(struct xe_device *xe, struct drm_printer *p)
{
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
xe_wa_device_dump(xe, p);
- xe_pm_runtime_put(xe);
return 0;
}
@@ -134,7 +132,7 @@ static int dgfx_pkg_residencies_show(struct seq_file *m, void *data)
xe = node_to_xe(m->private);
p = drm_seq_file_printer(m);
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
mmio = xe_root_tile_mmio(xe);
static const struct {
u32 offset;
@@ -151,7 +149,6 @@ static int dgfx_pkg_residencies_show(struct seq_file *m, void *data)
for (int i = 0; i < ARRAY_SIZE(residencies); i++)
read_residency_counter(xe, mmio, residencies[i].offset, residencies[i].name, &p);
- xe_pm_runtime_put(xe);
return 0;
}
@@ -163,7 +160,7 @@ static int dgfx_pcie_link_residencies_show(struct seq_file *m, void *data)
xe = node_to_xe(m->private);
p = drm_seq_file_printer(m);
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
mmio = xe_root_tile_mmio(xe);
static const struct {
@@ -178,7 +175,6 @@ static int dgfx_pcie_link_residencies_show(struct seq_file *m, void *data)
for (int i = 0; i < ARRAY_SIZE(residencies); i++)
read_residency_counter(xe, mmio, residencies[i].offset, residencies[i].name, &p);
- xe_pm_runtime_put(xe);
return 0;
}
@@ -277,16 +273,14 @@ static ssize_t wedged_mode_set(struct file *f, const char __user *ubuf,
xe->wedged.mode = wedged_mode;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
for_each_gt(gt, xe, id) {
ret = xe_guc_ads_scheduler_policy_toggle_reset(&gt->uc.guc.ads);
if (ret) {
xe_gt_err(gt, "Failed to update GuC ADS scheduler policy. GuC may still cause engine reset even with wedged_mode=2\n");
- xe_pm_runtime_put(xe);
return -EIO;
}
}
- xe_pm_runtime_put(xe);
return size;
}
diff --git a/drivers/gpu/drm/xe/xe_gsc_debugfs.c b/drivers/gpu/drm/xe/xe_gsc_debugfs.c
index 461d7e99c2b3..b13928b50eb9 100644
--- a/drivers/gpu/drm/xe/xe_gsc_debugfs.c
+++ b/drivers/gpu/drm/xe/xe_gsc_debugfs.c
@@ -37,9 +37,8 @@ static int gsc_info(struct seq_file *m, void *data)
struct xe_device *xe = gsc_to_xe(gsc);
struct drm_printer p = drm_seq_file_printer(m);
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
xe_gsc_print_info(gsc, &p);
- xe_pm_runtime_put(xe);
return 0;
}
diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c
index 838beb7f6327..ddb64e78b988 100644
--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c
+++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c
@@ -123,11 +123,10 @@ static int POLICY##_set(void *data, u64 val) \
if (val > (TYPE)~0ull) \
return -EOVERFLOW; \
\
- xe_pm_runtime_get(xe); \
+ guard(xe_pm_runtime)(xe); \
err = xe_gt_sriov_pf_policy_set_##POLICY(gt, val); \
if (!err) \
xe_sriov_pf_provision_set_custom_mode(xe); \
- xe_pm_runtime_put(xe); \
\
return err; \
} \
@@ -189,12 +188,11 @@ static int CONFIG##_set(void *data, u64 val) \
if (val > (TYPE)~0ull) \
return -EOVERFLOW; \
\
- xe_pm_runtime_get(xe); \
+ guard(xe_pm_runtime)(xe); \
err = xe_sriov_pf_wait_ready(xe) ?: \
xe_gt_sriov_pf_config_set_##CONFIG(gt, vfid, val); \
if (!err) \
xe_sriov_pf_provision_set_custom_mode(xe); \
- xe_pm_runtime_put(xe); \
\
return err; \
} \
@@ -249,11 +247,10 @@ static int set_threshold(void *data, u64 val, enum xe_guc_klv_threshold_index in
if (val > (u32)~0ull)
return -EOVERFLOW;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
err = xe_gt_sriov_pf_config_set_threshold(gt, vfid, index, val);
if (!err)
xe_sriov_pf_provision_set_custom_mode(xe);
- xe_pm_runtime_put(xe);
return err;
}
@@ -361,9 +358,8 @@ static ssize_t control_write(struct file *file, const char __user *buf, size_t c
xe_gt_assert(gt, sizeof(cmd) > strlen(control_cmds[n].cmd));
if (sysfs_streq(cmd, control_cmds[n].cmd)) {
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ret = control_cmds[n].fn ? (*control_cmds[n].fn)(gt, vfid) : 0;
- xe_pm_runtime_put(xe);
break;
}
}
diff --git a/drivers/gpu/drm/xe/xe_guc_debugfs.c b/drivers/gpu/drm/xe/xe_guc_debugfs.c
index 0b102ab46c4d..2198141526ae 100644
--- a/drivers/gpu/drm/xe/xe_guc_debugfs.c
+++ b/drivers/gpu/drm/xe/xe_guc_debugfs.c
@@ -72,9 +72,8 @@ static int guc_debugfs_show(struct seq_file *m, void *data)
int (*print)(struct xe_guc *, struct drm_printer *) = node->info_ent->data;
int ret;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ret = print(&gt->uc.guc, &p);
- xe_pm_runtime_put(xe);
return ret;
}
diff --git a/drivers/gpu/drm/xe/xe_huc_debugfs.c b/drivers/gpu/drm/xe/xe_huc_debugfs.c
index 3a888a40188b..df9c4d79b710 100644
--- a/drivers/gpu/drm/xe/xe_huc_debugfs.c
+++ b/drivers/gpu/drm/xe/xe_huc_debugfs.c
@@ -37,9 +37,8 @@ static int huc_info(struct seq_file *m, void *data)
struct xe_device *xe = huc_to_xe(huc);
struct drm_printer p = drm_seq_file_printer(m);
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
xe_huc_print_info(huc, &p);
- xe_pm_runtime_put(xe);
return 0;
}
diff --git a/drivers/gpu/drm/xe/xe_tile_debugfs.c b/drivers/gpu/drm/xe/xe_tile_debugfs.c
index fff242a5ae56..773d352da6de 100644
--- a/drivers/gpu/drm/xe/xe_tile_debugfs.c
+++ b/drivers/gpu/drm/xe/xe_tile_debugfs.c
@@ -84,9 +84,8 @@ int xe_tile_debugfs_show_with_rpm(struct seq_file *m, void *data)
struct xe_device *xe = tile_to_xe(tile);
int ret;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ret = xe_tile_debugfs_simple_show(m, data);
- xe_pm_runtime_put(xe);
return ret;
}
--
2.51.1
^ permalink raw reply related [flat|nested] 74+ messages in thread

* Re: [PATCH v2 30/30] drm/xe/debugfs: Use scope-based runtime PM
2025-11-10 23:20 ` [PATCH v2 30/30] drm/xe/debugfs: Use scope-based runtime PM Matt Roper
@ 2025-11-13 18:30 ` Gustavo Sousa
0 siblings, 0 replies; 74+ messages in thread
From: Gustavo Sousa @ 2025-11-13 18:30 UTC (permalink / raw)
To: Matt Roper, intel-xe; +Cc: matthew.d.roper
Quoting Matt Roper (2025-11-10 20:20:48-03:00)
>Switch the debugfs code to use scope-based runtime PM where possible,
>for consistency with other parts of the driver.
>
>Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
I think we can drop the variable ret in these functions:
* guc_debugfs_show()
* xe_tile_debugfs_show_with_rpm()
Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
>---
> drivers/gpu/drm/xe/xe_debugfs.c | 16 +++++-----------
> drivers/gpu/drm/xe/xe_gsc_debugfs.c | 3 +--
> drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c | 12 ++++--------
> drivers/gpu/drm/xe/xe_guc_debugfs.c | 3 +--
> drivers/gpu/drm/xe/xe_huc_debugfs.c | 3 +--
> drivers/gpu/drm/xe/xe_tile_debugfs.c | 3 +--
> 6 files changed, 13 insertions(+), 27 deletions(-)
>
>diff --git a/drivers/gpu/drm/xe/xe_debugfs.c b/drivers/gpu/drm/xe/xe_debugfs.c
>index e91da9589c5f..1d5a2a43a9d7 100644
>--- a/drivers/gpu/drm/xe/xe_debugfs.c
>+++ b/drivers/gpu/drm/xe/xe_debugfs.c
>@@ -68,7 +68,7 @@ static int info(struct seq_file *m, void *data)
> struct xe_gt *gt;
> u8 id;
>
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
>
> drm_printf(&p, "graphics_verx100 %d\n", xe->info.graphics_verx100);
> drm_printf(&p, "media_verx100 %d\n", xe->info.media_verx100);
>@@ -95,7 +95,6 @@ static int info(struct seq_file *m, void *data)
> gt->info.engine_mask);
> }
>
>- xe_pm_runtime_put(xe);
> return 0;
> }
>
>@@ -110,9 +109,8 @@ static int sriov_info(struct seq_file *m, void *data)
>
> static int workarounds(struct xe_device *xe, struct drm_printer *p)
> {
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
> xe_wa_device_dump(xe, p);
>- xe_pm_runtime_put(xe);
>
> return 0;
> }
>@@ -134,7 +132,7 @@ static int dgfx_pkg_residencies_show(struct seq_file *m, void *data)
>
> xe = node_to_xe(m->private);
> p = drm_seq_file_printer(m);
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
> mmio = xe_root_tile_mmio(xe);
> static const struct {
> u32 offset;
>@@ -151,7 +149,6 @@ static int dgfx_pkg_residencies_show(struct seq_file *m, void *data)
> for (int i = 0; i < ARRAY_SIZE(residencies); i++)
> read_residency_counter(xe, mmio, residencies[i].offset, residencies[i].name, &p);
>
>- xe_pm_runtime_put(xe);
> return 0;
> }
>
>@@ -163,7 +160,7 @@ static int dgfx_pcie_link_residencies_show(struct seq_file *m, void *data)
>
> xe = node_to_xe(m->private);
> p = drm_seq_file_printer(m);
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
> mmio = xe_root_tile_mmio(xe);
>
> static const struct {
>@@ -178,7 +175,6 @@ static int dgfx_pcie_link_residencies_show(struct seq_file *m, void *data)
> for (int i = 0; i < ARRAY_SIZE(residencies); i++)
> read_residency_counter(xe, mmio, residencies[i].offset, residencies[i].name, &p);
>
>- xe_pm_runtime_put(xe);
> return 0;
> }
>
>@@ -277,16 +273,14 @@ static ssize_t wedged_mode_set(struct file *f, const char __user *ubuf,
>
> xe->wedged.mode = wedged_mode;
>
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
> for_each_gt(gt, xe, id) {
> ret = xe_guc_ads_scheduler_policy_toggle_reset(&gt->uc.guc.ads);
> if (ret) {
> xe_gt_err(gt, "Failed to update GuC ADS scheduler policy. GuC may still cause engine reset even with wedged_mode=2\n");
>- xe_pm_runtime_put(xe);
> return -EIO;
> }
> }
>- xe_pm_runtime_put(xe);
>
> return size;
> }
>diff --git a/drivers/gpu/drm/xe/xe_gsc_debugfs.c b/drivers/gpu/drm/xe/xe_gsc_debugfs.c
>index 461d7e99c2b3..b13928b50eb9 100644
>--- a/drivers/gpu/drm/xe/xe_gsc_debugfs.c
>+++ b/drivers/gpu/drm/xe/xe_gsc_debugfs.c
>@@ -37,9 +37,8 @@ static int gsc_info(struct seq_file *m, void *data)
> struct xe_device *xe = gsc_to_xe(gsc);
> struct drm_printer p = drm_seq_file_printer(m);
>
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
> xe_gsc_print_info(gsc, &p);
>- xe_pm_runtime_put(xe);
>
> return 0;
> }
>diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c
>index 838beb7f6327..ddb64e78b988 100644
>--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c
>+++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c
>@@ -123,11 +123,10 @@ static int POLICY##_set(void *data, u64 val) \
> if (val > (TYPE)~0ull) \
> return -EOVERFLOW; \
> \
>- xe_pm_runtime_get(xe); \
>+ guard(xe_pm_runtime)(xe); \
> err = xe_gt_sriov_pf_policy_set_##POLICY(gt, val); \
> if (!err) \
> xe_sriov_pf_provision_set_custom_mode(xe); \
>- xe_pm_runtime_put(xe); \
> \
> return err; \
> } \
>@@ -189,12 +188,11 @@ static int CONFIG##_set(void *data, u64 val) \
> if (val > (TYPE)~0ull) \
> return -EOVERFLOW; \
> \
>- xe_pm_runtime_get(xe); \
>+ guard(xe_pm_runtime)(xe); \
> err = xe_sriov_pf_wait_ready(xe) ?: \
> xe_gt_sriov_pf_config_set_##CONFIG(gt, vfid, val); \
> if (!err) \
> xe_sriov_pf_provision_set_custom_mode(xe); \
>- xe_pm_runtime_put(xe); \
> \
> return err; \
> } \
>@@ -249,11 +247,10 @@ static int set_threshold(void *data, u64 val, enum xe_guc_klv_threshold_index in
> if (val > (u32)~0ull)
> return -EOVERFLOW;
>
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
> err = xe_gt_sriov_pf_config_set_threshold(gt, vfid, index, val);
> if (!err)
> xe_sriov_pf_provision_set_custom_mode(xe);
>- xe_pm_runtime_put(xe);
>
> return err;
> }
>@@ -361,9 +358,8 @@ static ssize_t control_write(struct file *file, const char __user *buf, size_t c
> xe_gt_assert(gt, sizeof(cmd) > strlen(control_cmds[n].cmd));
>
> if (sysfs_streq(cmd, control_cmds[n].cmd)) {
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
> ret = control_cmds[n].fn ? (*control_cmds[n].fn)(gt, vfid) : 0;
>- xe_pm_runtime_put(xe);
> break;
> }
> }
>diff --git a/drivers/gpu/drm/xe/xe_guc_debugfs.c b/drivers/gpu/drm/xe/xe_guc_debugfs.c
>index 0b102ab46c4d..2198141526ae 100644
>--- a/drivers/gpu/drm/xe/xe_guc_debugfs.c
>+++ b/drivers/gpu/drm/xe/xe_guc_debugfs.c
>@@ -72,9 +72,8 @@ static int guc_debugfs_show(struct seq_file *m, void *data)
> int (*print)(struct xe_guc *, struct drm_printer *) = node->info_ent->data;
> int ret;
>
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
> ret = print(&gt->uc.guc, &p);
>- xe_pm_runtime_put(xe);
>
> return ret;
> }
>diff --git a/drivers/gpu/drm/xe/xe_huc_debugfs.c b/drivers/gpu/drm/xe/xe_huc_debugfs.c
>index 3a888a40188b..df9c4d79b710 100644
>--- a/drivers/gpu/drm/xe/xe_huc_debugfs.c
>+++ b/drivers/gpu/drm/xe/xe_huc_debugfs.c
>@@ -37,9 +37,8 @@ static int huc_info(struct seq_file *m, void *data)
> struct xe_device *xe = huc_to_xe(huc);
> struct drm_printer p = drm_seq_file_printer(m);
>
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
> xe_huc_print_info(huc, &p);
>- xe_pm_runtime_put(xe);
>
> return 0;
> }
>diff --git a/drivers/gpu/drm/xe/xe_tile_debugfs.c b/drivers/gpu/drm/xe/xe_tile_debugfs.c
>index fff242a5ae56..773d352da6de 100644
>--- a/drivers/gpu/drm/xe/xe_tile_debugfs.c
>+++ b/drivers/gpu/drm/xe/xe_tile_debugfs.c
>@@ -84,9 +84,8 @@ int xe_tile_debugfs_show_with_rpm(struct seq_file *m, void *data)
> struct xe_device *xe = tile_to_xe(tile);
> int ret;
>
>- xe_pm_runtime_get(xe);
>+ guard(xe_pm_runtime)(xe);
> ret = xe_tile_debugfs_simple_show(m, data);
>- xe_pm_runtime_put(xe);
>
> return ret;
> }
>--
>2.51.1
>
^ permalink raw reply [flat|nested] 74+ messages in thread
* ✓ CI.KUnit: success for Scope-based forcewake and runtime PM (rev3)
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (29 preceding siblings ...)
2025-11-10 23:20 ` [PATCH v2 30/30] drm/xe/debugfs: Use scope-based runtime PM Matt Roper
@ 2025-11-11 0:20 ` Patchwork
2025-11-11 0:57 ` ✓ Xe.CI.BAT: " Patchwork
` (3 subsequent siblings)
34 siblings, 0 replies; 74+ messages in thread
From: Patchwork @ 2025-11-11 0:20 UTC (permalink / raw)
To: Matt Roper; +Cc: intel-xe
== Series Details ==
Series: Scope-based forcewake and runtime PM (rev3)
URL : https://patchwork.freedesktop.org/series/157253/
State : success
== Summary ==
+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[00:18:59] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[00:19:03] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[00:19:34] Starting KUnit Kernel (1/1)...
[00:19:34] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[00:19:34] ================== guc_buf (11 subtests) ===================
[00:19:34] [PASSED] test_smallest
[00:19:34] [PASSED] test_largest
[00:19:34] [PASSED] test_granular
[00:19:34] [PASSED] test_unique
[00:19:34] [PASSED] test_overlap
[00:19:34] [PASSED] test_reusable
[00:19:34] [PASSED] test_too_big
[00:19:34] [PASSED] test_flush
[00:19:34] [PASSED] test_lookup
[00:19:34] [PASSED] test_data
[00:19:34] [PASSED] test_class
[00:19:34] ===================== [PASSED] guc_buf =====================
[00:19:34] =================== guc_dbm (7 subtests) ===================
[00:19:34] [PASSED] test_empty
[00:19:34] [PASSED] test_default
[00:19:34] ======================== test_size ========================
[00:19:34] [PASSED] 4
[00:19:34] [PASSED] 8
[00:19:34] [PASSED] 32
[00:19:34] [PASSED] 256
[00:19:34] ==================== [PASSED] test_size ====================
[00:19:34] ======================= test_reuse ========================
[00:19:34] [PASSED] 4
[00:19:34] [PASSED] 8
[00:19:34] [PASSED] 32
[00:19:34] [PASSED] 256
[00:19:34] =================== [PASSED] test_reuse ====================
[00:19:34] =================== test_range_overlap ====================
[00:19:34] [PASSED] 4
[00:19:34] [PASSED] 8
[00:19:34] [PASSED] 32
[00:19:34] [PASSED] 256
[00:19:34] =============== [PASSED] test_range_overlap ================
[00:19:34] =================== test_range_compact ====================
[00:19:34] [PASSED] 4
[00:19:34] [PASSED] 8
[00:19:34] [PASSED] 32
[00:19:34] [PASSED] 256
[00:19:34] =============== [PASSED] test_range_compact ================
[00:19:34] ==================== test_range_spare =====================
[00:19:34] [PASSED] 4
[00:19:34] [PASSED] 8
[00:19:34] [PASSED] 32
[00:19:34] [PASSED] 256
[00:19:34] ================ [PASSED] test_range_spare =================
[00:19:34] ===================== [PASSED] guc_dbm =====================
[00:19:34] =================== guc_idm (6 subtests) ===================
[00:19:34] [PASSED] bad_init
[00:19:34] [PASSED] no_init
[00:19:34] [PASSED] init_fini
[00:19:34] [PASSED] check_used
[00:19:34] [PASSED] check_quota
[00:19:34] [PASSED] check_all
[00:19:34] ===================== [PASSED] guc_idm =====================
[00:19:34] ================== no_relay (3 subtests) ===================
[00:19:34] [PASSED] xe_drops_guc2pf_if_not_ready
[00:19:34] [PASSED] xe_drops_guc2vf_if_not_ready
[00:19:34] [PASSED] xe_rejects_send_if_not_ready
[00:19:34] ==================== [PASSED] no_relay =====================
[00:19:34] ================== pf_relay (14 subtests) ==================
[00:19:34] [PASSED] pf_rejects_guc2pf_too_short
[00:19:34] [PASSED] pf_rejects_guc2pf_too_long
[00:19:34] [PASSED] pf_rejects_guc2pf_no_payload
[00:19:34] [PASSED] pf_fails_no_payload
[00:19:34] [PASSED] pf_fails_bad_origin
[00:19:34] [PASSED] pf_fails_bad_type
[00:19:34] [PASSED] pf_txn_reports_error
[00:19:34] [PASSED] pf_txn_sends_pf2guc
[00:19:34] [PASSED] pf_sends_pf2guc
[00:19:34] [SKIPPED] pf_loopback_nop
[00:19:34] [SKIPPED] pf_loopback_echo
[00:19:34] [SKIPPED] pf_loopback_fail
[00:19:34] [SKIPPED] pf_loopback_busy
[00:19:34] [SKIPPED] pf_loopback_retry
[00:19:34] ==================== [PASSED] pf_relay =====================
[00:19:34] ================== vf_relay (3 subtests) ===================
[00:19:34] [PASSED] vf_rejects_guc2vf_too_short
[00:19:34] [PASSED] vf_rejects_guc2vf_too_long
[00:19:34] [PASSED] vf_rejects_guc2vf_no_payload
[00:19:34] ==================== [PASSED] vf_relay =====================
[00:19:34] ================ pf_gt_config (4 subtests) =================
[00:19:34] [PASSED] fair_contexts_1vf
[00:19:34] [PASSED] fair_doorbells_1vf
[00:19:34] ====================== fair_contexts ======================
[00:19:34] [PASSED] 1 VF
[00:19:34] [PASSED] 2 VFs
[00:19:34] [PASSED] 3 VFs
[00:19:34] [PASSED] 4 VFs
[00:19:34] [PASSED] 5 VFs
[00:19:34] [PASSED] 6 VFs
[00:19:34] [PASSED] 7 VFs
[00:19:34] [PASSED] 8 VFs
[00:19:34] [PASSED] 9 VFs
[00:19:34] [PASSED] 10 VFs
[00:19:34] [PASSED] 11 VFs
[00:19:34] [PASSED] 12 VFs
[00:19:34] [PASSED] 13 VFs
[00:19:34] [PASSED] 14 VFs
[00:19:34] [PASSED] 15 VFs
[00:19:34] [PASSED] 16 VFs
[00:19:34] [PASSED] 17 VFs
[00:19:34] [PASSED] 18 VFs
[00:19:34] [PASSED] 19 VFs
[00:19:34] [PASSED] 20 VFs
[00:19:34] [PASSED] 21 VFs
[00:19:34] [PASSED] 22 VFs
[00:19:34] [PASSED] 23 VFs
[00:19:34] [PASSED] 24 VFs
[00:19:34] [PASSED] 25 VFs
[00:19:34] [PASSED] 26 VFs
[00:19:34] [PASSED] 27 VFs
[00:19:34] [PASSED] 28 VFs
[00:19:34] [PASSED] 29 VFs
[00:19:34] [PASSED] 30 VFs
[00:19:34] [PASSED] 31 VFs
[00:19:34] [PASSED] 32 VFs
[00:19:34] [PASSED] 33 VFs
[00:19:34] [PASSED] 34 VFs
[00:19:34] [PASSED] 35 VFs
[00:19:34] [PASSED] 36 VFs
[00:19:34] [PASSED] 37 VFs
[00:19:34] [PASSED] 38 VFs
[00:19:34] [PASSED] 39 VFs
[00:19:34] [PASSED] 40 VFs
[00:19:34] [PASSED] 41 VFs
[00:19:34] [PASSED] 42 VFs
[00:19:34] [PASSED] 43 VFs
[00:19:34] [PASSED] 44 VFs
[00:19:34] [PASSED] 45 VFs
[00:19:34] [PASSED] 46 VFs
[00:19:34] [PASSED] 47 VFs
[00:19:34] [PASSED] 48 VFs
[00:19:34] [PASSED] 49 VFs
[00:19:34] [PASSED] 50 VFs
[00:19:34] [PASSED] 51 VFs
[00:19:34] [PASSED] 52 VFs
[00:19:34] [PASSED] 53 VFs
[00:19:34] [PASSED] 54 VFs
[00:19:34] [PASSED] 55 VFs
[00:19:34] [PASSED] 56 VFs
[00:19:34] [PASSED] 57 VFs
[00:19:34] [PASSED] 58 VFs
[00:19:34] [PASSED] 59 VFs
[00:19:34] [PASSED] 60 VFs
[00:19:34] [PASSED] 61 VFs
[00:19:34] [PASSED] 62 VFs
[00:19:34] [PASSED] 63 VFs
[00:19:34] ================== [PASSED] fair_contexts ==================
[00:19:34] ===================== fair_doorbells ======================
[00:19:34] [PASSED] 1 VF
[00:19:34] [PASSED] 2 VFs
[00:19:34] [PASSED] 3 VFs
[00:19:34] [PASSED] 4 VFs
[00:19:34] [PASSED] 5 VFs
[00:19:34] [PASSED] 6 VFs
[00:19:34] [PASSED] 7 VFs
[00:19:34] [PASSED] 8 VFs
[00:19:34] [PASSED] 9 VFs
[00:19:34] [PASSED] 10 VFs
[00:19:34] [PASSED] 11 VFs
[00:19:34] [PASSED] 12 VFs
[00:19:34] [PASSED] 13 VFs
[00:19:34] [PASSED] 14 VFs
[00:19:34] [PASSED] 15 VFs
[00:19:34] [PASSED] 16 VFs
[00:19:34] [PASSED] 17 VFs
[00:19:34] [PASSED] 18 VFs
[00:19:34] [PASSED] 19 VFs
[00:19:34] [PASSED] 20 VFs
[00:19:34] [PASSED] 21 VFs
[00:19:34] [PASSED] 22 VFs
[00:19:34] [PASSED] 23 VFs
[00:19:34] [PASSED] 24 VFs
[00:19:34] [PASSED] 25 VFs
[00:19:34] [PASSED] 26 VFs
[00:19:34] [PASSED] 27 VFs
[00:19:34] [PASSED] 28 VFs
[00:19:34] [PASSED] 29 VFs
[00:19:34] [PASSED] 30 VFs
[00:19:34] [PASSED] 31 VFs
[00:19:34] [PASSED] 32 VFs
[00:19:34] [PASSED] 33 VFs
[00:19:34] [PASSED] 34 VFs
[00:19:34] [PASSED] 35 VFs
[00:19:34] [PASSED] 36 VFs
[00:19:34] [PASSED] 37 VFs
[00:19:34] [PASSED] 38 VFs
[00:19:34] [PASSED] 39 VFs
[00:19:34] [PASSED] 40 VFs
[00:19:34] [PASSED] 41 VFs
[00:19:34] [PASSED] 42 VFs
[00:19:34] [PASSED] 43 VFs
[00:19:34] [PASSED] 44 VFs
[00:19:34] [PASSED] 45 VFs
[00:19:34] [PASSED] 46 VFs
[00:19:34] [PASSED] 47 VFs
[00:19:34] [PASSED] 48 VFs
[00:19:34] [PASSED] 49 VFs
[00:19:34] [PASSED] 50 VFs
[00:19:34] [PASSED] 51 VFs
[00:19:34] [PASSED] 52 VFs
[00:19:34] [PASSED] 53 VFs
[00:19:34] [PASSED] 54 VFs
[00:19:34] [PASSED] 55 VFs
[00:19:34] [PASSED] 56 VFs
[00:19:34] [PASSED] 57 VFs
[00:19:34] [PASSED] 58 VFs
[00:19:34] [PASSED] 59 VFs
[00:19:34] [PASSED] 60 VFs
[00:19:34] [PASSED] 61 VFs
[00:19:34] [PASSED] 62 VFs
[00:19:34] [PASSED] 63 VFs
[00:19:34] ================= [PASSED] fair_doorbells ==================
[00:19:34] ================== [PASSED] pf_gt_config ===================
[00:19:34] ===================== lmtt (1 subtest) =====================
[00:19:34] ======================== test_ops =========================
[00:19:34] [PASSED] 2-level
[00:19:34] [PASSED] multi-level
[00:19:34] ==================== [PASSED] test_ops =====================
[00:19:34] ====================== [PASSED] lmtt =======================
[00:19:34] ================= pf_service (11 subtests) =================
[00:19:34] [PASSED] pf_negotiate_any
[00:19:34] [PASSED] pf_negotiate_base_match
[00:19:34] [PASSED] pf_negotiate_base_newer
[00:19:34] [PASSED] pf_negotiate_base_next
[00:19:34] [SKIPPED] pf_negotiate_base_older
[00:19:34] [PASSED] pf_negotiate_base_prev
[00:19:34] [PASSED] pf_negotiate_latest_match
[00:19:34] [PASSED] pf_negotiate_latest_newer
[00:19:34] [PASSED] pf_negotiate_latest_next
[00:19:34] [SKIPPED] pf_negotiate_latest_older
[00:19:34] [SKIPPED] pf_negotiate_latest_prev
[00:19:34] =================== [PASSED] pf_service ====================
[00:19:34] ================= xe_guc_g2g (2 subtests) ==================
[00:19:34] ============== xe_live_guc_g2g_kunit_default ==============
[00:19:34] ========= [SKIPPED] xe_live_guc_g2g_kunit_default ==========
[00:19:34] ============== xe_live_guc_g2g_kunit_allmem ===============
[00:19:34] ========== [SKIPPED] xe_live_guc_g2g_kunit_allmem ==========
[00:19:34] =================== [SKIPPED] xe_guc_g2g ===================
[00:19:34] =================== xe_mocs (2 subtests) ===================
[00:19:34] ================ xe_live_mocs_kernel_kunit ================
[00:19:34] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[00:19:34] ================ xe_live_mocs_reset_kunit =================
[00:19:34] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[00:19:34] ==================== [SKIPPED] xe_mocs =====================
[00:19:34] ================= xe_migrate (2 subtests) ==================
[00:19:34] ================= xe_migrate_sanity_kunit =================
[00:19:34] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[00:19:34] ================== xe_validate_ccs_kunit ==================
[00:19:34] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[00:19:34] =================== [SKIPPED] xe_migrate ===================
[00:19:34] ================== xe_dma_buf (1 subtest) ==================
[00:19:34] ==================== xe_dma_buf_kunit =====================
[00:19:34] ================ [SKIPPED] xe_dma_buf_kunit ================
[00:19:34] =================== [SKIPPED] xe_dma_buf ===================
[00:19:34] ================= xe_bo_shrink (1 subtest) =================
[00:19:34] =================== xe_bo_shrink_kunit ====================
[00:19:34] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[00:19:34] ================== [SKIPPED] xe_bo_shrink ==================
[00:19:34] ==================== xe_bo (2 subtests) ====================
[00:19:34] ================== xe_ccs_migrate_kunit ===================
[00:19:34] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[00:19:34] ==================== xe_bo_evict_kunit ====================
[00:19:34] =============== [SKIPPED] xe_bo_evict_kunit ================
[00:19:34] ===================== [SKIPPED] xe_bo ======================
[00:19:34] ==================== args (11 subtests) ====================
[00:19:34] [PASSED] count_args_test
[00:19:34] [PASSED] call_args_example
[00:19:34] [PASSED] call_args_test
[00:19:34] [PASSED] drop_first_arg_example
[00:19:34] [PASSED] drop_first_arg_test
[00:19:34] [PASSED] first_arg_example
[00:19:34] [PASSED] first_arg_test
[00:19:34] [PASSED] last_arg_example
[00:19:34] [PASSED] last_arg_test
[00:19:34] [PASSED] pick_arg_example
[00:19:34] [PASSED] sep_comma_example
[00:19:34] ====================== [PASSED] args =======================
[00:19:34] =================== xe_pci (3 subtests) ====================
[00:19:34] ==================== check_graphics_ip ====================
[00:19:34] [PASSED] 12.00 Xe_LP
[00:19:34] [PASSED] 12.10 Xe_LP+
[00:19:34] [PASSED] 12.55 Xe_HPG
[00:19:34] [PASSED] 12.60 Xe_HPC
[00:19:34] [PASSED] 12.70 Xe_LPG
[00:19:34] [PASSED] 12.71 Xe_LPG
[00:19:34] [PASSED] 12.74 Xe_LPG+
[00:19:34] [PASSED] 20.01 Xe2_HPG
[00:19:34] [PASSED] 20.02 Xe2_HPG
[00:19:34] [PASSED] 20.04 Xe2_LPG
[00:19:34] [PASSED] 30.00 Xe3_LPG
[00:19:34] [PASSED] 30.01 Xe3_LPG
[00:19:34] [PASSED] 30.03 Xe3_LPG
[00:19:34] [PASSED] 30.04 Xe3_LPG
[00:19:34] [PASSED] 30.05 Xe3_LPG
[00:19:34] [PASSED] 35.11 Xe3p_XPC
[00:19:34] ================ [PASSED] check_graphics_ip ================
[00:19:34] ===================== check_media_ip ======================
[00:19:34] [PASSED] 12.00 Xe_M
[00:19:34] [PASSED] 12.55 Xe_HPM
[00:19:34] [PASSED] 13.00 Xe_LPM+
[00:19:34] [PASSED] 13.01 Xe2_HPM
[00:19:34] [PASSED] 20.00 Xe2_LPM
[00:19:34] [PASSED] 30.00 Xe3_LPM
[00:19:34] [PASSED] 30.02 Xe3_LPM
[00:19:34] [PASSED] 35.00 Xe3p_LPM
[00:19:34] [PASSED] 35.03 Xe3p_HPM
[00:19:34] ================= [PASSED] check_media_ip ==================
[00:19:34] =================== check_platform_desc ===================
[00:19:34] [PASSED] 0x9A60 (TIGERLAKE)
[00:19:34] [PASSED] 0x9A68 (TIGERLAKE)
[00:19:34] [PASSED] 0x9A70 (TIGERLAKE)
[00:19:34] [PASSED] 0x9A40 (TIGERLAKE)
[00:19:34] [PASSED] 0x9A49 (TIGERLAKE)
[00:19:34] [PASSED] 0x9A59 (TIGERLAKE)
[00:19:34] [PASSED] 0x9A78 (TIGERLAKE)
[00:19:34] [PASSED] 0x9AC0 (TIGERLAKE)
[00:19:34] [PASSED] 0x9AC9 (TIGERLAKE)
[00:19:34] [PASSED] 0x9AD9 (TIGERLAKE)
[00:19:34] [PASSED] 0x9AF8 (TIGERLAKE)
[00:19:34] [PASSED] 0x4C80 (ROCKETLAKE)
[00:19:34] [PASSED] 0x4C8A (ROCKETLAKE)
[00:19:34] [PASSED] 0x4C8B (ROCKETLAKE)
[00:19:34] [PASSED] 0x4C8C (ROCKETLAKE)
[00:19:34] [PASSED] 0x4C90 (ROCKETLAKE)
[00:19:34] [PASSED] 0x4C9A (ROCKETLAKE)
[00:19:34] [PASSED] 0x4680 (ALDERLAKE_S)
[00:19:34] [PASSED] 0x4682 (ALDERLAKE_S)
[00:19:34] [PASSED] 0x4688 (ALDERLAKE_S)
[00:19:34] [PASSED] 0x468A (ALDERLAKE_S)
[00:19:34] [PASSED] 0x468B (ALDERLAKE_S)
[00:19:34] [PASSED] 0x4690 (ALDERLAKE_S)
[00:19:34] [PASSED] 0x4692 (ALDERLAKE_S)
[00:19:34] [PASSED] 0x4693 (ALDERLAKE_S)
[00:19:34] [PASSED] 0x46A0 (ALDERLAKE_P)
[00:19:34] [PASSED] 0x46A1 (ALDERLAKE_P)
[00:19:34] [PASSED] 0x46A2 (ALDERLAKE_P)
[00:19:34] [PASSED] 0x46A3 (ALDERLAKE_P)
[00:19:34] [PASSED] 0x46A6 (ALDERLAKE_P)
[00:19:34] [PASSED] 0x46A8 (ALDERLAKE_P)
[00:19:34] [PASSED] 0x46AA (ALDERLAKE_P)
[00:19:34] [PASSED] 0x462A (ALDERLAKE_P)
[00:19:34] [PASSED] 0x4626 (ALDERLAKE_P)
[00:19:34] [PASSED] 0x4628 (ALDERLAKE_P)
[00:19:34] [PASSED] 0x46B0 (ALDERLAKE_P)
[00:19:34] [PASSED] 0x46B1 (ALDERLAKE_P)
[00:19:34] [PASSED] 0x46B2 (ALDERLAKE_P)
[00:19:34] [PASSED] 0x46B3 (ALDERLAKE_P)
[00:19:34] [PASSED] 0x46C0 (ALDERLAKE_P)
[00:19:34] [PASSED] 0x46C1 (ALDERLAKE_P)
[00:19:34] [PASSED] 0x46C2 (ALDERLAKE_P)
[00:19:34] [PASSED] 0x46C3 (ALDERLAKE_P)
[00:19:34] [PASSED] 0x46D0 (ALDERLAKE_N)
[00:19:34] [PASSED] 0x46D1 (ALDERLAKE_N)
[00:19:34] [PASSED] 0x46D2 (ALDERLAKE_N)
[00:19:34] [PASSED] 0x46D3 (ALDERLAKE_N)
[00:19:34] [PASSED] 0x46D4 (ALDERLAKE_N)
[00:19:34] [PASSED] 0xA721 (ALDERLAKE_P)
[00:19:34] [PASSED] 0xA7A1 (ALDERLAKE_P)
[00:19:34] [PASSED] 0xA7A9 (ALDERLAKE_P)
[00:19:34] [PASSED] 0xA7AC (ALDERLAKE_P)
[00:19:34] [PASSED] 0xA7AD (ALDERLAKE_P)
[00:19:34] [PASSED] 0xA720 (ALDERLAKE_P)
[00:19:34] [PASSED] 0xA7A0 (ALDERLAKE_P)
[00:19:34] [PASSED] 0xA7A8 (ALDERLAKE_P)
[00:19:34] [PASSED] 0xA7AA (ALDERLAKE_P)
[00:19:34] [PASSED] 0xA7AB (ALDERLAKE_P)
[00:19:34] [PASSED] 0xA780 (ALDERLAKE_S)
[00:19:34] [PASSED] 0xA781 (ALDERLAKE_S)
[00:19:34] [PASSED] 0xA782 (ALDERLAKE_S)
[00:19:34] [PASSED] 0xA783 (ALDERLAKE_S)
[00:19:34] [PASSED] 0xA788 (ALDERLAKE_S)
[00:19:34] [PASSED] 0xA789 (ALDERLAKE_S)
[00:19:34] [PASSED] 0xA78A (ALDERLAKE_S)
[00:19:34] [PASSED] 0xA78B (ALDERLAKE_S)
[00:19:34] [PASSED] 0x4905 (DG1)
[00:19:34] [PASSED] 0x4906 (DG1)
[00:19:34] [PASSED] 0x4907 (DG1)
[00:19:34] [PASSED] 0x4908 (DG1)
[00:19:34] [PASSED] 0x4909 (DG1)
[00:19:34] [PASSED] 0x56C0 (DG2)
[00:19:34] [PASSED] 0x56C2 (DG2)
[00:19:34] [PASSED] 0x56C1 (DG2)
[00:19:34] [PASSED] 0x7D51 (METEORLAKE)
[00:19:34] [PASSED] 0x7DD1 (METEORLAKE)
[00:19:34] [PASSED] 0x7D41 (METEORLAKE)
[00:19:34] [PASSED] 0x7D67 (METEORLAKE)
[00:19:34] [PASSED] 0xB640 (METEORLAKE)
[00:19:34] [PASSED] 0x56A0 (DG2)
[00:19:34] [PASSED] 0x56A1 (DG2)
[00:19:34] [PASSED] 0x56A2 (DG2)
[00:19:34] [PASSED] 0x56BE (DG2)
[00:19:34] [PASSED] 0x56BF (DG2)
[00:19:34] [PASSED] 0x5690 (DG2)
[00:19:34] [PASSED] 0x5691 (DG2)
[00:19:34] [PASSED] 0x5692 (DG2)
[00:19:34] [PASSED] 0x56A5 (DG2)
[00:19:34] [PASSED] 0x56A6 (DG2)
[00:19:34] [PASSED] 0x56B0 (DG2)
[00:19:34] [PASSED] 0x56B1 (DG2)
[00:19:34] [PASSED] 0x56BA (DG2)
[00:19:34] [PASSED] 0x56BB (DG2)
[00:19:34] [PASSED] 0x56BC (DG2)
[00:19:34] [PASSED] 0x56BD (DG2)
[00:19:34] [PASSED] 0x5693 (DG2)
[00:19:34] [PASSED] 0x5694 (DG2)
[00:19:34] [PASSED] 0x5695 (DG2)
[00:19:34] [PASSED] 0x56A3 (DG2)
[00:19:34] [PASSED] 0x56A4 (DG2)
[00:19:34] [PASSED] 0x56B2 (DG2)
[00:19:34] [PASSED] 0x56B3 (DG2)
[00:19:34] [PASSED] 0x5696 (DG2)
[00:19:34] [PASSED] 0x5697 (DG2)
[00:19:34] [PASSED] 0xB69 (PVC)
[00:19:34] [PASSED] 0xB6E (PVC)
[00:19:34] [PASSED] 0xBD4 (PVC)
[00:19:34] [PASSED] 0xBD5 (PVC)
[00:19:34] [PASSED] 0xBD6 (PVC)
[00:19:34] [PASSED] 0xBD7 (PVC)
[00:19:34] [PASSED] 0xBD8 (PVC)
[00:19:34] [PASSED] 0xBD9 (PVC)
[00:19:34] [PASSED] 0xBDA (PVC)
[00:19:34] [PASSED] 0xBDB (PVC)
[00:19:34] [PASSED] 0xBE0 (PVC)
[00:19:34] [PASSED] 0xBE1 (PVC)
[00:19:34] [PASSED] 0xBE5 (PVC)
[00:19:34] [PASSED] 0x7D40 (METEORLAKE)
[00:19:34] [PASSED] 0x7D45 (METEORLAKE)
[00:19:34] [PASSED] 0x7D55 (METEORLAKE)
[00:19:34] [PASSED] 0x7D60 (METEORLAKE)
[00:19:34] [PASSED] 0x7DD5 (METEORLAKE)
[00:19:34] [PASSED] 0x6420 (LUNARLAKE)
[00:19:34] [PASSED] 0x64A0 (LUNARLAKE)
[00:19:34] [PASSED] 0x64B0 (LUNARLAKE)
[00:19:34] [PASSED] 0xE202 (BATTLEMAGE)
[00:19:34] [PASSED] 0xE209 (BATTLEMAGE)
[00:19:34] [PASSED] 0xE20B (BATTLEMAGE)
[00:19:34] [PASSED] 0xE20C (BATTLEMAGE)
[00:19:34] [PASSED] 0xE20D (BATTLEMAGE)
[00:19:34] [PASSED] 0xE210 (BATTLEMAGE)
[00:19:34] [PASSED] 0xE211 (BATTLEMAGE)
[00:19:34] [PASSED] 0xE212 (BATTLEMAGE)
[00:19:34] [PASSED] 0xE216 (BATTLEMAGE)
[00:19:34] [PASSED] 0xE220 (BATTLEMAGE)
[00:19:34] [PASSED] 0xE221 (BATTLEMAGE)
[00:19:34] [PASSED] 0xE222 (BATTLEMAGE)
[00:19:34] [PASSED] 0xE223 (BATTLEMAGE)
[00:19:34] [PASSED] 0xB080 (PANTHERLAKE)
[00:19:34] [PASSED] 0xB081 (PANTHERLAKE)
[00:19:34] [PASSED] 0xB082 (PANTHERLAKE)
[00:19:34] [PASSED] 0xB083 (PANTHERLAKE)
[00:19:34] [PASSED] 0xB084 (PANTHERLAKE)
[00:19:34] [PASSED] 0xB085 (PANTHERLAKE)
[00:19:34] [PASSED] 0xB086 (PANTHERLAKE)
[00:19:34] [PASSED] 0xB087 (PANTHERLAKE)
[00:19:34] [PASSED] 0xB08F (PANTHERLAKE)
[00:19:34] [PASSED] 0xB090 (PANTHERLAKE)
[00:19:34] [PASSED] 0xB0A0 (PANTHERLAKE)
[00:19:34] [PASSED] 0xB0B0 (PANTHERLAKE)
[00:19:34] [PASSED] 0xD740 (NOVALAKE_S)
[00:19:34] [PASSED] 0xD741 (NOVALAKE_S)
[00:19:34] [PASSED] 0xD742 (NOVALAKE_S)
[00:19:34] [PASSED] 0xD743 (NOVALAKE_S)
[00:19:34] [PASSED] 0xD744 (NOVALAKE_S)
[00:19:34] [PASSED] 0xD745 (NOVALAKE_S)
[00:19:34] [PASSED] 0x674C (CRESCENTISLAND)
[00:19:34] [PASSED] 0xFD80 (PANTHERLAKE)
[00:19:34] [PASSED] 0xFD81 (PANTHERLAKE)
[00:19:34] =============== [PASSED] check_platform_desc ===============
[00:19:34] ===================== [PASSED] xe_pci ======================
[00:19:34] =================== xe_rtp (2 subtests) ====================
[00:19:34] =============== xe_rtp_process_to_sr_tests ================
[00:19:34] [PASSED] coalesce-same-reg
[00:19:34] [PASSED] no-match-no-add
[00:19:34] [PASSED] match-or
[00:19:34] [PASSED] match-or-xfail
[00:19:34] [PASSED] no-match-no-add-multiple-rules
[00:19:34] [PASSED] two-regs-two-entries
[00:19:34] [PASSED] clr-one-set-other
[00:19:34] [PASSED] set-field
[00:19:34] [PASSED] conflict-duplicate
[00:19:34] [PASSED] conflict-not-disjoint
[00:19:34] [PASSED] conflict-reg-type
[00:19:34] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[00:19:34] ================== xe_rtp_process_tests ===================
[00:19:34] [PASSED] active1
[00:19:34] [PASSED] active2
[00:19:34] [PASSED] active-inactive
[00:19:34] [PASSED] inactive-active
[00:19:34] [PASSED] inactive-1st_or_active-inactive
[00:19:34] [PASSED] inactive-2nd_or_active-inactive
[00:19:34] [PASSED] inactive-last_or_active-inactive
[00:19:34] [PASSED] inactive-no_or_active-inactive
[00:19:34] ============== [PASSED] xe_rtp_process_tests ===============
[00:19:34] ===================== [PASSED] xe_rtp ======================
[00:19:34] ==================== xe_wa (1 subtest) =====================
[00:19:34] ======================== xe_wa_gt =========================
[00:19:34] [PASSED] TIGERLAKE B0
[00:19:34] [PASSED] DG1 A0
[00:19:34] [PASSED] DG1 B0
[00:19:34] [PASSED] ALDERLAKE_S A0
[00:19:34] [PASSED] ALDERLAKE_S B0
[00:19:34] [PASSED] ALDERLAKE_S C0
[00:19:34] [PASSED] ALDERLAKE_S D0
[00:19:34] [PASSED] ALDERLAKE_P A0
[00:19:34] [PASSED] ALDERLAKE_P B0
[00:19:34] [PASSED] ALDERLAKE_P C0
[00:19:34] [PASSED] ALDERLAKE_S RPLS D0
[00:19:34] [PASSED] ALDERLAKE_P RPLU E0
[00:19:34] [PASSED] DG2 G10 C0
[00:19:34] [PASSED] DG2 G11 B1
[00:19:34] [PASSED] DG2 G12 A1
[00:19:34] [PASSED] METEORLAKE 12.70(Xe_LPG) A0 13.00(Xe_LPM+) A0
[00:19:34] [PASSED] METEORLAKE 12.71(Xe_LPG) A0 13.00(Xe_LPM+) A0
[00:19:34] [PASSED] METEORLAKE 12.74(Xe_LPG+) A0 13.00(Xe_LPM+) A0
[00:19:34] [PASSED] LUNARLAKE 20.04(Xe2_LPG) A0 20.00(Xe2_LPM) A0
[00:19:34] [PASSED] LUNARLAKE 20.04(Xe2_LPG) B0 20.00(Xe2_LPM) A0
[00:19:34] [PASSED] BATTLEMAGE 20.01(Xe2_HPG) A0 13.01(Xe2_HPM) A1
[00:19:34] [PASSED] PANTHERLAKE 30.00(Xe3_LPG) A0 30.00(Xe3_LPM) A0
[00:19:34] ==================== [PASSED] xe_wa_gt =====================
[00:19:34] ====================== [PASSED] xe_wa ======================
[00:19:34] ============================================================
[00:19:34] Testing complete. Ran 446 tests: passed: 428, skipped: 18
[00:19:34] Elapsed time: 35.391s total, 4.307s configuring, 30.616s building, 0.417s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[00:19:35] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[00:19:36] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[00:20:01] Starting KUnit Kernel (1/1)...
[00:20:01] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[00:20:01] ============ drm_test_pick_cmdline (2 subtests) ============
[00:20:01] [PASSED] drm_test_pick_cmdline_res_1920_1080_60
[00:20:01] =============== drm_test_pick_cmdline_named ===============
[00:20:01] [PASSED] NTSC
[00:20:01] [PASSED] NTSC-J
[00:20:01] [PASSED] PAL
[00:20:01] [PASSED] PAL-M
[00:20:01] =========== [PASSED] drm_test_pick_cmdline_named ===========
[00:20:01] ============== [PASSED] drm_test_pick_cmdline ==============
[00:20:01] == drm_test_atomic_get_connector_for_encoder (1 subtest) ===
[00:20:01] [PASSED] drm_test_drm_atomic_get_connector_for_encoder
[00:20:01] ==== [PASSED] drm_test_atomic_get_connector_for_encoder ====
[00:20:01] =========== drm_validate_clone_mode (2 subtests) ===========
[00:20:01] ============== drm_test_check_in_clone_mode ===============
[00:20:01] [PASSED] in_clone_mode
[00:20:01] [PASSED] not_in_clone_mode
[00:20:01] ========== [PASSED] drm_test_check_in_clone_mode ===========
[00:20:01] =============== drm_test_check_valid_clones ===============
[00:20:01] [PASSED] not_in_clone_mode
[00:20:01] [PASSED] valid_clone
[00:20:01] [PASSED] invalid_clone
[00:20:01] =========== [PASSED] drm_test_check_valid_clones ===========
[00:20:01] ============= [PASSED] drm_validate_clone_mode =============
[00:20:01] ============= drm_validate_modeset (1 subtest) =============
[00:20:01] [PASSED] drm_test_check_connector_changed_modeset
[00:20:01] ============== [PASSED] drm_validate_modeset ===============
[00:20:01] ====== drm_test_bridge_get_current_state (2 subtests) ======
[00:20:01] [PASSED] drm_test_drm_bridge_get_current_state_atomic
[00:20:01] [PASSED] drm_test_drm_bridge_get_current_state_legacy
[00:20:01] ======== [PASSED] drm_test_bridge_get_current_state ========
[00:20:01] ====== drm_test_bridge_helper_reset_crtc (3 subtests) ======
[00:20:01] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic
[00:20:01] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic_disabled
[00:20:01] [PASSED] drm_test_drm_bridge_helper_reset_crtc_legacy
[00:20:01] ======== [PASSED] drm_test_bridge_helper_reset_crtc ========
[00:20:01] ============== drm_bridge_alloc (2 subtests) ===============
[00:20:01] [PASSED] drm_test_drm_bridge_alloc_basic
[00:20:01] [PASSED] drm_test_drm_bridge_alloc_get_put
[00:20:01] ================ [PASSED] drm_bridge_alloc =================
[00:20:01] ================== drm_buddy (8 subtests) ==================
[00:20:01] [PASSED] drm_test_buddy_alloc_limit
[00:20:01] [PASSED] drm_test_buddy_alloc_optimistic
[00:20:01] [PASSED] drm_test_buddy_alloc_pessimistic
[00:20:01] [PASSED] drm_test_buddy_alloc_pathological
[00:20:01] [PASSED] drm_test_buddy_alloc_contiguous
[00:20:01] [PASSED] drm_test_buddy_alloc_clear
[00:20:01] [PASSED] drm_test_buddy_alloc_range_bias
[00:20:02] [PASSED] drm_test_buddy_fragmentation_performance
[00:20:02] ==================== [PASSED] drm_buddy ====================
[00:20:02] ============= drm_cmdline_parser (40 subtests) =============
[00:20:02] [PASSED] drm_test_cmdline_force_d_only
[00:20:02] [PASSED] drm_test_cmdline_force_D_only_dvi
[00:20:02] [PASSED] drm_test_cmdline_force_D_only_hdmi
[00:20:02] [PASSED] drm_test_cmdline_force_D_only_not_digital
[00:20:02] [PASSED] drm_test_cmdline_force_e_only
[00:20:02] [PASSED] drm_test_cmdline_res
[00:20:02] [PASSED] drm_test_cmdline_res_vesa
[00:20:02] [PASSED] drm_test_cmdline_res_vesa_rblank
[00:20:02] [PASSED] drm_test_cmdline_res_rblank
[00:20:02] [PASSED] drm_test_cmdline_res_bpp
[00:20:02] [PASSED] drm_test_cmdline_res_refresh
[00:20:02] [PASSED] drm_test_cmdline_res_bpp_refresh
[00:20:02] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[00:20:02] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[00:20:02] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[00:20:02] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[00:20:02] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[00:20:02] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[00:20:02] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[00:20:02] [PASSED] drm_test_cmdline_res_margins_force_on
[00:20:02] [PASSED] drm_test_cmdline_res_vesa_margins
[00:20:02] [PASSED] drm_test_cmdline_name
[00:20:02] [PASSED] drm_test_cmdline_name_bpp
[00:20:02] [PASSED] drm_test_cmdline_name_option
[00:20:02] [PASSED] drm_test_cmdline_name_bpp_option
[00:20:02] [PASSED] drm_test_cmdline_rotate_0
[00:20:02] [PASSED] drm_test_cmdline_rotate_90
[00:20:02] [PASSED] drm_test_cmdline_rotate_180
[00:20:02] [PASSED] drm_test_cmdline_rotate_270
[00:20:02] [PASSED] drm_test_cmdline_hmirror
[00:20:02] [PASSED] drm_test_cmdline_vmirror
[00:20:02] [PASSED] drm_test_cmdline_margin_options
[00:20:02] [PASSED] drm_test_cmdline_multiple_options
[00:20:02] [PASSED] drm_test_cmdline_bpp_extra_and_option
[00:20:02] [PASSED] drm_test_cmdline_extra_and_option
[00:20:02] [PASSED] drm_test_cmdline_freestanding_options
[00:20:02] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[00:20:02] [PASSED] drm_test_cmdline_panel_orientation
[00:20:02] ================ drm_test_cmdline_invalid =================
[00:20:02] [PASSED] margin_only
[00:20:02] [PASSED] interlace_only
[00:20:02] [PASSED] res_missing_x
[00:20:02] [PASSED] res_missing_y
[00:20:02] [PASSED] res_bad_y
[00:20:02] [PASSED] res_missing_y_bpp
[00:20:02] [PASSED] res_bad_bpp
[00:20:02] [PASSED] res_bad_refresh
[00:20:02] [PASSED] res_bpp_refresh_force_on_off
[00:20:02] [PASSED] res_invalid_mode
[00:20:02] [PASSED] res_bpp_wrong_place_mode
[00:20:02] [PASSED] name_bpp_refresh
[00:20:02] [PASSED] name_refresh
[00:20:02] [PASSED] name_refresh_wrong_mode
[00:20:02] [PASSED] name_refresh_invalid_mode
[00:20:02] [PASSED] rotate_multiple
[00:20:02] [PASSED] rotate_invalid_val
[00:20:02] [PASSED] rotate_truncated
[00:20:02] [PASSED] invalid_option
[00:20:02] [PASSED] invalid_tv_option
[00:20:02] [PASSED] truncated_tv_option
[00:20:02] ============ [PASSED] drm_test_cmdline_invalid =============
[00:20:02] =============== drm_test_cmdline_tv_options ===============
[00:20:02] [PASSED] NTSC
[00:20:02] [PASSED] NTSC_443
[00:20:02] [PASSED] NTSC_J
[00:20:02] [PASSED] PAL
[00:20:02] [PASSED] PAL_M
[00:20:02] [PASSED] PAL_N
[00:20:02] [PASSED] SECAM
[00:20:02] [PASSED] MONO_525
[00:20:02] [PASSED] MONO_625
[00:20:02] =========== [PASSED] drm_test_cmdline_tv_options ===========
[00:20:02] =============== [PASSED] drm_cmdline_parser ================
[00:20:02] ========== drmm_connector_hdmi_init (20 subtests) ==========
[00:20:02] [PASSED] drm_test_connector_hdmi_init_valid
[00:20:02] [PASSED] drm_test_connector_hdmi_init_bpc_8
[00:20:02] [PASSED] drm_test_connector_hdmi_init_bpc_10
[00:20:02] [PASSED] drm_test_connector_hdmi_init_bpc_12
[00:20:02] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[00:20:02] [PASSED] drm_test_connector_hdmi_init_bpc_null
[00:20:02] [PASSED] drm_test_connector_hdmi_init_formats_empty
[00:20:02] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[00:20:02] === drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[00:20:02] [PASSED] supported_formats=0x9 yuv420_allowed=1
[00:20:02] [PASSED] supported_formats=0x9 yuv420_allowed=0
[00:20:02] [PASSED] supported_formats=0x3 yuv420_allowed=1
[00:20:02] [PASSED] supported_formats=0x3 yuv420_allowed=0
[00:20:02] === [PASSED] drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[00:20:02] [PASSED] drm_test_connector_hdmi_init_null_ddc
[00:20:02] [PASSED] drm_test_connector_hdmi_init_null_product
[00:20:02] [PASSED] drm_test_connector_hdmi_init_null_vendor
[00:20:02] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[00:20:02] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[00:20:02] [PASSED] drm_test_connector_hdmi_init_product_valid
[00:20:02] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[00:20:02] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[00:20:02] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[00:20:02] ========= drm_test_connector_hdmi_init_type_valid =========
[00:20:02] [PASSED] HDMI-A
[00:20:02] [PASSED] HDMI-B
[00:20:02] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[00:20:02] ======== drm_test_connector_hdmi_init_type_invalid ========
[00:20:02] [PASSED] Unknown
[00:20:02] [PASSED] VGA
[00:20:02] [PASSED] DVI-I
[00:20:02] [PASSED] DVI-D
[00:20:02] [PASSED] DVI-A
[00:20:02] [PASSED] Composite
[00:20:02] [PASSED] SVIDEO
[00:20:02] [PASSED] LVDS
[00:20:02] [PASSED] Component
[00:20:02] [PASSED] DIN
[00:20:02] [PASSED] DP
[00:20:02] [PASSED] TV
[00:20:02] [PASSED] eDP
[00:20:02] [PASSED] Virtual
[00:20:02] [PASSED] DSI
[00:20:02] [PASSED] DPI
[00:20:02] [PASSED] Writeback
[00:20:02] [PASSED] SPI
[00:20:02] [PASSED] USB
[00:20:02] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[00:20:02] ============ [PASSED] drmm_connector_hdmi_init =============
[00:20:02] ============= drmm_connector_init (3 subtests) =============
[00:20:02] [PASSED] drm_test_drmm_connector_init
[00:20:02] [PASSED] drm_test_drmm_connector_init_null_ddc
[00:20:02] ========= drm_test_drmm_connector_init_type_valid =========
[00:20:02] [PASSED] Unknown
[00:20:02] [PASSED] VGA
[00:20:02] [PASSED] DVI-I
[00:20:02] [PASSED] DVI-D
[00:20:02] [PASSED] DVI-A
[00:20:02] [PASSED] Composite
[00:20:02] [PASSED] SVIDEO
[00:20:02] [PASSED] LVDS
[00:20:02] [PASSED] Component
[00:20:02] [PASSED] DIN
[00:20:02] [PASSED] DP
[00:20:02] [PASSED] HDMI-A
[00:20:02] [PASSED] HDMI-B
[00:20:02] [PASSED] TV
[00:20:02] [PASSED] eDP
[00:20:02] [PASSED] Virtual
[00:20:02] [PASSED] DSI
[00:20:02] [PASSED] DPI
[00:20:02] [PASSED] Writeback
[00:20:02] [PASSED] SPI
[00:20:02] [PASSED] USB
[00:20:02] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[00:20:02] =============== [PASSED] drmm_connector_init ===============
[00:20:02] ========= drm_connector_dynamic_init (6 subtests) ==========
[00:20:02] [PASSED] drm_test_drm_connector_dynamic_init
[00:20:02] [PASSED] drm_test_drm_connector_dynamic_init_null_ddc
[00:20:02] [PASSED] drm_test_drm_connector_dynamic_init_not_added
[00:20:02] [PASSED] drm_test_drm_connector_dynamic_init_properties
[00:20:02] ===== drm_test_drm_connector_dynamic_init_type_valid ======
[00:20:02] [PASSED] Unknown
[00:20:02] [PASSED] VGA
[00:20:02] [PASSED] DVI-I
[00:20:02] [PASSED] DVI-D
[00:20:02] [PASSED] DVI-A
[00:20:02] [PASSED] Composite
[00:20:02] [PASSED] SVIDEO
[00:20:02] [PASSED] LVDS
[00:20:02] [PASSED] Component
[00:20:02] [PASSED] DIN
[00:20:02] [PASSED] DP
[00:20:02] [PASSED] HDMI-A
[00:20:02] [PASSED] HDMI-B
[00:20:02] [PASSED] TV
[00:20:02] [PASSED] eDP
[00:20:02] [PASSED] Virtual
[00:20:02] [PASSED] DSI
[00:20:02] [PASSED] DPI
[00:20:02] [PASSED] Writeback
[00:20:02] [PASSED] SPI
[00:20:02] [PASSED] USB
[00:20:02] = [PASSED] drm_test_drm_connector_dynamic_init_type_valid ==
[00:20:02] ======== drm_test_drm_connector_dynamic_init_name =========
[00:20:02] [PASSED] Unknown
[00:20:02] [PASSED] VGA
[00:20:02] [PASSED] DVI-I
[00:20:02] [PASSED] DVI-D
[00:20:02] [PASSED] DVI-A
[00:20:02] [PASSED] Composite
[00:20:02] [PASSED] SVIDEO
[00:20:02] [PASSED] LVDS
[00:20:02] [PASSED] Component
[00:20:02] [PASSED] DIN
[00:20:02] [PASSED] DP
[00:20:02] [PASSED] HDMI-A
[00:20:02] [PASSED] HDMI-B
[00:20:02] [PASSED] TV
[00:20:02] [PASSED] eDP
[00:20:02] [PASSED] Virtual
[00:20:02] [PASSED] DSI
[00:20:02] [PASSED] DPI
[00:20:02] [PASSED] Writeback
[00:20:02] [PASSED] SPI
[00:20:02] [PASSED] USB
[00:20:02] ==== [PASSED] drm_test_drm_connector_dynamic_init_name =====
[00:20:02] =========== [PASSED] drm_connector_dynamic_init ============
[00:20:02] ==== drm_connector_dynamic_register_early (4 subtests) =====
[00:20:02] [PASSED] drm_test_drm_connector_dynamic_register_early_on_list
[00:20:02] [PASSED] drm_test_drm_connector_dynamic_register_early_defer
[00:20:02] [PASSED] drm_test_drm_connector_dynamic_register_early_no_init
[00:20:02] [PASSED] drm_test_drm_connector_dynamic_register_early_no_mode_object
[00:20:02] ====== [PASSED] drm_connector_dynamic_register_early =======
[00:20:02] ======= drm_connector_dynamic_register (7 subtests) ========
[00:20:02] [PASSED] drm_test_drm_connector_dynamic_register_on_list
[00:20:02] [PASSED] drm_test_drm_connector_dynamic_register_no_defer
[00:20:02] [PASSED] drm_test_drm_connector_dynamic_register_no_init
[00:20:02] [PASSED] drm_test_drm_connector_dynamic_register_mode_object
[00:20:02] [PASSED] drm_test_drm_connector_dynamic_register_sysfs
[00:20:02] [PASSED] drm_test_drm_connector_dynamic_register_sysfs_name
[00:20:02] [PASSED] drm_test_drm_connector_dynamic_register_debugfs
[00:20:02] ========= [PASSED] drm_connector_dynamic_register ==========
[00:20:02] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[00:20:02] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[00:20:02] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[00:20:02] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[00:20:02] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[00:20:02] ========== drm_test_get_tv_mode_from_name_valid ===========
[00:20:02] [PASSED] NTSC
[00:20:02] [PASSED] NTSC-443
[00:20:02] [PASSED] NTSC-J
[00:20:02] [PASSED] PAL
[00:20:02] [PASSED] PAL-M
[00:20:02] [PASSED] PAL-N
[00:20:02] [PASSED] SECAM
[00:20:02] [PASSED] Mono
[00:20:02] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[00:20:02] [PASSED] drm_test_get_tv_mode_from_name_truncated
[00:20:02] ============ [PASSED] drm_get_tv_mode_from_name ============
[00:20:02] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[00:20:02] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[00:20:02] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[00:20:02] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[00:20:02] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[00:20:02] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[00:20:02] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[00:20:02] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid =
[00:20:02] [PASSED] VIC 96
[00:20:02] [PASSED] VIC 97
[00:20:02] [PASSED] VIC 101
[00:20:02] [PASSED] VIC 102
[00:20:02] [PASSED] VIC 106
[00:20:02] [PASSED] VIC 107
[00:20:02] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[00:20:02] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[00:20:02] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[00:20:02] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[00:20:02] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[00:20:02] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[00:20:02] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[00:20:02] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[00:20:02] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name ====
[00:20:02] [PASSED] Automatic
[00:20:02] [PASSED] Full
[00:20:02] [PASSED] Limited 16:235
[00:20:02] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[00:20:02] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[00:20:02] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[00:20:02] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[00:20:02] === drm_test_drm_hdmi_connector_get_output_format_name ====
[00:20:02] [PASSED] RGB
[00:20:02] [PASSED] YUV 4:2:0
[00:20:02] [PASSED] YUV 4:2:2
[00:20:02] [PASSED] YUV 4:4:4
[00:20:02] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[00:20:02] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[00:20:02] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[00:20:02] ============= drm_damage_helper (21 subtests) ==============
[00:20:02] [PASSED] drm_test_damage_iter_no_damage
[00:20:02] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[00:20:02] [PASSED] drm_test_damage_iter_no_damage_src_moved
[00:20:02] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[00:20:02] [PASSED] drm_test_damage_iter_no_damage_not_visible
[00:20:02] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[00:20:02] [PASSED] drm_test_damage_iter_no_damage_no_fb
[00:20:02] [PASSED] drm_test_damage_iter_simple_damage
[00:20:02] [PASSED] drm_test_damage_iter_single_damage
[00:20:02] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[00:20:02] [PASSED] drm_test_damage_iter_single_damage_outside_src
[00:20:02] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[00:20:02] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[00:20:02] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[00:20:02] [PASSED] drm_test_damage_iter_single_damage_src_moved
[00:20:02] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[00:20:02] [PASSED] drm_test_damage_iter_damage
[00:20:02] [PASSED] drm_test_damage_iter_damage_one_intersect
[00:20:02] [PASSED] drm_test_damage_iter_damage_one_outside
[00:20:02] [PASSED] drm_test_damage_iter_damage_src_moved
[00:20:02] [PASSED] drm_test_damage_iter_damage_not_visible
[00:20:02] ================ [PASSED] drm_damage_helper ================
[00:20:02] ============== drm_dp_mst_helper (3 subtests) ==============
[00:20:02] ============== drm_test_dp_mst_calc_pbn_mode ==============
[00:20:02] [PASSED] Clock 154000 BPP 30 DSC disabled
[00:20:02] [PASSED] Clock 234000 BPP 30 DSC disabled
[00:20:02] [PASSED] Clock 297000 BPP 24 DSC disabled
[00:20:02] [PASSED] Clock 332880 BPP 24 DSC enabled
[00:20:02] [PASSED] Clock 324540 BPP 24 DSC enabled
[00:20:02] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[00:20:02] ============== drm_test_dp_mst_calc_pbn_div ===============
[00:20:02] [PASSED] Link rate 2000000 lane count 4
[00:20:02] [PASSED] Link rate 2000000 lane count 2
[00:20:02] [PASSED] Link rate 2000000 lane count 1
[00:20:02] [PASSED] Link rate 1350000 lane count 4
[00:20:02] [PASSED] Link rate 1350000 lane count 2
[00:20:02] [PASSED] Link rate 1350000 lane count 1
[00:20:02] [PASSED] Link rate 1000000 lane count 4
[00:20:02] [PASSED] Link rate 1000000 lane count 2
[00:20:02] [PASSED] Link rate 1000000 lane count 1
[00:20:02] [PASSED] Link rate 810000 lane count 4
[00:20:02] [PASSED] Link rate 810000 lane count 2
[00:20:02] [PASSED] Link rate 810000 lane count 1
[00:20:02] [PASSED] Link rate 540000 lane count 4
[00:20:02] [PASSED] Link rate 540000 lane count 2
[00:20:02] [PASSED] Link rate 540000 lane count 1
[00:20:02] [PASSED] Link rate 270000 lane count 4
[00:20:02] [PASSED] Link rate 270000 lane count 2
[00:20:02] [PASSED] Link rate 270000 lane count 1
[00:20:02] [PASSED] Link rate 162000 lane count 4
[00:20:02] [PASSED] Link rate 162000 lane count 2
[00:20:02] [PASSED] Link rate 162000 lane count 1
[00:20:02] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[00:20:02] ========= drm_test_dp_mst_sideband_msg_req_decode =========
[00:20:02] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[00:20:02] [PASSED] DP_POWER_UP_PHY with port number
[00:20:02] [PASSED] DP_POWER_DOWN_PHY with port number
[00:20:02] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[00:20:02] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[00:20:02] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[00:20:02] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[00:20:02] [PASSED] DP_QUERY_PAYLOAD with port number
[00:20:02] [PASSED] DP_QUERY_PAYLOAD with VCPI
[00:20:02] [PASSED] DP_REMOTE_DPCD_READ with port number
[00:20:02] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[00:20:02] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[00:20:02] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[00:20:02] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[00:20:02] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[00:20:02] [PASSED] DP_REMOTE_I2C_READ with port number
[00:20:02] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[00:20:02] [PASSED] DP_REMOTE_I2C_READ with transactions array
[00:20:02] [PASSED] DP_REMOTE_I2C_WRITE with port number
[00:20:02] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[00:20:02] [PASSED] DP_REMOTE_I2C_WRITE with data array
[00:20:02] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[00:20:02] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[00:20:02] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[00:20:02] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[00:20:02] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[00:20:02] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[00:20:02] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[00:20:02] ================ [PASSED] drm_dp_mst_helper ================
[00:20:02] ================== drm_exec (7 subtests) ===================
[00:20:02] [PASSED] sanitycheck
[00:20:02] [PASSED] test_lock
[00:20:02] [PASSED] test_lock_unlock
[00:20:02] [PASSED] test_duplicates
[00:20:02] [PASSED] test_prepare
[00:20:02] [PASSED] test_prepare_array
[00:20:02] [PASSED] test_multiple_loops
[00:20:02] ==================== [PASSED] drm_exec =====================
[00:20:02] =========== drm_format_helper_test (17 subtests) ===========
[00:20:02] ============== drm_test_fb_xrgb8888_to_gray8 ==============
[00:20:02] [PASSED] single_pixel_source_buffer
[00:20:02] [PASSED] single_pixel_clip_rectangle
[00:20:02] [PASSED] well_known_colors
[00:20:02] [PASSED] destination_pitch
[00:20:02] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[00:20:02] ============= drm_test_fb_xrgb8888_to_rgb332 ==============
[00:20:02] [PASSED] single_pixel_source_buffer
[00:20:02] [PASSED] single_pixel_clip_rectangle
[00:20:02] [PASSED] well_known_colors
[00:20:02] [PASSED] destination_pitch
[00:20:02] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[00:20:02] ============= drm_test_fb_xrgb8888_to_rgb565 ==============
[00:20:02] [PASSED] single_pixel_source_buffer
[00:20:02] [PASSED] single_pixel_clip_rectangle
[00:20:02] [PASSED] well_known_colors
[00:20:02] [PASSED] destination_pitch
[00:20:02] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[00:20:02] ============ drm_test_fb_xrgb8888_to_xrgb1555 =============
[00:20:02] [PASSED] single_pixel_source_buffer
[00:20:02] [PASSED] single_pixel_clip_rectangle
[00:20:02] [PASSED] well_known_colors
[00:20:02] [PASSED] destination_pitch
[00:20:02] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[00:20:02] ============ drm_test_fb_xrgb8888_to_argb1555 =============
[00:20:02] [PASSED] single_pixel_source_buffer
[00:20:02] [PASSED] single_pixel_clip_rectangle
[00:20:02] [PASSED] well_known_colors
[00:20:02] [PASSED] destination_pitch
[00:20:02] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[00:20:02] ============ drm_test_fb_xrgb8888_to_rgba5551 =============
[00:20:02] [PASSED] single_pixel_source_buffer
[00:20:02] [PASSED] single_pixel_clip_rectangle
[00:20:02] [PASSED] well_known_colors
[00:20:02] [PASSED] destination_pitch
[00:20:02] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[00:20:02] ============= drm_test_fb_xrgb8888_to_rgb888 ==============
[00:20:02] [PASSED] single_pixel_source_buffer
[00:20:02] [PASSED] single_pixel_clip_rectangle
[00:20:02] [PASSED] well_known_colors
[00:20:02] [PASSED] destination_pitch
[00:20:02] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[00:20:02] ============= drm_test_fb_xrgb8888_to_bgr888 ==============
[00:20:02] [PASSED] single_pixel_source_buffer
[00:20:02] [PASSED] single_pixel_clip_rectangle
[00:20:02] [PASSED] well_known_colors
[00:20:02] [PASSED] destination_pitch
[00:20:02] ========= [PASSED] drm_test_fb_xrgb8888_to_bgr888 ==========
[00:20:02] ============ drm_test_fb_xrgb8888_to_argb8888 =============
[00:20:02] [PASSED] single_pixel_source_buffer
[00:20:02] [PASSED] single_pixel_clip_rectangle
[00:20:02] [PASSED] well_known_colors
[00:20:02] [PASSED] destination_pitch
[00:20:02] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[00:20:02] =========== drm_test_fb_xrgb8888_to_xrgb2101010 ===========
[00:20:02] [PASSED] single_pixel_source_buffer
[00:20:02] [PASSED] single_pixel_clip_rectangle
[00:20:02] [PASSED] well_known_colors
[00:20:02] [PASSED] destination_pitch
[00:20:02] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[00:20:02] =========== drm_test_fb_xrgb8888_to_argb2101010 ===========
[00:20:02] [PASSED] single_pixel_source_buffer
[00:20:02] [PASSED] single_pixel_clip_rectangle
[00:20:02] [PASSED] well_known_colors
[00:20:02] [PASSED] destination_pitch
[00:20:02] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[00:20:02] ============== drm_test_fb_xrgb8888_to_mono ===============
[00:20:02] [PASSED] single_pixel_source_buffer
[00:20:02] [PASSED] single_pixel_clip_rectangle
[00:20:02] [PASSED] well_known_colors
[00:20:02] [PASSED] destination_pitch
[00:20:02] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[00:20:02] ==================== drm_test_fb_swab =====================
[00:20:02] [PASSED] single_pixel_source_buffer
[00:20:02] [PASSED] single_pixel_clip_rectangle
[00:20:02] [PASSED] well_known_colors
[00:20:02] [PASSED] destination_pitch
[00:20:02] ================ [PASSED] drm_test_fb_swab =================
[00:20:02] ============ drm_test_fb_xrgb8888_to_xbgr8888 =============
[00:20:02] [PASSED] single_pixel_source_buffer
[00:20:02] [PASSED] single_pixel_clip_rectangle
[00:20:02] [PASSED] well_known_colors
[00:20:02] [PASSED] destination_pitch
[00:20:02] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[00:20:02] ============ drm_test_fb_xrgb8888_to_abgr8888 =============
[00:20:02] [PASSED] single_pixel_source_buffer
[00:20:02] [PASSED] single_pixel_clip_rectangle
[00:20:02] [PASSED] well_known_colors
[00:20:02] [PASSED] destination_pitch
[00:20:02] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[00:20:02] ================= drm_test_fb_clip_offset =================
[00:20:02] [PASSED] pass through
[00:20:02] [PASSED] horizontal offset
[00:20:02] [PASSED] vertical offset
[00:20:02] [PASSED] horizontal and vertical offset
[00:20:02] [PASSED] horizontal offset (custom pitch)
[00:20:02] [PASSED] vertical offset (custom pitch)
[00:20:02] [PASSED] horizontal and vertical offset (custom pitch)
[00:20:02] ============= [PASSED] drm_test_fb_clip_offset =============
[00:20:02] =================== drm_test_fb_memcpy ====================
[00:20:02] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[00:20:02] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[00:20:02] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[00:20:02] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[00:20:02] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[00:20:02] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[00:20:02] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[00:20:02] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[00:20:02] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[00:20:02] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[00:20:02] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[00:20:02] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[00:20:02] =============== [PASSED] drm_test_fb_memcpy ================
[00:20:02] ============= [PASSED] drm_format_helper_test ==============
[00:20:02] ================= drm_format (18 subtests) =================
[00:20:02] [PASSED] drm_test_format_block_width_invalid
[00:20:02] [PASSED] drm_test_format_block_width_one_plane
[00:20:02] [PASSED] drm_test_format_block_width_two_plane
[00:20:02] [PASSED] drm_test_format_block_width_three_plane
[00:20:02] [PASSED] drm_test_format_block_width_tiled
[00:20:02] [PASSED] drm_test_format_block_height_invalid
[00:20:02] [PASSED] drm_test_format_block_height_one_plane
[00:20:02] [PASSED] drm_test_format_block_height_two_plane
[00:20:02] [PASSED] drm_test_format_block_height_three_plane
[00:20:02] [PASSED] drm_test_format_block_height_tiled
[00:20:02] [PASSED] drm_test_format_min_pitch_invalid
[00:20:02] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[00:20:02] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[00:20:02] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[00:20:02] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[00:20:02] [PASSED] drm_test_format_min_pitch_two_plane
[00:20:02] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[00:20:02] [PASSED] drm_test_format_min_pitch_tiled
[00:20:02] =================== [PASSED] drm_format ====================
[00:20:02] ============== drm_framebuffer (10 subtests) ===============
[00:20:02] ========== drm_test_framebuffer_check_src_coords ==========
[00:20:02] [PASSED] Success: source fits into fb
[00:20:02] [PASSED] Fail: overflowing fb with x-axis coordinate
[00:20:02] [PASSED] Fail: overflowing fb with y-axis coordinate
[00:20:02] [PASSED] Fail: overflowing fb with source width
[00:20:02] [PASSED] Fail: overflowing fb with source height
[00:20:02] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[00:20:02] [PASSED] drm_test_framebuffer_cleanup
[00:20:02] =============== drm_test_framebuffer_create ===============
[00:20:02] [PASSED] ABGR8888 normal sizes
[00:20:02] [PASSED] ABGR8888 max sizes
[00:20:02] [PASSED] ABGR8888 pitch greater than min required
[00:20:02] [PASSED] ABGR8888 pitch less than min required
[00:20:02] [PASSED] ABGR8888 Invalid width
[00:20:02] [PASSED] ABGR8888 Invalid buffer handle
[00:20:02] [PASSED] No pixel format
[00:20:02] [PASSED] ABGR8888 Width 0
[00:20:02] [PASSED] ABGR8888 Height 0
[00:20:02] [PASSED] ABGR8888 Out of bound height * pitch combination
[00:20:02] [PASSED] ABGR8888 Large buffer offset
[00:20:02] [PASSED] ABGR8888 Buffer offset for inexistent plane
[00:20:02] [PASSED] ABGR8888 Invalid flag
[00:20:02] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[00:20:02] [PASSED] ABGR8888 Valid buffer modifier
[00:20:02] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[00:20:02] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[00:20:02] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[00:20:02] [PASSED] NV12 Normal sizes
[00:20:02] [PASSED] NV12 Max sizes
[00:20:02] [PASSED] NV12 Invalid pitch
[00:20:02] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[00:20:02] [PASSED] NV12 different modifier per-plane
[00:20:02] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[00:20:02] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[00:20:02] [PASSED] NV12 Modifier for inexistent plane
[00:20:02] [PASSED] NV12 Handle for inexistent plane
[00:20:02] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[00:20:02] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[00:20:02] [PASSED] YVU420 Normal sizes
[00:20:02] [PASSED] YVU420 Max sizes
[00:20:02] [PASSED] YVU420 Invalid pitch
[00:20:02] [PASSED] YVU420 Different pitches
[00:20:02] [PASSED] YVU420 Different buffer offsets/pitches
[00:20:02] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[00:20:02] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[00:20:02] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[00:20:02] [PASSED] YVU420 Valid modifier
[00:20:02] [PASSED] YVU420 Different modifiers per plane
[00:20:02] [PASSED] YVU420 Modifier for inexistent plane
[00:20:02] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[00:20:02] [PASSED] X0L2 Normal sizes
[00:20:02] [PASSED] X0L2 Max sizes
[00:20:02] [PASSED] X0L2 Invalid pitch
[00:20:02] [PASSED] X0L2 Pitch greater than minimum required
[00:20:02] [PASSED] X0L2 Handle for inexistent plane
[00:20:02] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[00:20:02] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[00:20:02] [PASSED] X0L2 Valid modifier
[00:20:02] [PASSED] X0L2 Modifier for inexistent plane
[00:20:02] =========== [PASSED] drm_test_framebuffer_create ===========
[00:20:02] [PASSED] drm_test_framebuffer_free
[00:20:02] [PASSED] drm_test_framebuffer_init
[00:20:02] [PASSED] drm_test_framebuffer_init_bad_format
[00:20:02] [PASSED] drm_test_framebuffer_init_dev_mismatch
[00:20:02] [PASSED] drm_test_framebuffer_lookup
[00:20:02] [PASSED] drm_test_framebuffer_lookup_inexistent
[00:20:02] [PASSED] drm_test_framebuffer_modifiers_not_supported
[00:20:02] ================= [PASSED] drm_framebuffer =================
[00:20:02] ================ drm_gem_shmem (8 subtests) ================
[00:20:02] [PASSED] drm_gem_shmem_test_obj_create
[00:20:02] [PASSED] drm_gem_shmem_test_obj_create_private
[00:20:02] [PASSED] drm_gem_shmem_test_pin_pages
[00:20:02] [PASSED] drm_gem_shmem_test_vmap
[00:20:02] [PASSED] drm_gem_shmem_test_get_pages_sgt
[00:20:02] [PASSED] drm_gem_shmem_test_get_sg_table
[00:20:02] [PASSED] drm_gem_shmem_test_madvise
[00:20:02] [PASSED] drm_gem_shmem_test_purge
[00:20:02] ================== [PASSED] drm_gem_shmem ==================
[00:20:02] === drm_atomic_helper_connector_hdmi_check (27 subtests) ===
[00:20:02] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[00:20:02] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[00:20:02] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[00:20:02] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[00:20:02] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[00:20:02] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[00:20:02] ====== drm_test_check_broadcast_rgb_cea_mode_yuv420 =======
[00:20:02] [PASSED] Automatic
[00:20:02] [PASSED] Full
[00:20:02] [PASSED] Limited 16:235
[00:20:02] == [PASSED] drm_test_check_broadcast_rgb_cea_mode_yuv420 ===
[00:20:02] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[00:20:02] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[00:20:02] [PASSED] drm_test_check_disable_connector
[00:20:02] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[00:20:02] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_rgb
[00:20:02] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_yuv420
[00:20:02] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv422
[00:20:02] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv420
[00:20:02] [PASSED] drm_test_check_driver_unsupported_fallback_yuv420
[00:20:02] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[00:20:02] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[00:20:02] [PASSED] drm_test_check_output_bpc_dvi
[00:20:02] [PASSED] drm_test_check_output_bpc_format_vic_1
[00:20:02] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[00:20:02] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[00:20:02] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[00:20:02] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[00:20:02] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[00:20:02] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[00:20:02] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[00:20:02] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[00:20:02] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[00:20:02] [PASSED] drm_test_check_broadcast_rgb_value
[00:20:02] [PASSED] drm_test_check_bpc_8_value
[00:20:02] [PASSED] drm_test_check_bpc_10_value
[00:20:02] [PASSED] drm_test_check_bpc_12_value
[00:20:02] [PASSED] drm_test_check_format_value
[00:20:02] [PASSED] drm_test_check_tmds_char_value
[00:20:02] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[00:20:02] = drm_atomic_helper_connector_hdmi_mode_valid (4 subtests) =
[00:20:02] [PASSED] drm_test_check_mode_valid
[00:20:02] [PASSED] drm_test_check_mode_valid_reject
[00:20:02] [PASSED] drm_test_check_mode_valid_reject_rate
[00:20:02] [PASSED] drm_test_check_mode_valid_reject_max_clock
[00:20:02] === [PASSED] drm_atomic_helper_connector_hdmi_mode_valid ===
[00:20:02] ================= drm_managed (2 subtests) =================
[00:20:02] [PASSED] drm_test_managed_release_action
[00:20:02] [PASSED] drm_test_managed_run_action
[00:20:02] =================== [PASSED] drm_managed ===================
[00:20:02] =================== drm_mm (6 subtests) ====================
[00:20:02] [PASSED] drm_test_mm_init
[00:20:02] [PASSED] drm_test_mm_debug
[00:20:02] [PASSED] drm_test_mm_align32
[00:20:02] [PASSED] drm_test_mm_align64
[00:20:02] [PASSED] drm_test_mm_lowest
[00:20:02] [PASSED] drm_test_mm_highest
[00:20:02] ===================== [PASSED] drm_mm ======================
[00:20:02] ============= drm_modes_analog_tv (5 subtests) =============
[00:20:02] [PASSED] drm_test_modes_analog_tv_mono_576i
[00:20:02] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[00:20:02] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[00:20:02] [PASSED] drm_test_modes_analog_tv_pal_576i
[00:20:02] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[00:20:02] =============== [PASSED] drm_modes_analog_tv ===============
[00:20:02] ============== drm_plane_helper (2 subtests) ===============
[00:20:02] =============== drm_test_check_plane_state ================
[00:20:02] [PASSED] clipping_simple
[00:20:02] [PASSED] clipping_rotate_reflect
[00:20:02] [PASSED] positioning_simple
[00:20:02] [PASSED] upscaling
[00:20:02] [PASSED] downscaling
[00:20:02] [PASSED] rounding1
[00:20:02] [PASSED] rounding2
[00:20:02] [PASSED] rounding3
[00:20:02] [PASSED] rounding4
[00:20:02] =========== [PASSED] drm_test_check_plane_state ============
[00:20:02] =========== drm_test_check_invalid_plane_state ============
[00:20:02] [PASSED] positioning_invalid
[00:20:02] [PASSED] upscaling_invalid
[00:20:02] [PASSED] downscaling_invalid
[00:20:02] ======= [PASSED] drm_test_check_invalid_plane_state ========
[00:20:02] ================ [PASSED] drm_plane_helper =================
[00:20:02] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[00:20:02] ====== drm_test_connector_helper_tv_get_modes_check =======
[00:20:02] [PASSED] None
[00:20:02] [PASSED] PAL
[00:20:02] [PASSED] NTSC
[00:20:02] [PASSED] Both, NTSC Default
[00:20:02] [PASSED] Both, PAL Default
[00:20:02] [PASSED] Both, NTSC Default, with PAL on command-line
[00:20:02] [PASSED] Both, PAL Default, with NTSC on command-line
[00:20:02] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[00:20:02] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[00:20:02] ================== drm_rect (9 subtests) ===================
[00:20:02] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[00:20:02] [PASSED] drm_test_rect_clip_scaled_not_clipped
[00:20:02] [PASSED] drm_test_rect_clip_scaled_clipped
[00:20:02] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[00:20:02] ================= drm_test_rect_intersect =================
[00:20:02] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[00:20:02] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[00:20:02] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[00:20:02] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[00:20:02] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[00:20:02] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[00:20:02] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[00:20:02] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[00:20:02] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[00:20:02] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[00:20:02] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[00:20:02] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[00:20:02] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[00:20:02] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[00:20:02] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[00:20:02] ============= [PASSED] drm_test_rect_intersect =============
[00:20:02] ================ drm_test_rect_calc_hscale ================
[00:20:02] [PASSED] normal use
[00:20:02] [PASSED] out of max range
[00:20:02] [PASSED] out of min range
[00:20:02] [PASSED] zero dst
[00:20:02] [PASSED] negative src
[00:20:02] [PASSED] negative dst
[00:20:02] ============ [PASSED] drm_test_rect_calc_hscale ============
[00:20:02] ================ drm_test_rect_calc_vscale ================
[00:20:02] [PASSED] normal use
[00:20:02] [PASSED] out of max range
[00:20:02] [PASSED] out of min range
[00:20:02] [PASSED] zero dst
[00:20:02] [PASSED] negative src
[00:20:02] [PASSED] negative dst
[00:20:02] ============ [PASSED] drm_test_rect_calc_vscale ============
[00:20:02] ================== drm_test_rect_rotate ===================
[00:20:02] [PASSED] reflect-x
[00:20:02] [PASSED] reflect-y
[00:20:02] [PASSED] rotate-0
[00:20:02] [PASSED] rotate-90
[00:20:02] [PASSED] rotate-180
[00:20:02] [PASSED] rotate-270
[00:20:02] ============== [PASSED] drm_test_rect_rotate ===============
[00:20:02] ================ drm_test_rect_rotate_inv =================
[00:20:02] [PASSED] reflect-x
[00:20:02] [PASSED] reflect-y
[00:20:02] [PASSED] rotate-0
[00:20:02] [PASSED] rotate-90
[00:20:02] [PASSED] rotate-180
[00:20:02] [PASSED] rotate-270
[00:20:02] ============ [PASSED] drm_test_rect_rotate_inv =============
[00:20:02] ==================== [PASSED] drm_rect =====================
[00:20:02] ============ drm_sysfb_modeset_test (1 subtest) ============
[00:20:02] ============ drm_test_sysfb_build_fourcc_list =============
[00:20:02] [PASSED] no native formats
[00:20:02] [PASSED] XRGB8888 as native format
[00:20:02] [PASSED] remove duplicates
[00:20:02] [PASSED] convert alpha formats
[00:20:02] [PASSED] random formats
[00:20:02] ======== [PASSED] drm_test_sysfb_build_fourcc_list =========
[00:20:02] ============= [PASSED] drm_sysfb_modeset_test ==============
[00:20:02] ============================================================
[00:20:02] Testing complete. Ran 622 tests: passed: 622
[00:20:02] Elapsed time: 27.018s total, 1.715s configuring, 24.884s building, 0.391s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[00:20:02] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[00:20:03] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[00:20:13] Starting KUnit Kernel (1/1)...
[00:20:13] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[00:20:13] ================= ttm_device (5 subtests) ==================
[00:20:13] [PASSED] ttm_device_init_basic
[00:20:13] [PASSED] ttm_device_init_multiple
[00:20:13] [PASSED] ttm_device_fini_basic
[00:20:13] [PASSED] ttm_device_init_no_vma_man
[00:20:13] ================== ttm_device_init_pools ==================
[00:20:13] [PASSED] No DMA allocations, no DMA32 required
[00:20:13] [PASSED] DMA allocations, DMA32 required
[00:20:13] [PASSED] No DMA allocations, DMA32 required
[00:20:13] [PASSED] DMA allocations, no DMA32 required
[00:20:13] ============== [PASSED] ttm_device_init_pools ==============
[00:20:13] =================== [PASSED] ttm_device ====================
[00:20:13] ================== ttm_pool (8 subtests) ===================
[00:20:13] ================== ttm_pool_alloc_basic ===================
[00:20:13] [PASSED] One page
[00:20:13] [PASSED] More than one page
[00:20:13] [PASSED] Above the allocation limit
[00:20:13] [PASSED] One page, with coherent DMA mappings enabled
[00:20:13] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[00:20:13] ============== [PASSED] ttm_pool_alloc_basic ===============
[00:20:13] ============== ttm_pool_alloc_basic_dma_addr ==============
[00:20:13] [PASSED] One page
[00:20:13] [PASSED] More than one page
[00:20:13] [PASSED] Above the allocation limit
[00:20:13] [PASSED] One page, with coherent DMA mappings enabled
[00:20:13] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[00:20:13] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[00:20:13] [PASSED] ttm_pool_alloc_order_caching_match
[00:20:13] [PASSED] ttm_pool_alloc_caching_mismatch
[00:20:13] [PASSED] ttm_pool_alloc_order_mismatch
[00:20:13] [PASSED] ttm_pool_free_dma_alloc
[00:20:13] [PASSED] ttm_pool_free_no_dma_alloc
[00:20:13] [PASSED] ttm_pool_fini_basic
[00:20:13] ==================== [PASSED] ttm_pool =====================
[00:20:13] ================ ttm_resource (8 subtests) =================
[00:20:13] ================= ttm_resource_init_basic =================
[00:20:13] [PASSED] Init resource in TTM_PL_SYSTEM
[00:20:13] [PASSED] Init resource in TTM_PL_VRAM
[00:20:13] [PASSED] Init resource in a private placement
[00:20:13] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[00:20:13] ============= [PASSED] ttm_resource_init_basic =============
[00:20:13] [PASSED] ttm_resource_init_pinned
[00:20:13] [PASSED] ttm_resource_fini_basic
[00:20:13] [PASSED] ttm_resource_manager_init_basic
[00:20:13] [PASSED] ttm_resource_manager_usage_basic
[00:20:13] [PASSED] ttm_resource_manager_set_used_basic
[00:20:13] [PASSED] ttm_sys_man_alloc_basic
[00:20:13] [PASSED] ttm_sys_man_free_basic
[00:20:13] ================== [PASSED] ttm_resource ===================
[00:20:13] =================== ttm_tt (15 subtests) ===================
[00:20:13] ==================== ttm_tt_init_basic ====================
[00:20:13] [PASSED] Page-aligned size
[00:20:13] [PASSED] Extra pages requested
[00:20:13] ================ [PASSED] ttm_tt_init_basic ================
[00:20:13] [PASSED] ttm_tt_init_misaligned
[00:20:13] [PASSED] ttm_tt_fini_basic
[00:20:13] [PASSED] ttm_tt_fini_sg
[00:20:13] [PASSED] ttm_tt_fini_shmem
[00:20:13] [PASSED] ttm_tt_create_basic
[00:20:13] [PASSED] ttm_tt_create_invalid_bo_type
[00:20:13] [PASSED] ttm_tt_create_ttm_exists
[00:20:13] [PASSED] ttm_tt_create_failed
[00:20:13] [PASSED] ttm_tt_destroy_basic
[00:20:13] [PASSED] ttm_tt_populate_null_ttm
[00:20:13] [PASSED] ttm_tt_populate_populated_ttm
[00:20:13] [PASSED] ttm_tt_unpopulate_basic
[00:20:13] [PASSED] ttm_tt_unpopulate_empty_ttm
[00:20:13] [PASSED] ttm_tt_swapin_basic
[00:20:13] ===================== [PASSED] ttm_tt ======================
[00:20:13] =================== ttm_bo (14 subtests) ===================
[00:20:13] =========== ttm_bo_reserve_optimistic_no_ticket ===========
[00:20:13] [PASSED] Cannot be interrupted and sleeps
[00:20:13] [PASSED] Cannot be interrupted, locks straight away
[00:20:13] [PASSED] Can be interrupted, sleeps
[00:20:13] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[00:20:13] [PASSED] ttm_bo_reserve_locked_no_sleep
[00:20:13] [PASSED] ttm_bo_reserve_no_wait_ticket
[00:20:13] [PASSED] ttm_bo_reserve_double_resv
[00:20:13] [PASSED] ttm_bo_reserve_interrupted
[00:20:13] [PASSED] ttm_bo_reserve_deadlock
[00:20:13] [PASSED] ttm_bo_unreserve_basic
[00:20:13] [PASSED] ttm_bo_unreserve_pinned
[00:20:13] [PASSED] ttm_bo_unreserve_bulk
[00:20:13] [PASSED] ttm_bo_fini_basic
[00:20:13] [PASSED] ttm_bo_fini_shared_resv
[00:20:13] [PASSED] ttm_bo_pin_basic
[00:20:13] [PASSED] ttm_bo_pin_unpin_resource
[00:20:13] [PASSED] ttm_bo_multiple_pin_one_unpin
[00:20:13] ===================== [PASSED] ttm_bo ======================
[00:20:13] ============== ttm_bo_validate (21 subtests) ===============
[00:20:13] ============== ttm_bo_init_reserved_sys_man ===============
[00:20:13] [PASSED] Buffer object for userspace
[00:20:13] [PASSED] Kernel buffer object
[00:20:13] [PASSED] Shared buffer object
[00:20:13] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[00:20:13] ============== ttm_bo_init_reserved_mock_man ==============
[00:20:13] [PASSED] Buffer object for userspace
[00:20:13] [PASSED] Kernel buffer object
[00:20:13] [PASSED] Shared buffer object
[00:20:13] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[00:20:13] [PASSED] ttm_bo_init_reserved_resv
[00:20:13] ================== ttm_bo_validate_basic ==================
[00:20:13] [PASSED] Buffer object for userspace
[00:20:13] [PASSED] Kernel buffer object
[00:20:13] [PASSED] Shared buffer object
[00:20:13] ============== [PASSED] ttm_bo_validate_basic ==============
[00:20:13] [PASSED] ttm_bo_validate_invalid_placement
[00:20:13] ============= ttm_bo_validate_same_placement ==============
[00:20:13] [PASSED] System manager
[00:20:13] [PASSED] VRAM manager
[00:20:13] ========= [PASSED] ttm_bo_validate_same_placement ==========
[00:20:13] [PASSED] ttm_bo_validate_failed_alloc
[00:20:13] [PASSED] ttm_bo_validate_pinned
[00:20:13] [PASSED] ttm_bo_validate_busy_placement
[00:20:13] ================ ttm_bo_validate_multihop =================
[00:20:13] [PASSED] Buffer object for userspace
[00:20:13] [PASSED] Kernel buffer object
[00:20:13] [PASSED] Shared buffer object
[00:20:13] ============ [PASSED] ttm_bo_validate_multihop =============
[00:20:13] ========== ttm_bo_validate_no_placement_signaled ==========
[00:20:13] [PASSED] Buffer object in system domain, no page vector
[00:20:13] [PASSED] Buffer object in system domain with an existing page vector
[00:20:13] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[00:20:13] ======== ttm_bo_validate_no_placement_not_signaled ========
[00:20:13] [PASSED] Buffer object for userspace
[00:20:13] [PASSED] Kernel buffer object
[00:20:13] [PASSED] Shared buffer object
[00:20:13] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[00:20:13] [PASSED] ttm_bo_validate_move_fence_signaled
[00:20:13] ========= ttm_bo_validate_move_fence_not_signaled =========
[00:20:13] [PASSED] Waits for GPU
[00:20:13] [PASSED] Tries to lock straight away
[00:20:13] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[00:20:13] [PASSED] ttm_bo_validate_happy_evict
[00:20:13] [PASSED] ttm_bo_validate_all_pinned_evict
[00:20:13] [PASSED] ttm_bo_validate_allowed_only_evict
[00:20:13] [PASSED] ttm_bo_validate_deleted_evict
[00:20:13] [PASSED] ttm_bo_validate_busy_domain_evict
[00:20:13] [PASSED] ttm_bo_validate_evict_gutting
[00:20:13] [PASSED] ttm_bo_validate_recrusive_evict
[00:20:13] ================= [PASSED] ttm_bo_validate =================
[00:20:13] ============================================================
[00:20:13] Testing complete. Ran 101 tests: passed: 101
[00:20:13] Elapsed time: 11.394s total, 1.717s configuring, 9.461s building, 0.187s running
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
^ permalink raw reply	[flat|nested] 74+ messages in thread

* ✓ Xe.CI.BAT: success for Scope-based forcewake and runtime PM (rev3)
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (30 preceding siblings ...)
2025-11-11 0:20 ` ✓ CI.KUnit: success for Scope-based forcewake and runtime PM (rev3) Patchwork
@ 2025-11-11 0:57 ` Patchwork
2025-11-11 10:50 ` ✗ Xe.CI.Full: failure " Patchwork
` (2 subsequent siblings)
34 siblings, 0 replies; 74+ messages in thread
From: Patchwork @ 2025-11-11 0:57 UTC (permalink / raw)
To: Matt Roper; +Cc: intel-xe
[-- Attachment #1: Type: text/plain, Size: 1897 bytes --]
== Series Details ==
Series: Scope-based forcewake and runtime PM (rev3)
URL : https://patchwork.freedesktop.org/series/157253/
State : success
== Summary ==
CI Bug Log - changes from xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b_BAT -> xe-pw-157253v3_BAT
====================================================
Summary
-------
**SUCCESS**
No regressions found.
Participating hosts (13 -> 13)
------------------------------
No changes in participating hosts
Known issues
------------
Here are the changes found in xe-pw-157253v3_BAT that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@kms_flip@basic-plain-flip@b-edp1:
- bat-adlp-7: [PASS][1] -> [DMESG-WARN][2] ([Intel XE#4543]) +1 other test dmesg-warn
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/bat-adlp-7/igt@kms_flip@basic-plain-flip@b-edp1.html
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/bat-adlp-7/igt@kms_flip@basic-plain-flip@b-edp1.html
* igt@xe_waitfence@abstime:
- bat-dg2-oem2: [PASS][3] -> [TIMEOUT][4] ([Intel XE#6506])
[3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/bat-dg2-oem2/igt@xe_waitfence@abstime.html
[4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/bat-dg2-oem2/igt@xe_waitfence@abstime.html
[Intel XE#4543]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4543
[Intel XE#6506]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6506
Build changes
-------------
* Linux: xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b -> xe-pw-157253v3
IGT_8618: 8618
xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b: 4ce351022716985e9c1dd18583acd4d3d149cb5b
xe-pw-157253v3: 157253v3
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/index.html
[-- Attachment #2: Type: text/html, Size: 2496 bytes --]
^ permalink raw reply	[flat|nested] 74+ messages in thread

* ✗ Xe.CI.Full: failure for Scope-based forcewake and runtime PM (rev3)
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (31 preceding siblings ...)
2025-11-11 0:57 ` ✓ Xe.CI.BAT: " Patchwork
@ 2025-11-11 10:50 ` Patchwork
2025-11-11 10:57 ` [PATCH v2 00/30] Scope-based forcewake and runtime PM Jani Nikula
2025-11-13 22:11 ` Matt Roper
34 siblings, 0 replies; 74+ messages in thread
From: Patchwork @ 2025-11-11 10:50 UTC (permalink / raw)
To: Matt Roper; +Cc: intel-xe
[-- Attachment #1: Type: text/plain, Size: 48116 bytes --]
== Series Details ==
Series: Scope-based forcewake and runtime PM (rev3)
URL : https://patchwork.freedesktop.org/series/157253/
State : failure
== Summary ==
CI Bug Log - changes from xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b_FULL -> xe-pw-157253v3_FULL
====================================================
Summary
-------
**FAILURE**
Serious unknown changes coming with xe-pw-157253v3_FULL absolutely need to be
verified manually.
If you think the reported changes have nothing to do with the changes
introduced in xe-pw-157253v3_FULL, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
to document this new failure mode, which will reduce false positives in CI.
Participating hosts (4 -> 4)
------------------------------
No changes in participating hosts
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in xe-pw-157253v3_FULL:
### IGT changes ###
#### Possible regressions ####
* igt@xe_wedged@basic-wedged:
- shard-adlp: [PASS][1] -> [INCOMPLETE][2]
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-adlp-8/igt@xe_wedged@basic-wedged.html
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-adlp-8/igt@xe_wedged@basic-wedged.html
- shard-lnl: [PASS][3] -> [INCOMPLETE][4]
[3]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-lnl-1/igt@xe_wedged@basic-wedged.html
[4]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-8/igt@xe_wedged@basic-wedged.html
- shard-bmg: [PASS][5] -> [ABORT][6]
[5]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-bmg-5/igt@xe_wedged@basic-wedged.html
[6]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-7/igt@xe_wedged@basic-wedged.html
Known issues
------------
Here are the changes found in xe-pw-157253v3_FULL that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@intel_hwmon@hwmon-write:
- shard-lnl: NOTRUN -> [SKIP][7] ([Intel XE#1125])
[7]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@intel_hwmon@hwmon-write.html
* igt@kms_big_fb@linear-32bpp-rotate-90:
- shard-dg2-set2: NOTRUN -> [SKIP][8] ([Intel XE#316]) +2 other tests skip
[8]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-464/igt@kms_big_fb@linear-32bpp-rotate-90.html
* igt@kms_big_fb@x-tiled-16bpp-rotate-270:
- shard-lnl: NOTRUN -> [SKIP][9] ([Intel XE#1407])
[9]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@kms_big_fb@x-tiled-16bpp-rotate-270.html
* igt@kms_big_fb@y-tiled-16bpp-rotate-180:
- shard-lnl: NOTRUN -> [SKIP][10] ([Intel XE#1124]) +1 other test skip
[10]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@kms_big_fb@y-tiled-16bpp-rotate-180.html
* igt@kms_big_fb@y-tiled-addfb:
- shard-dg2-set2: NOTRUN -> [SKIP][11] ([Intel XE#619])
[11]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-435/igt@kms_big_fb@y-tiled-addfb.html
* igt@kms_big_fb@yf-tiled-64bpp-rotate-90:
- shard-dg2-set2: NOTRUN -> [SKIP][12] ([Intel XE#1124]) +2 other tests skip
[12]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-433/igt@kms_big_fb@yf-tiled-64bpp-rotate-90.html
* igt@kms_big_fb@yf-tiled-8bpp-rotate-0:
- shard-bmg: NOTRUN -> [SKIP][13] ([Intel XE#1124])
[13]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-5/igt@kms_big_fb@yf-tiled-8bpp-rotate-0.html
* igt@kms_bw@linear-tiling-3-displays-3840x2160p:
- shard-dg2-set2: NOTRUN -> [SKIP][14] ([Intel XE#367]) +2 other tests skip
[14]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-466/igt@kms_bw@linear-tiling-3-displays-3840x2160p.html
* igt@kms_ccs@bad-pixel-format-4-tiled-mtl-rc-ccs-cc@pipe-a-hdmi-a-6:
- shard-dg2-set2: NOTRUN -> [SKIP][15] ([Intel XE#787]) +55 other tests skip
[15]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-433/igt@kms_ccs@bad-pixel-format-4-tiled-mtl-rc-ccs-cc@pipe-a-hdmi-a-6.html
* igt@kms_ccs@crc-primary-suspend-4-tiled-lnl-ccs:
- shard-dg2-set2: NOTRUN -> [SKIP][16] ([Intel XE#3442])
[16]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-433/igt@kms_ccs@crc-primary-suspend-4-tiled-lnl-ccs.html
* igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-rc-ccs-cc:
- shard-lnl: NOTRUN -> [SKIP][17] ([Intel XE#2887]) +2 other tests skip
[17]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-rc-ccs-cc.html
* igt@kms_ccs@missing-ccs-buffer-y-tiled-gen12-mc-ccs:
- shard-dg2-set2: NOTRUN -> [SKIP][18] ([Intel XE#455] / [Intel XE#787]) +15 other tests skip
[18]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-464/igt@kms_ccs@missing-ccs-buffer-y-tiled-gen12-mc-ccs.html
* igt@kms_ccs@random-ccs-data-4-tiled-bmg-ccs:
- shard-dg2-set2: NOTRUN -> [SKIP][19] ([Intel XE#2907])
[19]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-433/igt@kms_ccs@random-ccs-data-4-tiled-bmg-ccs.html
* igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs-cc:
- shard-bmg: NOTRUN -> [SKIP][20] ([Intel XE#2887])
[20]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-5/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs-cc.html
* igt@kms_cdclk@mode-transition@pipe-b-edp-1:
- shard-lnl: NOTRUN -> [SKIP][21] ([Intel XE#4417]) +3 other tests skip
[21]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@kms_cdclk@mode-transition@pipe-b-edp-1.html
* igt@kms_chamelium_audio@dp-audio:
- shard-dg2-set2: NOTRUN -> [SKIP][22] ([Intel XE#373]) +5 other tests skip
[22]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-466/igt@kms_chamelium_audio@dp-audio.html
* igt@kms_chamelium_color@ctm-negative:
- shard-dg2-set2: NOTRUN -> [SKIP][23] ([Intel XE#306])
[23]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-435/igt@kms_chamelium_color@ctm-negative.html
* igt@kms_chamelium_edid@dp-edid-read:
- shard-lnl: NOTRUN -> [SKIP][24] ([Intel XE#373]) +1 other test skip
[24]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@kms_chamelium_edid@dp-edid-read.html
* igt@kms_chamelium_hpd@dp-hpd-after-hibernate:
- shard-bmg: NOTRUN -> [SKIP][25] ([Intel XE#2252])
[25]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-5/igt@kms_chamelium_hpd@dp-hpd-after-hibernate.html
* igt@kms_chamelium_sharpness_filter@filter-basic:
- shard-lnl: NOTRUN -> [SKIP][26] ([Intel XE#6507])
[26]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@kms_chamelium_sharpness_filter@filter-basic.html
* igt@kms_content_protection@dp-mst-lic-type-1:
- shard-dg2-set2: NOTRUN -> [SKIP][27] ([Intel XE#307])
[27]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-466/igt@kms_content_protection@dp-mst-lic-type-1.html
* igt@kms_content_protection@legacy@pipe-a-dp-2:
- shard-bmg: NOTRUN -> [FAIL][28] ([Intel XE#1178])
[28]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-7/igt@kms_content_protection@legacy@pipe-a-dp-2.html
* igt@kms_cursor_crc@cursor-onscreen-512x170:
- shard-bmg: NOTRUN -> [SKIP][29] ([Intel XE#2321])
[29]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-2/igt@kms_cursor_crc@cursor-onscreen-512x170.html
* igt@kms_cursor_crc@cursor-random-128x42:
- shard-lnl: NOTRUN -> [SKIP][30] ([Intel XE#1424]) +1 other test skip
[30]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@kms_cursor_crc@cursor-random-128x42.html
* igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions-varying-size:
- shard-bmg: [PASS][31] -> [SKIP][32] ([Intel XE#2291]) +7 other tests skip
[31]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-bmg-8/igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions-varying-size.html
[32]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-6/igt@kms_cursor_legacy@cursorb-vs-flipb-atomic-transitions-varying-size.html
* igt@kms_cursor_legacy@cursorb-vs-flipb-legacy:
- shard-lnl: NOTRUN -> [SKIP][33] ([Intel XE#309])
[33]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@kms_cursor_legacy@cursorb-vs-flipb-legacy.html
* igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size:
- shard-dg2-set2: NOTRUN -> [SKIP][34] ([Intel XE#323])
[34]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-435/igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size.html
* igt@kms_cursor_legacy@single-bo:
- shard-bmg: [PASS][35] -> [DMESG-WARN][36] ([Intel XE#5354]) +1 other test dmesg-warn
[35]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-bmg-8/igt@kms_cursor_legacy@single-bo.html
[36]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-6/igt@kms_cursor_legacy@single-bo.html
* igt@kms_dp_link_training@non-uhbr-mst:
- shard-lnl: NOTRUN -> [SKIP][37] ([Intel XE#4354])
[37]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@kms_dp_link_training@non-uhbr-mst.html
* igt@kms_dsc@dsc-with-output-formats-with-bpc:
- shard-bmg: NOTRUN -> [SKIP][38] ([Intel XE#2244])
[38]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-2/igt@kms_dsc@dsc-with-output-formats-with-bpc.html
* igt@kms_fbc_dirty_rect@fbc-dirty-rectangle-different-formats:
- shard-lnl: NOTRUN -> [SKIP][39] ([Intel XE#4422])
[39]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@kms_fbc_dirty_rect@fbc-dirty-rectangle-different-formats.html
* igt@kms_flip@2x-single-buffer-flip-vs-dpms-off-vs-modeset:
- shard-bmg: [PASS][40] -> [SKIP][41] ([Intel XE#2316]) +1 other test skip
[40]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-bmg-1/igt@kms_flip@2x-single-buffer-flip-vs-dpms-off-vs-modeset.html
[41]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-6/igt@kms_flip@2x-single-buffer-flip-vs-dpms-off-vs-modeset.html
* igt@kms_flip@2x-wf_vblank-ts-check:
- shard-bmg: [PASS][42] -> [FAIL][43] ([Intel XE#3098]) +1 other test fail
[42]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-bmg-2/igt@kms_flip@2x-wf_vblank-ts-check.html
[43]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-7/igt@kms_flip@2x-wf_vblank-ts-check.html
* igt@kms_flip@basic-flip-vs-modeset@c-hdmi-a1:
- shard-adlp: [PASS][44] -> [DMESG-WARN][45] ([Intel XE#4543]) +7 other tests dmesg-warn
[44]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-adlp-3/igt@kms_flip@basic-flip-vs-modeset@c-hdmi-a1.html
[45]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-adlp-8/igt@kms_flip@basic-flip-vs-modeset@c-hdmi-a1.html
* igt@kms_flip@flip-vs-suspend-interruptible@a-hdmi-a1:
- shard-adlp: [PASS][46] -> [DMESG-WARN][47] ([Intel XE#2953] / [Intel XE#4173]) +10 other tests dmesg-warn
[46]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-adlp-6/igt@kms_flip@flip-vs-suspend-interruptible@a-hdmi-a1.html
[47]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-adlp-4/igt@kms_flip@flip-vs-suspend-interruptible@a-hdmi-a1.html
* igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-downscaling:
- shard-lnl: NOTRUN -> [SKIP][48] ([Intel XE#1401] / [Intel XE#1745]) +1 other test skip
[48]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-downscaling.html
* igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tiledg2rcccs-downscaling@pipe-a-default-mode:
- shard-lnl: NOTRUN -> [SKIP][49] ([Intel XE#1401]) +1 other test skip
[49]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tiledg2rcccs-downscaling@pipe-a-default-mode.html
* igt@kms_frontbuffer_tracking@drrs-1p-offscreen-pri-shrfb-draw-mmap-wc:
- shard-lnl: NOTRUN -> [SKIP][50] ([Intel XE#6312])
[50]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@kms_frontbuffer_tracking@drrs-1p-offscreen-pri-shrfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@drrs-1p-primscrn-spr-indfb-onoff:
- shard-dg2-set2: NOTRUN -> [SKIP][51] ([Intel XE#651]) +11 other tests skip
[51]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-433/igt@kms_frontbuffer_tracking@drrs-1p-primscrn-spr-indfb-onoff.html
* igt@kms_frontbuffer_tracking@fbc-2p-primscrn-pri-shrfb-draw-mmap-wc:
- shard-lnl: NOTRUN -> [SKIP][52] ([Intel XE#656]) +8 other tests skip
[52]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-pri-shrfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@fbc-tiling-y:
- shard-lnl: NOTRUN -> [SKIP][53] ([Intel XE#1469])
[53]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@kms_frontbuffer_tracking@fbc-tiling-y.html
* igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-shrfb-plflip-blt:
- shard-lnl: NOTRUN -> [SKIP][54] ([Intel XE#651]) +1 other test skip
[54]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@kms_frontbuffer_tracking@fbcdrrs-1p-primscrn-shrfb-plflip-blt.html
* igt@kms_frontbuffer_tracking@fbcdrrs-tiling-linear:
- shard-bmg: NOTRUN -> [SKIP][55] ([Intel XE#2311]) +5 other tests skip
[55]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-5/igt@kms_frontbuffer_tracking@fbcdrrs-tiling-linear.html
* igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-indfb-draw-blt:
- shard-dg2-set2: NOTRUN -> [SKIP][56] ([Intel XE#653]) +11 other tests skip
[56]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-435/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-indfb-draw-blt.html
* igt@kms_frontbuffer_tracking@psr-1p-primscrn-indfb-plflip-blt:
- shard-bmg: NOTRUN -> [SKIP][57] ([Intel XE#2313]) +2 other tests skip
[57]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-2/igt@kms_frontbuffer_tracking@psr-1p-primscrn-indfb-plflip-blt.html
* igt@kms_hdr@static-swap:
- shard-bmg: [PASS][58] -> [SKIP][59] ([Intel XE#1503]) +2 other tests skip
[58]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-bmg-8/igt@kms_hdr@static-swap.html
[59]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-6/igt@kms_hdr@static-swap.html
* igt@kms_hdr@static-toggle-suspend:
- shard-lnl: NOTRUN -> [SKIP][60] ([Intel XE#1503])
[60]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@kms_hdr@static-toggle-suspend.html
* igt@kms_plane_scaling@planes-downscale-factor-0-5-upscale-factor-0-25:
- shard-lnl: NOTRUN -> [SKIP][61] ([Intel XE#2763]) +7 other tests skip
[61]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@kms_plane_scaling@planes-downscale-factor-0-5-upscale-factor-0-25.html
* igt@kms_pm_backlight@basic-brightness:
- shard-dg2-set2: NOTRUN -> [SKIP][62] ([Intel XE#870])
[62]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-466/igt@kms_pm_backlight@basic-brightness.html
* igt@kms_pm_backlight@fade:
- shard-bmg: NOTRUN -> [SKIP][63] ([Intel XE#870])
[63]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-5/igt@kms_pm_backlight@fade.html
* igt@kms_pm_rpm@dpms-mode-unset-non-lpsp:
- shard-lnl: NOTRUN -> [SKIP][64] ([Intel XE#1439] / [Intel XE#836])
[64]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@kms_pm_rpm@dpms-mode-unset-non-lpsp.html
* igt@kms_psr2_sf@fbc-pr-primary-plane-update-sf-dmg-area:
- shard-lnl: NOTRUN -> [SKIP][65] ([Intel XE#1406] / [Intel XE#2893]) +1 other test skip
[65]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@kms_psr2_sf@fbc-pr-primary-plane-update-sf-dmg-area.html
* igt@kms_psr2_sf@fbc-psr2-overlay-plane-update-sf-dmg-area:
- shard-bmg: NOTRUN -> [SKIP][66] ([Intel XE#1406] / [Intel XE#1489])
[66]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-2/igt@kms_psr2_sf@fbc-psr2-overlay-plane-update-sf-dmg-area.html
* igt@kms_psr2_su@page_flip-p010:
- shard-lnl: NOTRUN -> [SKIP][67] ([Intel XE#1128] / [Intel XE#1406])
[67]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@kms_psr2_su@page_flip-p010.html
* igt@kms_psr@fbc-pr-dpms:
- shard-dg2-set2: NOTRUN -> [SKIP][68] ([Intel XE#1406] / [Intel XE#2850] / [Intel XE#929]) +4 other tests skip
[68]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-433/igt@kms_psr@fbc-pr-dpms.html
* igt@kms_psr@fbc-psr-cursor-plane-move:
- shard-bmg: NOTRUN -> [SKIP][69] ([Intel XE#1406] / [Intel XE#2234] / [Intel XE#2850])
[69]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-2/igt@kms_psr@fbc-psr-cursor-plane-move.html
* igt@kms_psr@fbc-psr2-primary-render:
- shard-lnl: NOTRUN -> [SKIP][70] ([Intel XE#1406])
[70]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@kms_psr@fbc-psr2-primary-render.html
* igt@kms_psr@fbc-psr2-primary-render@edp-1:
- shard-lnl: NOTRUN -> [SKIP][71] ([Intel XE#1406] / [Intel XE#4609])
[71]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@kms_psr@fbc-psr2-primary-render@edp-1.html
* igt@kms_rotation_crc@primary-yf-tiled-reflect-x-270:
- shard-bmg: NOTRUN -> [SKIP][72] ([Intel XE#3414] / [Intel XE#3904])
[72]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-5/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-270.html
* igt@kms_setmode@basic:
- shard-adlp: [PASS][73] -> [FAIL][74] ([Intel XE#6361]) +2 other tests fail
[73]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-adlp-9/igt@kms_setmode@basic.html
[74]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-adlp-6/igt@kms_setmode@basic.html
* igt@kms_setmode@invalid-clone-single-crtc-stealing:
- shard-bmg: [PASS][75] -> [SKIP][76] ([Intel XE#1435])
[75]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-bmg-8/igt@kms_setmode@invalid-clone-single-crtc-stealing.html
[76]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-6/igt@kms_setmode@invalid-clone-single-crtc-stealing.html
* igt@kms_vrr@flipline:
- shard-dg2-set2: NOTRUN -> [SKIP][77] ([Intel XE#455]) +11 other tests skip
[77]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-433/igt@kms_vrr@flipline.html
* igt@xe_configfs@survivability-mode:
- shard-lnl: NOTRUN -> [SKIP][78] ([Intel XE#6010])
[78]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@xe_configfs@survivability-mode.html
* igt@xe_copy_basic@mem-set-linear-0x3fff:
- shard-dg2-set2: NOTRUN -> [SKIP][79] ([Intel XE#1126])
[79]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-466/igt@xe_copy_basic@mem-set-linear-0x3fff.html
* igt@xe_eu_stall@invalid-event-report-count:
- shard-dg2-set2: NOTRUN -> [SKIP][80] ([Intel XE#5626]) +1 other test skip
[80]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-433/igt@xe_eu_stall@invalid-event-report-count.html
* igt@xe_eudebug@basic-vm-access-parameters:
- shard-dg2-set2: NOTRUN -> [SKIP][81] ([Intel XE#4837]) +7 other tests skip
[81]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-433/igt@xe_eudebug@basic-vm-access-parameters.html
* igt@xe_eudebug_online@single-step:
- shard-lnl: NOTRUN -> [SKIP][82] ([Intel XE#4837]) +2 other tests skip
[82]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@xe_eudebug_online@single-step.html
* igt@xe_evict@evict-mixed-many-threads-small:
- shard-bmg: [PASS][83] -> [INCOMPLETE][84] ([Intel XE#6321])
[83]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-bmg-2/igt@xe_evict@evict-mixed-many-threads-small.html
[84]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-8/igt@xe_evict@evict-mixed-many-threads-small.html
* igt@xe_evict@evict-threads-small-multi-vm:
- shard-lnl: NOTRUN -> [SKIP][85] ([Intel XE#688]) +2 other tests skip
[85]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@xe_evict@evict-threads-small-multi-vm.html
* igt@xe_exec_basic@multigpu-no-exec-userptr:
- shard-lnl: NOTRUN -> [SKIP][86] ([Intel XE#1392]) +1 other test skip
[86]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@xe_exec_basic@multigpu-no-exec-userptr.html
* igt@xe_exec_fault_mode@many-execqueues-userptr-imm:
- shard-dg2-set2: NOTRUN -> [SKIP][87] ([Intel XE#288]) +10 other tests skip
[87]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-464/igt@xe_exec_fault_mode@many-execqueues-userptr-imm.html
* igt@xe_exec_system_allocator@threads-many-mmap-free-huge-nomemset:
- shard-bmg: NOTRUN -> [SKIP][88] ([Intel XE#4943]) +3 other tests skip
[88]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-5/igt@xe_exec_system_allocator@threads-many-mmap-free-huge-nomemset.html
* igt@xe_exec_system_allocator@threads-shared-vm-many-large-mmap-new-huge-nomemset:
- shard-lnl: NOTRUN -> [SKIP][89] ([Intel XE#4943]) +6 other tests skip
[89]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@xe_exec_system_allocator@threads-shared-vm-many-large-mmap-new-huge-nomemset.html
* igt@xe_exec_system_allocator@threads-shared-vm-many-stride-new-race-nomemset:
- shard-dg2-set2: NOTRUN -> [SKIP][90] ([Intel XE#4915]) +134 other tests skip
[90]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-464/igt@xe_exec_system_allocator@threads-shared-vm-many-stride-new-race-nomemset.html
* igt@xe_oa@polling-small-buf:
- shard-dg2-set2: NOTRUN -> [SKIP][91] ([Intel XE#3573]) +3 other tests skip
[91]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-435/igt@xe_oa@polling-small-buf.html
* igt@xe_pat@display-vs-wb-transient:
- shard-dg2-set2: NOTRUN -> [SKIP][92] ([Intel XE#1337])
[92]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-433/igt@xe_pat@display-vs-wb-transient.html
* igt@xe_pat@pat-index-xelp:
- shard-lnl: NOTRUN -> [SKIP][93] ([Intel XE#977])
[93]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@xe_pat@pat-index-xelp.html
* igt@xe_pmu@engine-activity-accuracy-90:
- shard-lnl: [PASS][94] -> [FAIL][95] ([Intel XE#6251]) +2 other tests fail
[94]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-lnl-8/igt@xe_pmu@engine-activity-accuracy-90.html
[95]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-7/igt@xe_pmu@engine-activity-accuracy-90.html
* igt@xe_pmu@fn-engine-activity-sched-if-idle:
- shard-lnl: NOTRUN -> [SKIP][96] ([Intel XE#4650])
[96]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@xe_pmu@fn-engine-activity-sched-if-idle.html
* igt@xe_pmu@gt-c6-idle:
- shard-dg2-set2: NOTRUN -> [FAIL][97] ([Intel XE#6366])
[97]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-433/igt@xe_pmu@gt-c6-idle.html
* igt@xe_pxp@pxp-termination-key-update-post-suspend:
- shard-dg2-set2: NOTRUN -> [SKIP][98] ([Intel XE#4733]) +1 other test skip
[98]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-433/igt@xe_pxp@pxp-termination-key-update-post-suspend.html
* igt@xe_query@multigpu-query-engines:
- shard-lnl: NOTRUN -> [SKIP][99] ([Intel XE#944])
[99]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@xe_query@multigpu-query-engines.html
* igt@xe_query@multigpu-query-invalid-cs-cycles:
- shard-dg2-set2: NOTRUN -> [SKIP][100] ([Intel XE#944]) +1 other test skip
[100]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-466/igt@xe_query@multigpu-query-invalid-cs-cycles.html
* igt@xe_render_copy@render-stress-0-copies:
- shard-dg2-set2: NOTRUN -> [SKIP][101] ([Intel XE#4814])
[101]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-433/igt@xe_render_copy@render-stress-0-copies.html
* igt@xe_sriov_flr@flr-vf1-clear:
- shard-lnl: NOTRUN -> [SKIP][102] ([Intel XE#3342])
[102]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-lnl-4/igt@xe_sriov_flr@flr-vf1-clear.html
* igt@xe_sriov_scheduling@nonpreempt-engine-resets:
- shard-dg2-set2: NOTRUN -> [SKIP][103] ([Intel XE#4351])
[103]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-464/igt@xe_sriov_scheduling@nonpreempt-engine-resets.html
* igt@xe_survivability@i2c-functionality:
- shard-dg2-set2: NOTRUN -> [SKIP][104] ([Intel XE#6529])
[104]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-433/igt@xe_survivability@i2c-functionality.html
#### Possible fixes ####
* igt@intel_hwmon@hwmon-write:
- shard-bmg: [FAIL][105] ([Intel XE#4665]) -> [PASS][106]
[105]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-bmg-5/igt@intel_hwmon@hwmon-write.html
[106]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-6/igt@intel_hwmon@hwmon-write.html
* igt@kms_async_flips@crc-atomic@pipe-d-hdmi-a-1:
- shard-adlp: [FAIL][107] ([Intel XE#3884]) -> [PASS][108]
[107]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-adlp-8/igt@kms_async_flips@crc-atomic@pipe-d-hdmi-a-1.html
[108]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-adlp-3/igt@kms_async_flips@crc-atomic@pipe-d-hdmi-a-1.html
* igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip:
- shard-adlp: [DMESG-FAIL][109] ([Intel XE#4543]) -> [PASS][110] +1 other test pass
[109]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-adlp-9/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip.html
[110]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-adlp-4/igt@kms_big_fb@y-tiled-max-hw-stride-64bpp-rotate-180-hflip-async-flip.html
* igt@kms_bw@connected-linear-tiling-2-displays-2560x1440p:
- shard-bmg: [SKIP][111] ([Intel XE#2314] / [Intel XE#2894]) -> [PASS][112]
[111]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-bmg-6/igt@kms_bw@connected-linear-tiling-2-displays-2560x1440p.html
[112]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-2/igt@kms_bw@connected-linear-tiling-2-displays-2560x1440p.html
* igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs:
- shard-dg2-set2: [INCOMPLETE][113] ([Intel XE#1727] / [Intel XE#2705] / [Intel XE#3113] / [Intel XE#4212] / [Intel XE#4345] / [Intel XE#4522]) -> [PASS][114] +1 other test pass
[113]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-dg2-433/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs.html
[114]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-464/igt@kms_ccs@random-ccs-data-4-tiled-dg2-rc-ccs.html
* igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size:
- shard-bmg: [SKIP][115] ([Intel XE#2291]) -> [PASS][116] +3 other tests pass
[115]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-bmg-6/igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size.html
[116]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-7/igt@kms_cursor_legacy@cursorb-vs-flipa-atomic-transitions-varying-size.html
* igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions:
- shard-bmg: [FAIL][117] ([Intel XE#1475]) -> [PASS][118]
[117]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-bmg-4/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions.html
[118]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-1/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions.html
* igt@kms_flip@2x-plain-flip-fb-recreate-interruptible:
- shard-bmg: [SKIP][119] ([Intel XE#2316]) -> [PASS][120] +7 other tests pass
[119]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-bmg-6/igt@kms_flip@2x-plain-flip-fb-recreate-interruptible.html
[120]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-2/igt@kms_flip@2x-plain-flip-fb-recreate-interruptible.html
* igt@kms_flip@dpms-off-confusion-interruptible@b-hdmi-a1:
- shard-adlp: [DMESG-WARN][121] ([Intel XE#4543]) -> [PASS][122] +2 other tests pass
[121]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-adlp-1/igt@kms_flip@dpms-off-confusion-interruptible@b-hdmi-a1.html
[122]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-adlp-1/igt@kms_flip@dpms-off-confusion-interruptible@b-hdmi-a1.html
* igt@kms_joiner@invalid-modeset-force-big-joiner:
- shard-bmg: [SKIP][123] ([Intel XE#3012]) -> [PASS][124]
[123]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-bmg-6/igt@kms_joiner@invalid-modeset-force-big-joiner.html
[124]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-2/igt@kms_joiner@invalid-modeset-force-big-joiner.html
* igt@kms_plane_alpha_blend@alpha-opaque-fb@pipe-a-hdmi-a-1:
- shard-adlp: [DMESG-WARN][125] ([Intel XE#2953] / [Intel XE#4173]) -> [PASS][126] +4 other tests pass
[125]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-adlp-8/igt@kms_plane_alpha_blend@alpha-opaque-fb@pipe-a-hdmi-a-1.html
[126]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-adlp-3/igt@kms_plane_alpha_blend@alpha-opaque-fb@pipe-a-hdmi-a-1.html
* igt@kms_setmode@basic@pipe-a-hdmi-a-6-pipe-b-dp-4:
- shard-dg2-set2: [FAIL][127] ([Intel XE#6361]) -> [PASS][128]
[127]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-dg2-466/igt@kms_setmode@basic@pipe-a-hdmi-a-6-pipe-b-dp-4.html
[128]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-435/igt@kms_setmode@basic@pipe-a-hdmi-a-6-pipe-b-dp-4.html
* igt@xe_compute_preempt@compute-preempt-many-vram-evict@engine-drm_xe_engine_class_compute:
- shard-bmg: [ABORT][129] ([Intel XE#3970]) -> [PASS][130] +1 other test pass
[129]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-bmg-1/igt@xe_compute_preempt@compute-preempt-many-vram-evict@engine-drm_xe_engine_class_compute.html
[130]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-2/igt@xe_compute_preempt@compute-preempt-many-vram-evict@engine-drm_xe_engine_class_compute.html
* igt@xe_exec_system_allocator@many-stride-malloc-prefetch:
- shard-bmg: [WARN][131] ([Intel XE#5786]) -> [PASS][132]
[131]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-bmg-4/igt@xe_exec_system_allocator@many-stride-malloc-prefetch.html
[132]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-5/igt@xe_exec_system_allocator@many-stride-malloc-prefetch.html
* igt@xe_exec_system_allocator@threads-many-stride-new-busy-nomemset:
- shard-bmg: [INCOMPLETE][133] ([Intel XE#6480]) -> [PASS][134]
[133]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-bmg-8/igt@xe_exec_system_allocator@threads-many-stride-new-busy-nomemset.html
[134]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-5/igt@xe_exec_system_allocator@threads-many-stride-new-busy-nomemset.html
#### Warnings ####
* igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs:
- shard-dg2-set2: [INCOMPLETE][135] ([Intel XE#1727] / [Intel XE#3113] / [Intel XE#4345]) -> [INCOMPLETE][136] ([Intel XE#1727] / [Intel XE#3113] / [Intel XE#4345] / [Intel XE#6168])
[135]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-dg2-436/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs.html
[136]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-434/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs.html
* igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs@pipe-a-hdmi-a-6:
- shard-dg2-set2: [INCOMPLETE][137] ([Intel XE#1727] / [Intel XE#3113]) -> [INCOMPLETE][138] ([Intel XE#1727] / [Intel XE#3113] / [Intel XE#6168])
[137]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-dg2-436/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs@pipe-a-hdmi-a-6.html
[138]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-dg2-434/igt@kms_ccs@random-ccs-data-4-tiled-dg2-mc-ccs@pipe-a-hdmi-a-6.html
* igt@kms_content_protection@legacy:
- shard-bmg: [SKIP][139] ([Intel XE#2341]) -> [FAIL][140] ([Intel XE#1178])
[139]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-bmg-6/igt@kms_content_protection@legacy.html
[140]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-7/igt@kms_content_protection@legacy.html
* igt@kms_flip@flip-vs-expired-vblank:
- shard-adlp: [DMESG-FAIL][141] ([Intel XE#4543]) -> [DMESG-WARN][142] ([Intel XE#4543])
[141]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-adlp-2/igt@kms_flip@flip-vs-expired-vblank.html
[142]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-adlp-2/igt@kms_flip@flip-vs-expired-vblank.html
* igt@kms_flip@flip-vs-expired-vblank@c-hdmi-a1:
- shard-adlp: [FAIL][143] ([Intel XE#301]) -> [DMESG-WARN][144] ([Intel XE#4543])
[143]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-adlp-2/igt@kms_flip@flip-vs-expired-vblank@c-hdmi-a1.html
[144]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-adlp-2/igt@kms_flip@flip-vs-expired-vblank@c-hdmi-a1.html
* igt@kms_frontbuffer_tracking@drrs-2p-primscrn-indfb-pgflip-blt:
- shard-bmg: [SKIP][145] ([Intel XE#2312]) -> [SKIP][146] ([Intel XE#2311]) +15 other tests skip
[145]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-bmg-6/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-indfb-pgflip-blt.html
[146]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-7/igt@kms_frontbuffer_tracking@drrs-2p-primscrn-indfb-pgflip-blt.html
* igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-render:
- shard-bmg: [SKIP][147] ([Intel XE#5390]) -> [SKIP][148] ([Intel XE#2312]) +8 other tests skip
[147]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-bmg-1/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-render.html
[148]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-6/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-render.html
* igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-indfb-pgflip-blt:
- shard-bmg: [SKIP][149] ([Intel XE#2312]) -> [SKIP][150] ([Intel XE#5390]) +6 other tests skip
[149]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-bmg-6/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-indfb-pgflip-blt.html
[150]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-2/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-indfb-pgflip-blt.html
* igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-draw-mmap-wc:
- shard-bmg: [SKIP][151] ([Intel XE#2311]) -> [SKIP][152] ([Intel XE#2312]) +14 other tests skip
[151]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-bmg-8/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-draw-mmap-wc.html
[152]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-6/igt@kms_frontbuffer_tracking@fbcdrrs-2p-scndscrn-cur-indfb-draw-mmap-wc.html
* igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-plflip-blt:
- shard-bmg: [SKIP][153] ([Intel XE#2312]) -> [SKIP][154] ([Intel XE#2313]) +16 other tests skip
[153]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-bmg-6/igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-plflip-blt.html
[154]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-1/igt@kms_frontbuffer_tracking@psr-2p-primscrn-indfb-plflip-blt.html
* igt@kms_frontbuffer_tracking@psr-2p-primscrn-spr-indfb-fullscreen:
- shard-bmg: [SKIP][155] ([Intel XE#2313]) -> [SKIP][156] ([Intel XE#2312]) +13 other tests skip
[155]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-bmg-1/igt@kms_frontbuffer_tracking@psr-2p-primscrn-spr-indfb-fullscreen.html
[156]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-6/igt@kms_frontbuffer_tracking@psr-2p-primscrn-spr-indfb-fullscreen.html
* igt@kms_hdr@brightness-with-hdr:
- shard-bmg: [SKIP][157] ([Intel XE#3374] / [Intel XE#3544]) -> [SKIP][158] ([Intel XE#3544])
[157]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-bmg-8/igt@kms_hdr@brightness-with-hdr.html
[158]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-5/igt@kms_hdr@brightness-with-hdr.html
* igt@kms_plane_multiple@2x-tiling-yf:
- shard-bmg: [SKIP][159] ([Intel XE#4596]) -> [SKIP][160] ([Intel XE#5021])
[159]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b/shard-bmg-6/igt@kms_plane_multiple@2x-tiling-yf.html
[160]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/shard-bmg-1/igt@kms_plane_multiple@2x-tiling-yf.html
[Intel XE#1124]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1124
[Intel XE#1125]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1125
[Intel XE#1126]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1126
[Intel XE#1128]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1128
[Intel XE#1178]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1178
[Intel XE#1337]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1337
[Intel XE#1392]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1392
[Intel XE#1401]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1401
[Intel XE#1406]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1406
[Intel XE#1407]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1407
[Intel XE#1424]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1424
[Intel XE#1435]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1435
[Intel XE#1439]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1439
[Intel XE#1469]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1469
[Intel XE#1475]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1475
[Intel XE#1489]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1489
[Intel XE#1503]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1503
[Intel XE#1727]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1727
[Intel XE#1745]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/1745
[Intel XE#2234]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2234
[Intel XE#2244]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2244
[Intel XE#2252]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2252
[Intel XE#2291]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2291
[Intel XE#2311]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2311
[Intel XE#2312]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2312
[Intel XE#2313]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2313
[Intel XE#2314]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2314
[Intel XE#2316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2316
[Intel XE#2321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2321
[Intel XE#2341]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2341
[Intel XE#2705]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2705
[Intel XE#2763]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2763
[Intel XE#2850]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2850
[Intel XE#288]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/288
[Intel XE#2887]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2887
[Intel XE#2893]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2893
[Intel XE#2894]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2894
[Intel XE#2907]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2907
[Intel XE#2953]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/2953
[Intel XE#301]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/301
[Intel XE#3012]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3012
[Intel XE#306]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/306
[Intel XE#307]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/307
[Intel XE#309]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/309
[Intel XE#3098]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3098
[Intel XE#3113]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3113
[Intel XE#316]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/316
[Intel XE#323]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/323
[Intel XE#3342]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3342
[Intel XE#3374]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3374
[Intel XE#3414]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3414
[Intel XE#3442]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3442
[Intel XE#3544]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3544
[Intel XE#3573]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3573
[Intel XE#367]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/367
[Intel XE#373]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/373
[Intel XE#3884]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3884
[Intel XE#3904]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3904
[Intel XE#3970]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/3970
[Intel XE#4173]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4173
[Intel XE#4212]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4212
[Intel XE#4345]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4345
[Intel XE#4351]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4351
[Intel XE#4354]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4354
[Intel XE#4417]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4417
[Intel XE#4422]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4422
[Intel XE#4522]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4522
[Intel XE#4543]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4543
[Intel XE#455]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/455
[Intel XE#4596]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4596
[Intel XE#4609]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4609
[Intel XE#4650]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4650
[Intel XE#4665]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4665
[Intel XE#4733]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4733
[Intel XE#4814]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4814
[Intel XE#4837]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4837
[Intel XE#4915]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4915
[Intel XE#4943]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/4943
[Intel XE#5021]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5021
[Intel XE#5354]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5354
[Intel XE#5390]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5390
[Intel XE#5626]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5626
[Intel XE#5786]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/5786
[Intel XE#6010]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6010
[Intel XE#6168]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6168
[Intel XE#619]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/619
[Intel XE#6251]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6251
[Intel XE#6312]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6312
[Intel XE#6321]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6321
[Intel XE#6361]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6361
[Intel XE#6366]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6366
[Intel XE#6480]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6480
[Intel XE#6507]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6507
[Intel XE#651]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/651
[Intel XE#6529]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6529
[Intel XE#653]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/653
[Intel XE#656]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/656
[Intel XE#688]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/688
[Intel XE#787]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/787
[Intel XE#836]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/836
[Intel XE#870]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/870
[Intel XE#929]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/929
[Intel XE#944]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/944
[Intel XE#977]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/977
Build changes
-------------
* Linux: xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b -> xe-pw-157253v3
IGT_8618: 8618
xe-4085-4ce351022716985e9c1dd18583acd4d3d149cb5b: 4ce351022716985e9c1dd18583acd4d3d149cb5b
xe-pw-157253v3: 157253v3
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v3/index.html
* Re: [PATCH v2 00/30] Scope-based forcewake and runtime PM
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (32 preceding siblings ...)
2025-11-11 10:50 ` ✗ Xe.CI.Full: failure " Patchwork
@ 2025-11-11 10:57 ` Jani Nikula
2025-11-12 16:01 ` Matt Roper
2025-11-13 22:11 ` Matt Roper
34 siblings, 1 reply; 74+ messages in thread
From: Jani Nikula @ 2025-11-11 10:57 UTC (permalink / raw)
To: Matt Roper, intel-xe
Cc: matthew.d.roper, Michal Wajdeczko, Lucas De Marchi,
Syrjala, Ville
On Mon, 10 Nov 2025, Matt Roper <matthew.d.roper@intel.com> wrote:
> Forcewake and runtime PM both follow reference-counted get/put models;
> when used in functions that can encounter errors and return early, it's
> easy for developers to make mistakes and fail to drop a reference on all
> of the error paths. Cleanup of these reference counts is often
> addressed by goto-based error handling which is somewhat ugly and
> subject to its own set of mistakes once we accumulate too many error
> labels in a function.
>
> Scope-based cleanup ([1][2]) has been gaining increasing popularity in
> the Linux kernel for cleaning up various kinds of resources in a more
> automated way when code has lots of error paths and early exits. Let's
> add scope-based cleanup for both forcewake and runtime PM, based on the
> mechanisms provided in include/linux/cleanup.h. Scope-based cleanup
> allows cleanup destructors to be executed automatically when the current
> scope is exited by any means (end of block, return, break, etc.).
>
> For xe_runtime_pm_{get,put} pairs that were grabbed and released within
> a single function or block, the preferred replacement is now just
>
> guard(xe_pm_runtime_noresume)(xe);
>
> which will take care of releasing the runtime PM reference
> automatically. scoped_guard() can be used instead if the reference
> should only be held over part of the block. There are also guard
> variants added for xe_pm_runtime_noresume and xe_pm_runtime_ioctl that
> allow replacement of those alternate functions as well.
>
> Unlike runtime PM, where all reference tracking is done within the
> object parameter itself, forcewake is currently a model where get
> operations return a cookie that needs to be passed back to put
> operations. That necessitates a slightly different type of cleanup
> helper (CLASS instead of guard), although the underlying mechanisms are
> the same. For forcewake that is grabbed and released within a single
> function or block, the preferred form is now:
>
> CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
I think it's bad style to use CLASS() directly. Isn't ACQUIRE()
preferred if you can't use guard() or scoped_guard()?
BR,
Jani.
>
> which, like the runtime PM equivalent, will cause the forcewake
> reference to be dropped automatically. If forcewake needs to be held
> over only a subset of the current block,
>
> xe_with_force_wake(fw_ref, gt_to_fw(gt), XE_FW_GT) { ... }
>
> can be used in the same way scoped_guard() is used for runtime PM.
>
> The first few patches in this series make some general cleanups and
> restructuring of the existing force wake code. Then the new guards and
> classes for runtime PM and forcewake are defined. Finally, most of the
> existing runtime PM and forcewake usage in the driver is converted to
> the scope-based form in the remainder of the series. Some of the
> conversions eliminate goto-based cleanup models and/or significantly
> simplify the code. Other conversions don't significantly simplify the
> code (aside from a slight reduction in line count), but are still useful
> for consistency across our codebase.
>
> An advantage of doing the conversion everywhere possible, not just the
> places where it noticeably simplifies the code, is that it helps
> highlight the remaining get/put usage as special cases where wake
> references follow more complicated lifetimes (e.g., obtained in one
> function and released in a different one, often tied to some other type
> of resource or operation). With fewer direct get/put calls overall, it's
> easier to identify the ones that remain as special cases and make sure
> they truly are paired up properly.
>
> There are other areas where scope-based cleanup could potentially be
> applied in the future (e.g., mutex locks, bo locking, etc.), but this
> series does not try to address those, even in places where those
> resources are also part of the same error handling cleanup paths as
> forcewake and runtime PM. We can potentially think about converting
> other types of resources to scope-based cleanup down the road if it
> winds up working well here for forcewake and PM.
>
> v2:
> - Add a proper success condition to the xe_pm_runtime_ioctl class so
> that conditional guards properly distinguish success (requiring
> cleanup) from errors (no cleanup). (CI)
> - Don't bother changing the signature of xe_force_wake_get() before
> adding the forcewake class. Simply create a separate constructor
> that wraps xe_force_wake_get(). (Michal)
> - Split ACQUIRE_ERR assignments from the condition checks. Even though
> this takes an extra line of code and deviates from most of the other
> uses in the kernel, it's easier to read (and avoids checkpatch
> warnings).
>
>
> References:
> [1] https://www.kernel.org/doc/html/next/core-api/cleanup.html
> [2] https://lwn.net/Articles/934679/
>
>
> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
> Cc: Lucas De Marchi <lucas.demarchi@intel.com>
>
> Matt Roper (30):
> drm/xe/forcewake: Improve kerneldoc
> drm/xe/eustall: Store forcewake reference in stream structure
> drm/xe/oa: Store forcewake reference in stream structure
> drm/xe/forcewake: Add scope-based cleanup for forcewake
> drm/xe/pm: Add scope-based cleanup helper for runtime PM
> drm/xe/gt: Use scope-based cleanup
> drm/xe/gt_idle: Use scope-based cleanup
> drm/xe/guc: Use scope-based cleanup
> drm/xe/guc_pc: Use scope-based cleanup
> drm/xe/mocs: Use scope-based cleanup
> drm/xe/pat: Use scope-based forcewake
> drm/xe/pxp: Use scope-based cleanup
> drm/xe/gsc: Use scope-based cleanup
> drm/xe/device: Use scope-based cleanup
> drm/xe/devcoredump: Use scope-based cleanup
> drm/xe/display: Use scoped-cleanup
> drm/xe: Create scoped cleanup class for force_wake_get_any_engine()
> drm/xe/drm_client: Use scope-based cleanup
> drm/xe/gt_debugfs: Use scope-based cleanup
> drm/xe/huc: Use scope-based forcewake
> drm/xe/query: Use scope-based forcewake
> drm/xe/reg_sr: Use scope-based forcewake
> drm/xe/vram: Use scope-based forcewake
> drm/xe/bo: Use scope-based runtime PM
> drm/xe/ggtt: Use scope-based runtime pm
> drm/xe/hwmon: Use scope-based runtime PM
> drm/xe/sriov: Use scope-based runtime PM
> drm/xe/tests: Use scope-based runtime PM
> drm/xe/sysfs: Use scope-based runtime power management
> drm/xe/debugfs: Use scope-based runtime PM
>
> drivers/gpu/drm/xe/display/xe_fb_pin.c | 23 ++-
> drivers/gpu/drm/xe/display/xe_hdcp_gsc.c | 25 +--
> drivers/gpu/drm/xe/tests/xe_bo.c | 10 +-
> drivers/gpu/drm/xe/tests/xe_dma_buf.c | 3 +-
> drivers/gpu/drm/xe/tests/xe_migrate.c | 10 +-
> drivers/gpu/drm/xe/tests/xe_mocs.c | 27 +---
> drivers/gpu/drm/xe/xe_bo.c | 3 +-
> drivers/gpu/drm/xe/xe_debugfs.c | 16 +-
> drivers/gpu/drm/xe/xe_devcoredump.c | 26 ++-
> drivers/gpu/drm/xe/xe_device.c | 33 ++--
> drivers/gpu/drm/xe/xe_device_sysfs.c | 33 ++--
> drivers/gpu/drm/xe/xe_drm_client.c | 83 +++++-----
> drivers/gpu/drm/xe/xe_eu_stall.c | 8 +-
> drivers/gpu/drm/xe/xe_force_wake.h | 28 ++++
> drivers/gpu/drm/xe/xe_force_wake_types.h | 26 ++-
> drivers/gpu/drm/xe/xe_ggtt.c | 3 +-
> drivers/gpu/drm/xe/xe_gsc.c | 21 +--
> drivers/gpu/drm/xe/xe_gsc_debugfs.c | 3 +-
> drivers/gpu/drm/xe/xe_gsc_proxy.c | 17 +-
> drivers/gpu/drm/xe/xe_gt.c | 151 ++++++------------
> drivers/gpu/drm/xe/xe_gt_debugfs.c | 29 +---
> drivers/gpu/drm/xe/xe_gt_freq.c | 27 ++--
> drivers/gpu/drm/xe/xe_gt_idle.c | 32 ++--
> drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c | 12 +-
> drivers/gpu/drm/xe/xe_gt_throttle.c | 3 +-
> drivers/gpu/drm/xe/xe_guc.c | 13 +-
> drivers/gpu/drm/xe/xe_guc_debugfs.c | 3 +-
> drivers/gpu/drm/xe/xe_guc_log.c | 10 +-
> drivers/gpu/drm/xe/xe_guc_pc.c | 62 ++-----
> drivers/gpu/drm/xe/xe_guc_submit.c | 11 +-
> drivers/gpu/drm/xe/xe_guc_tlb_inval.c | 4 +-
> drivers/gpu/drm/xe/xe_huc.c | 7 +-
> drivers/gpu/drm/xe/xe_huc_debugfs.c | 3 +-
> drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c | 6 +-
> drivers/gpu/drm/xe/xe_hwmon.c | 16 +-
> drivers/gpu/drm/xe/xe_mocs.c | 18 +--
> drivers/gpu/drm/xe/xe_oa.c | 9 +-
> drivers/gpu/drm/xe/xe_oa_types.h | 3 +
> drivers/gpu/drm/xe/xe_pat.c | 36 ++---
> drivers/gpu/drm/xe/xe_pci_sriov.c | 3 +-
> drivers/gpu/drm/xe/xe_pm.h | 17 ++
> drivers/gpu/drm/xe/xe_pxp.c | 55 +++----
> drivers/gpu/drm/xe/xe_query.c | 16 +-
> drivers/gpu/drm/xe/xe_reg_sr.c | 17 +-
> drivers/gpu/drm/xe/xe_sriov_pf_debugfs.c | 6 +-
> drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c | 6 +-
> drivers/gpu/drm/xe/xe_sriov_vf_ccs.c | 5 +-
> drivers/gpu/drm/xe/xe_tile_debugfs.c | 3 +-
> drivers/gpu/drm/xe/xe_tile_sriov_pf_debugfs.c | 3 +-
> drivers/gpu/drm/xe/xe_vram.c | 7 +-
> 50 files changed, 387 insertions(+), 604 deletions(-)
--
Jani Nikula, Intel
^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [PATCH v2 00/30] Scope-based forcewake and runtime PM
2025-11-11 10:57 ` [PATCH v2 00/30] Scope-based forcewake and runtime PM Jani Nikula
@ 2025-11-12 16:01 ` Matt Roper
0 siblings, 0 replies; 74+ messages in thread
From: Matt Roper @ 2025-11-12 16:01 UTC (permalink / raw)
To: Jani Nikula; +Cc: intel-xe, Michal Wajdeczko, Lucas De Marchi, Syrjala, Ville
On Tue, Nov 11, 2025 at 12:57:13PM +0200, Jani Nikula wrote:
> On Mon, 10 Nov 2025, Matt Roper <matthew.d.roper@intel.com> wrote:
> > Forcewake and runtime PM both follow reference-counted get/put models;
> > when used in functions that can encounter errors and return early, it's
> > easy for developers to make mistakes and fail to drop a reference on all
> > of the error paths. Cleanup of these reference counts is often
> > addressed by goto-based error handling which is somewhat ugly and
> > subject to its own set of mistakes once we accumulate too many error
> > labels in a function.
> >
> > Scope-based cleanup ([1][2]) has been gaining increasing popularity in
> > the Linux kernel for cleaning up various kinds of resources in a more
> > automated way when code has lots of error paths and early exits. Let's
> > add scope-based cleanup for both forcewake and runtime PM, based on the
> > mechanisms provided in include/linux/cleanup.h. Scope-based cleanup
> > allows cleanup destructors to be executed automatically when the current
> > scope is exited by any means (end of block, return, break, etc.).
> >
> > For xe_runtime_pm_{get,put} pairs that were grabbed and released within
> > a single function or block, the preferred replacement is now just
> >
> > guard(xe_pm_runtime_noresume)(xe);
> >
> > which will take care of releasing the runtime PM reference
> > automatically. scoped_guard() can be used instead if the reference
> > should only be held over part of the block. There are also guard
> > variants added for xe_pm_runtime_noresume and xe_pm_runtime_ioctl that
> > allow replacement of those alternate functions as well.
> >
> > Unlike runtime PM, where all reference tracking is done within the
> > object parameter itself, forcewake is currently a model where get
> > operations return a cookie that needs to be passed back to put
> > operations. That necessitates a slightly different type of cleanup
> > helper (CLASS instead of guard), although the underlying mechanisms are
> > the same. For forcewake that is grabbed and released within a single
> > function or block, the preferred form is now:
> >
> > CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
>
> I think it's bad style to use CLASS() directly. Isn't ACQUIRE()
> preferred if you can't use guard() or scoped_guard()?
No, ACQUIRE() was something added later to deal with conditional guards
in commit 857d18f23ab1 ("cleanup: Introduce ACQUIRE() and ACQUIRE_ERR()
for conditional locks"). So things like xe_pm_runtime_ioctl() or
mutex_lock_interruptible() which can fail and return error codes would
get converted into an ACQUIRE() / ACQUIRE_ERR() instead of a traditional
guard() so that we can handle the return value.
But CLASS() is something different where you want to create a named
token object in the current scope that the code can also operate on in
other ways. In the forcewake case, we want to be able to inspect
fw_ref.domains to figure out which specific domain(s) were successfully
obtained by the forcewake request.
guard() itself is built on top of CLASS(), but CLASS() can be used in
additional settings where guard() is insufficient. We already have some
other CLASS usage in Xe such as xe_guc_buf, which is created similarly:
CLASS(xe_guc_buf, buf)(&guc->buf, OPT_IN_MAX_DWORDS);
and then there are various other operations that can be performed on the
'buf' reference. Another example from outside Xe is gpio_chip_guard
which is used heavily by gpiolib.
Matt
>
>
> BR,
> Jani.
>
>
> >
> > which, like the runtime PM equivalent, will cause the forcewake
> > reference to be dropped automatically. If forcewake needs to be held
> > over only a subset of the current block,
> >
> > xe_with_force_wake(fw_ref, gt_to_fw(gt), XE_FW_GT) { ... }
> >
> > can be used in the same way scoped_guard() is used for runtime PM.
> >
> > The first few patches in this series make some general cleanups and
> > restructuring of the existing force wake code. Then the new guards and
> > classes for runtime PM and forcewake are defined. Finally, most of the
> > existing runtime PM and forcewake usage in the driver is converted to
> > the scope-based form in the remainder of the series. Some of the
> > conversions eliminate goto-based cleanup models and/or significantly
> > simplify the code. Other conversions don't significantly simplify the
> > code (aside from a slight reduction in line count), but are still useful
> > for consistency across our codebase.
> >
> > An advantage of doing the conversion everywhere possible, not just the
> > places where it noticeably simplifies the code, is that it helps
> > highlight the remaining get/put usage as special cases where wake
> > references follow more complicated lifetimes (e.g., obtained in one
> > function and released in a different one, often tied to some other type
> of resource or operation). With fewer direct get/put calls overall, it's
> > easier to identify the ones that remain as special cases and make sure
> > they truly are paired up properly.
> >
> There are other areas where scope-based cleanup could potentially be
> > applied in the future (e.g., mutex locks, bo locking, etc.), but this
> > series does not try to address those, even in places where those
> > resources are also part of the same error handling cleanup paths as
> > forcewake and runtime PM. We can potentially think about converting
> > other types of resources to scope-based cleanup down the road if it
> > winds up working well here for forcewake and PM.
> >
> > v2:
> > - Add a proper success condition to the xe_pm_runtime_ioctl class so
> > that conditional guards properly distinguish success (requiring
> > cleanup) from errors (no cleanup). (CI)
> > - Don't bother changing the signature of xe_force_wake_get() before
> > adding the forcewake class. Simply create a separate constructor
> > that wraps xe_force_wake_get(). (Michal)
> > - Split ACQUIRE_ERR assignments from the condition checks. Even though
> > this takes an extra line of code and deviates from most of the other
> > uses in the kernel, it's easier to read (and avoids checkpatch
> > warnings).
> >
> >
> > References:
> > [1] https://www.kernel.org/doc/html/next/core-api/cleanup.html
> > [2] https://lwn.net/Articles/934679/
> >
> >
> > Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
> > Cc: Lucas De Marchi <lucas.demarchi@intel.com>
> >
> > Matt Roper (30):
> > drm/xe/forcewake: Improve kerneldoc
> > drm/xe/eustall: Store forcewake reference in stream structure
> > drm/xe/oa: Store forcewake reference in stream structure
> > drm/xe/forcewake: Add scope-based cleanup for forcewake
> > drm/xe/pm: Add scope-based cleanup helper for runtime PM
> > drm/xe/gt: Use scope-based cleanup
> > drm/xe/gt_idle: Use scope-based cleanup
> > drm/xe/guc: Use scope-based cleanup
> > drm/xe/guc_pc: Use scope-based cleanup
> > drm/xe/mocs: Use scope-based cleanup
> > drm/xe/pat: Use scope-based forcewake
> > drm/xe/pxp: Use scope-based cleanup
> > drm/xe/gsc: Use scope-based cleanup
> > drm/xe/device: Use scope-based cleanup
> > drm/xe/devcoredump: Use scope-based cleanup
> > drm/xe/display: Use scoped-cleanup
> > drm/xe: Create scoped cleanup class for force_wake_get_any_engine()
> > drm/xe/drm_client: Use scope-based cleanup
> > drm/xe/gt_debugfs: Use scope-based cleanup
> > drm/xe/huc: Use scope-based forcewake
> > drm/xe/query: Use scope-based forcewake
> > drm/xe/reg_sr: Use scope-based forcewake
> > drm/xe/vram: Use scope-based forcewake
> > drm/xe/bo: Use scope-based runtime PM
> > drm/xe/ggtt: Use scope-based runtime pm
> > drm/xe/hwmon: Use scope-based runtime PM
> > drm/xe/sriov: Use scope-based runtime PM
> > drm/xe/tests: Use scope-based runtime PM
> > drm/xe/sysfs: Use scope-based runtime power management
> > drm/xe/debugfs: Use scope-based runtime PM
> >
> > drivers/gpu/drm/xe/display/xe_fb_pin.c | 23 ++-
> > drivers/gpu/drm/xe/display/xe_hdcp_gsc.c | 25 +--
> > drivers/gpu/drm/xe/tests/xe_bo.c | 10 +-
> > drivers/gpu/drm/xe/tests/xe_dma_buf.c | 3 +-
> > drivers/gpu/drm/xe/tests/xe_migrate.c | 10 +-
> > drivers/gpu/drm/xe/tests/xe_mocs.c | 27 +---
> > drivers/gpu/drm/xe/xe_bo.c | 3 +-
> > drivers/gpu/drm/xe/xe_debugfs.c | 16 +-
> > drivers/gpu/drm/xe/xe_devcoredump.c | 26 ++-
> > drivers/gpu/drm/xe/xe_device.c | 33 ++--
> > drivers/gpu/drm/xe/xe_device_sysfs.c | 33 ++--
> > drivers/gpu/drm/xe/xe_drm_client.c | 83 +++++-----
> > drivers/gpu/drm/xe/xe_eu_stall.c | 8 +-
> > drivers/gpu/drm/xe/xe_force_wake.h | 28 ++++
> > drivers/gpu/drm/xe/xe_force_wake_types.h | 26 ++-
> > drivers/gpu/drm/xe/xe_ggtt.c | 3 +-
> > drivers/gpu/drm/xe/xe_gsc.c | 21 +--
> > drivers/gpu/drm/xe/xe_gsc_debugfs.c | 3 +-
> > drivers/gpu/drm/xe/xe_gsc_proxy.c | 17 +-
> > drivers/gpu/drm/xe/xe_gt.c | 151 ++++++------------
> > drivers/gpu/drm/xe/xe_gt_debugfs.c | 29 +---
> > drivers/gpu/drm/xe/xe_gt_freq.c | 27 ++--
> > drivers/gpu/drm/xe/xe_gt_idle.c | 32 ++--
> > drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c | 12 +-
> > drivers/gpu/drm/xe/xe_gt_throttle.c | 3 +-
> > drivers/gpu/drm/xe/xe_guc.c | 13 +-
> > drivers/gpu/drm/xe/xe_guc_debugfs.c | 3 +-
> > drivers/gpu/drm/xe/xe_guc_log.c | 10 +-
> > drivers/gpu/drm/xe/xe_guc_pc.c | 62 ++-----
> > drivers/gpu/drm/xe/xe_guc_submit.c | 11 +-
> > drivers/gpu/drm/xe/xe_guc_tlb_inval.c | 4 +-
> > drivers/gpu/drm/xe/xe_huc.c | 7 +-
> > drivers/gpu/drm/xe/xe_huc_debugfs.c | 3 +-
> > drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c | 6 +-
> > drivers/gpu/drm/xe/xe_hwmon.c | 16 +-
> > drivers/gpu/drm/xe/xe_mocs.c | 18 +--
> > drivers/gpu/drm/xe/xe_oa.c | 9 +-
> > drivers/gpu/drm/xe/xe_oa_types.h | 3 +
> > drivers/gpu/drm/xe/xe_pat.c | 36 ++---
> > drivers/gpu/drm/xe/xe_pci_sriov.c | 3 +-
> > drivers/gpu/drm/xe/xe_pm.h | 17 ++
> > drivers/gpu/drm/xe/xe_pxp.c | 55 +++----
> > drivers/gpu/drm/xe/xe_query.c | 16 +-
> > drivers/gpu/drm/xe/xe_reg_sr.c | 17 +-
> > drivers/gpu/drm/xe/xe_sriov_pf_debugfs.c | 6 +-
> > drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c | 6 +-
> > drivers/gpu/drm/xe/xe_sriov_vf_ccs.c | 5 +-
> > drivers/gpu/drm/xe/xe_tile_debugfs.c | 3 +-
> > drivers/gpu/drm/xe/xe_tile_sriov_pf_debugfs.c | 3 +-
> > drivers/gpu/drm/xe/xe_vram.c | 7 +-
> > 50 files changed, 387 insertions(+), 604 deletions(-)
>
> --
> Jani Nikula, Intel
--
Matt Roper
Graphics Software Engineer
Linux GPU Platform Enablement
Intel Corporation
* Re: [PATCH v2 00/30] Scope-based forcewake and runtime PM
2025-11-10 23:20 [PATCH v2 00/30] Scope-based forcewake and runtime PM Matt Roper
` (33 preceding siblings ...)
2025-11-11 10:57 ` [PATCH v2 00/30] Scope-based forcewake and runtime PM Jani Nikula
@ 2025-11-13 22:11 ` Matt Roper
34 siblings, 0 replies; 74+ messages in thread
From: Matt Roper @ 2025-11-13 22:11 UTC (permalink / raw)
To: intel-xe
On Mon, Nov 10, 2025 at 03:20:18PM -0800, Matt Roper wrote:
> Forcewake and runtime PM both follow reference-counted get/put models;
> when used in functions that can encounter errors and return early, it's
> easy for developers to make mistakes and fail to drop a reference on all
> of the error paths. Cleanup of these reference counts is often
> addressed by goto-based error handling which is somewhat ugly and
> subject to its own set of mistakes once we accumulate too many error
> labels in a function.
>
> Scope-based cleanup ([1][2]) has been gaining increasing popularity in
> the Linux kernel for cleaning up various kinds of resources in a more
> automated way when code has lots of error paths and early exits. Let's
> add scope-based cleanup for both forcewake and runtime PM, based on the
> mechanisms provided in include/linux/cleanup.h. Scope-based cleanup
> allows cleanup destructors to be executed automatically when the current
> scope is exited by any means (end of block, return, break, etc.).
>
> For xe_runtime_pm_{get,put} pairs that were grabbed and released within
> a single function or block, the preferred replacement is now just
>
> guard(xe_pm_runtime_noresume)(xe);
>
> which will take care of releasing the runtime PM reference
> automatically. scoped_guard() can be used instead if the reference
> should only be held over part of the block. There are also guard
> variants added for xe_pm_runtime_noresume and xe_pm_runtime_ioctl that
> allow replacement of those alternate functions as well.
>
> Unlike runtime PM, where all reference tracking is done within the
> object parameter itself, forcewake is currently a model where get
> operations return a cookie that needs to be passed back to put
> operations. That necessitates a slightly different type of cleanup
> helper (CLASS instead of guard), although the underlying mechanisms are
> the same. For forcewake that is grabbed and released within a single
> function or block, the preferred form is now:
>
> CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
>
> which, like the runtime PM equivalent, will cause the forcewake
> reference to be dropped automatically. If forcewake needs to be held
> over only a subset of the current block,
>
> xe_with_force_wake(fw_ref, gt_to_fw(gt), XE_FW_GT) { ... }
>
> can be used in the same way scoped_guard() is used for runtime PM.
>
> The first few patches in this series make some general cleanups and
> restructuring of the existing force wake code. Then the new guards and
> classes for runtime PM and forcewake are defined. Finally, most of the
> existing runtime PM and forcewake usage in the driver is converted to
> the scope-based form in the remainder of the series. Some of the
> conversions eliminate goto-based cleanup models and/or significantly
> simplify the code. Other conversions don't significantly simplify the
> code (aside from a slight reduction in line count), but are still useful
> for consistency across our codebase.
>
> An advantage of doing the conversion everywhere possible, not just the
> places where it noticeably simplifies the code, is that it helps
> highlight the remaining get/put usage as special cases where wake
> references follow more complicated lifetimes (e.g., obtained in one
> function and released in a different one, often tied to some other type
> of resource or operation). With fewer direct get/put calls overall, it's
> easier to identify the ones that remain as special cases and make sure
> they truly are paired up properly.
>
> There are other areas where scope-based cleanup could potentially be
> applied in the future (e.g., mutex locks, bo locking, etc.), but this
> series does not try to address those, even in places where those
> resources are also part of the same error handling cleanup paths as
> forcewake and runtime PM. We can potentially think about converting
> other types of resources to scope-based cleanup down the road if it
> winds up working well here for forcewake and PM.
>
> v2:
> - Add a proper success condition to the xe_pm_runtime_ioctl class so
> that conditional guards properly distinguish success (requiring
> cleanup) from errors (no cleanup). (CI)
> - Don't bother changing the signature of xe_force_wake_get() before
> adding the forcewake class. Simply create a separate constructor
> that wraps xe_force_wake_get(). (Michal)
> - Split ACQUIRE_ERR assignments from the condition checks. Even though
> this takes an extra line of code and deviates from most of the other
> uses in the kernel, it's easier to read (and avoids checkpatch
> warnings).
>
>
> References:
> [1] https://www.kernel.org/doc/html/next/core-api/cleanup.html
> [2] https://lwn.net/Articles/934679/
>
>
> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
> Cc: Lucas De Marchi <lucas.demarchi@intel.com>
>
> Matt Roper (30):
> drm/xe/forcewake: Improve kerneldoc
> drm/xe/eustall: Store forcewake reference in stream structure
> drm/xe/oa: Store forcewake reference in stream structure
I went ahead and pushed the first three patches here since they're just
simple cleanup patches that are useful even independently of the rest of
this series. The xe_wedged failure reported by CI isn't related to any
of these three patches.
Matt
> drm/xe/forcewake: Add scope-based cleanup for forcewake
> drm/xe/pm: Add scope-based cleanup helper for runtime PM
> drm/xe/gt: Use scope-based cleanup
> drm/xe/gt_idle: Use scope-based cleanup
> drm/xe/guc: Use scope-based cleanup
> drm/xe/guc_pc: Use scope-based cleanup
> drm/xe/mocs: Use scope-based cleanup
> drm/xe/pat: Use scope-based forcewake
> drm/xe/pxp: Use scope-based cleanup
> drm/xe/gsc: Use scope-based cleanup
> drm/xe/device: Use scope-based cleanup
> drm/xe/devcoredump: Use scope-based cleanup
> drm/xe/display: Use scoped-cleanup
> drm/xe: Create scoped cleanup class for force_wake_get_any_engine()
> drm/xe/drm_client: Use scope-based cleanup
> drm/xe/gt_debugfs: Use scope-based cleanup
> drm/xe/huc: Use scope-based forcewake
> drm/xe/query: Use scope-based forcewake
> drm/xe/reg_sr: Use scope-based forcewake
> drm/xe/vram: Use scope-based forcewake
> drm/xe/bo: Use scope-based runtime PM
> drm/xe/ggtt: Use scope-based runtime pm
> drm/xe/hwmon: Use scope-based runtime PM
> drm/xe/sriov: Use scope-based runtime PM
> drm/xe/tests: Use scope-based runtime PM
> drm/xe/sysfs: Use scope-based runtime power management
> drm/xe/debugfs: Use scope-based runtime PM
>
> drivers/gpu/drm/xe/display/xe_fb_pin.c | 23 ++-
> drivers/gpu/drm/xe/display/xe_hdcp_gsc.c | 25 +--
> drivers/gpu/drm/xe/tests/xe_bo.c | 10 +-
> drivers/gpu/drm/xe/tests/xe_dma_buf.c | 3 +-
> drivers/gpu/drm/xe/tests/xe_migrate.c | 10 +-
> drivers/gpu/drm/xe/tests/xe_mocs.c | 27 +---
> drivers/gpu/drm/xe/xe_bo.c | 3 +-
> drivers/gpu/drm/xe/xe_debugfs.c | 16 +-
> drivers/gpu/drm/xe/xe_devcoredump.c | 26 ++-
> drivers/gpu/drm/xe/xe_device.c | 33 ++--
> drivers/gpu/drm/xe/xe_device_sysfs.c | 33 ++--
> drivers/gpu/drm/xe/xe_drm_client.c | 83 +++++-----
> drivers/gpu/drm/xe/xe_eu_stall.c | 8 +-
> drivers/gpu/drm/xe/xe_force_wake.h | 28 ++++
> drivers/gpu/drm/xe/xe_force_wake_types.h | 26 ++-
> drivers/gpu/drm/xe/xe_ggtt.c | 3 +-
> drivers/gpu/drm/xe/xe_gsc.c | 21 +--
> drivers/gpu/drm/xe/xe_gsc_debugfs.c | 3 +-
> drivers/gpu/drm/xe/xe_gsc_proxy.c | 17 +-
> drivers/gpu/drm/xe/xe_gt.c | 151 ++++++------------
> drivers/gpu/drm/xe/xe_gt_debugfs.c | 29 +---
> drivers/gpu/drm/xe/xe_gt_freq.c | 27 ++--
> drivers/gpu/drm/xe/xe_gt_idle.c | 32 ++--
> drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c | 12 +-
> drivers/gpu/drm/xe/xe_gt_throttle.c | 3 +-
> drivers/gpu/drm/xe/xe_guc.c | 13 +-
> drivers/gpu/drm/xe/xe_guc_debugfs.c | 3 +-
> drivers/gpu/drm/xe/xe_guc_log.c | 10 +-
> drivers/gpu/drm/xe/xe_guc_pc.c | 62 ++-----
> drivers/gpu/drm/xe/xe_guc_submit.c | 11 +-
> drivers/gpu/drm/xe/xe_guc_tlb_inval.c | 4 +-
> drivers/gpu/drm/xe/xe_huc.c | 7 +-
> drivers/gpu/drm/xe/xe_huc_debugfs.c | 3 +-
> drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c | 6 +-
> drivers/gpu/drm/xe/xe_hwmon.c | 16 +-
> drivers/gpu/drm/xe/xe_mocs.c | 18 +--
> drivers/gpu/drm/xe/xe_oa.c | 9 +-
> drivers/gpu/drm/xe/xe_oa_types.h | 3 +
> drivers/gpu/drm/xe/xe_pat.c | 36 ++---
> drivers/gpu/drm/xe/xe_pci_sriov.c | 3 +-
> drivers/gpu/drm/xe/xe_pm.h | 17 ++
> drivers/gpu/drm/xe/xe_pxp.c | 55 +++----
> drivers/gpu/drm/xe/xe_query.c | 16 +-
> drivers/gpu/drm/xe/xe_reg_sr.c | 17 +-
> drivers/gpu/drm/xe/xe_sriov_pf_debugfs.c | 6 +-
> drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c | 6 +-
> drivers/gpu/drm/xe/xe_sriov_vf_ccs.c | 5 +-
> drivers/gpu/drm/xe/xe_tile_debugfs.c | 3 +-
> drivers/gpu/drm/xe/xe_tile_sriov_pf_debugfs.c | 3 +-
> drivers/gpu/drm/xe/xe_vram.c | 7 +-
> 50 files changed, 387 insertions(+), 604 deletions(-)
>
> --
> 2.51.1
>
--
Matt Roper
Graphics Software Engineer
Linux GPU Platform Enablement
Intel Corporation