* [PATCH 00/33] Scope-based forcewake and runtime PM
@ 2025-11-07 18:13 Matt Roper
2025-11-07 18:13 ` [PATCH 01/33] drm/xe/forcewake: Improve kerneldoc Matt Roper
` (37 more replies)
0 siblings, 38 replies; 44+ messages in thread
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Forcewake and runtime PM both follow reference-counted get/put models;
when used in functions that can encounter errors and return early, it's
easy for developers to make mistakes and fail to drop a reference on all
of the error paths. Cleanup of these reference counts is often
addressed by goto-based error handling, which is somewhat ugly and
subject to its own set of mistakes once we accumulate too many error
labels in a function.
Scope-based cleanup ([1][2]) has been gaining popularity in
the Linux kernel for cleaning up various kinds of resources in a more
automated way when code has lots of error paths and early exits. Let's
add scope-based cleanup for both forcewake and runtime PM, based on the
mechanisms provided in include/linux/cleanup.h. Scope-based cleanup
allows cleanup destructors to be executed automatically when the current
scope is exited by any means (end of block, return, break, etc.).
For xe_pm_runtime_{get,put} pairs that were grabbed and released within
a single function or block, the preferred replacement is now just
guard(xe_pm_runtime)(xe);
which will take care of releasing the runtime PM reference
automatically. scoped_guard() can be used instead if the reference
should only be held over part of the block. There are also guard
variants added for xe_pm_runtime_noresume and xe_pm_runtime_ioctl that
allow replacement of those alternate functions as well.
Unlike runtime PM, where all reference tracking is done within the
object parameter itself, forcewake is currently a model where get
operations return a cookie that needs to be passed back to put
operations. That necessitates a slightly different type of cleanup
helper (CLASS instead of guard), although the underlying mechanisms are
the same. For forcewake that is grabbed and released within a single
function or block, the preferred form is now:
CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
which, like the runtime PM equivalent, will cause the forcewake
reference to be dropped automatically. If forcewake needs to be held
over only a subset of the current block,
xe_with_force_wake(fw_ref, gt_to_fw(gt), XE_FW_GT) { ... }
can be used in the same way scoped_guard() is used for runtime PM.
The first few patches in this series make some general cleanups and
restructuring of the existing force wake code. Then the new guards and
classes for runtime PM and forcewake are defined. Finally, most of the
existing runtime PM and forcewake usage in the driver is converted to
the scope-based form in the remainder of the series. Some of the
conversions eliminate goto-based cleanup models and/or significantly
simplify the code. Other conversions don't significantly simplify the
code (aside from a slight reduction in line count), but are still useful
for consistency across our codebase.
An advantage of doing the conversion everywhere possible, not just the
places where it noticeably simplifies the code, is that it helps
highlight the remaining get/put usage as special cases where wake
references follow more complicated lifetimes (e.g., obtained in one
function and released in a different one, often tied to some other type
of resource or operation). With fewer direct get/put calls overall, it's
easier to identify the ones that remain as special cases and make sure
they truly are paired up properly.
There are other areas where scope-based cleanup could potentially be
applied in the future (e.g., mutex locks, bo locking, etc.), but this
series does not try to address those, even in places where those
resources are also part of the same error handling cleanup paths as
forcewake and runtime PM. We can potentially think about converting
other types of resources to scope-based cleanup down the road if it
winds up working well here for forcewake and PM.
References:
[1] https://www.kernel.org/doc/html/next/core-api/cleanup.html
[2] https://lwn.net/Articles/934679/
Matt Roper (33):
drm/xe/forcewake: Improve kerneldoc
drm/xe/eustall: Store forcewake reference in stream structure
drm/xe/oa: Store forcewake reference in stream structure
drm/xe/forcewake: Create dedicated type for forcewake references
squash! drm/xe/forcewake: Create dedicated type for forcewake
references
squash! squash! drm/xe/forcewake: Create dedicated type for forcewake
references
drm/xe/forcewake: Add scope-based cleanup for forcewake
drm/xe/pm: Add scope-based cleanup helper for runtime PM
drm/xe/gt: Use scope-based cleanup
drm/xe/gt_idle: Use scope-based cleanup
drm/xe/guc: Use scope-based cleanup
drm/xe/guc_pc: Use scope-based cleanup
drm/xe/mocs: Use scope-based cleanup
drm/xe/pat: Use scope-based forcewake
drm/xe/pxp: Use scope-based cleanup
drm/xe/gsc: Use scope-based cleanup
drm/xe/device: Use scope-based cleanup
drm/xe/devcoredump: Use scope-based cleanup
drm/xe/display: Use scoped-cleanup
drm/xe: Create scoped cleanup class for force_wake_get_any_engine()
drm/xe/drm_client: Use scope-based cleanup
drm/xe/gt_debugfs: Use scope-based cleanup
drm/xe/huc: Use scope-based forcewake
drm/xe/query: Use scope-based forcewake
drm/xe/reg_sr: Use scope-based forcewake
drm/xe/vram: Use scope-based forcewake
drm/xe/bo: Use scope-based runtime PM
drm/xe/ggtt: Use scope-based runtime pm
drm/xe/hwmon: Use scope-based runtime PM
drm/xe/sriov: Use scope-based runtime PM
drm/xe/tests: Use scope-based runtime PM
drm/xe/sysfs: Use scope-based runtime power management
drm/xe/debugfs: Use scope-based runtime PM
drivers/gpu/drm/xe/display/xe_fb_pin.c | 24 ++-
drivers/gpu/drm/xe/display/xe_hdcp_gsc.c | 25 +--
drivers/gpu/drm/xe/tests/xe_bo.c | 10 +-
drivers/gpu/drm/xe/tests/xe_dma_buf.c | 3 +-
drivers/gpu/drm/xe/tests/xe_migrate.c | 10 +-
drivers/gpu/drm/xe/tests/xe_mocs.c | 27 +---
drivers/gpu/drm/xe/xe_bo.c | 3 +-
drivers/gpu/drm/xe/xe_debugfs.c | 39 +++--
drivers/gpu/drm/xe/xe_devcoredump.c | 26 ++-
drivers/gpu/drm/xe/xe_device.c | 35 ++--
drivers/gpu/drm/xe/xe_device_sysfs.c | 33 ++--
drivers/gpu/drm/xe/xe_drm_client.c | 77 +++++----
drivers/gpu/drm/xe/xe_eu_stall.c | 8 +-
drivers/gpu/drm/xe/xe_force_wake.c | 19 ++-
drivers/gpu/drm/xe/xe_force_wake.h | 23 ++-
drivers/gpu/drm/xe/xe_force_wake_types.h | 41 ++++-
drivers/gpu/drm/xe/xe_ggtt.c | 3 +-
drivers/gpu/drm/xe/xe_gsc.c | 28 ++--
drivers/gpu/drm/xe/xe_gsc_debugfs.c | 3 +-
drivers/gpu/drm/xe/xe_gsc_proxy.c | 17 +-
drivers/gpu/drm/xe/xe_gt.c | 149 ++++++------------
drivers/gpu/drm/xe/xe_gt_debugfs.c | 29 +---
drivers/gpu/drm/xe/xe_gt_freq.c | 27 ++--
drivers/gpu/drm/xe/xe_gt_idle.c | 32 ++--
drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c | 12 +-
drivers/gpu/drm/xe/xe_gt_throttle.c | 3 +-
drivers/gpu/drm/xe/xe_guc.c | 13 +-
drivers/gpu/drm/xe/xe_guc_debugfs.c | 3 +-
drivers/gpu/drm/xe/xe_guc_log.c | 10 +-
drivers/gpu/drm/xe/xe_guc_pc.c | 62 ++------
drivers/gpu/drm/xe/xe_guc_submit.c | 9 +-
drivers/gpu/drm/xe/xe_guc_tlb_inval.c | 4 +-
drivers/gpu/drm/xe/xe_huc.c | 7 +-
drivers/gpu/drm/xe/xe_huc_debugfs.c | 3 +-
drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c | 6 +-
drivers/gpu/drm/xe/xe_hwmon.c | 16 +-
drivers/gpu/drm/xe/xe_mocs.c | 18 +--
drivers/gpu/drm/xe/xe_oa.c | 9 +-
drivers/gpu/drm/xe/xe_oa_types.h | 3 +
drivers/gpu/drm/xe/xe_pat.c | 36 ++---
drivers/gpu/drm/xe/xe_pci_sriov.c | 3 +-
drivers/gpu/drm/xe/xe_pm.h | 17 ++
drivers/gpu/drm/xe/xe_pmu.c | 10 +-
drivers/gpu/drm/xe/xe_pxp.c | 49 ++----
drivers/gpu/drm/xe/xe_query.c | 16 +-
drivers/gpu/drm/xe/xe_reg_sr.c | 17 +-
drivers/gpu/drm/xe/xe_sriov_pf_debugfs.c | 6 +-
drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c | 6 +-
drivers/gpu/drm/xe/xe_sriov_vf_ccs.c | 5 +-
drivers/gpu/drm/xe/xe_tile_debugfs.c | 3 +-
drivers/gpu/drm/xe/xe_tile_sriov_pf_debugfs.c | 3 +-
drivers/gpu/drm/xe/xe_vram.c | 7 +-
52 files changed, 422 insertions(+), 625 deletions(-)
--
2.51.1
^ permalink raw reply [flat|nested] 44+ messages in thread
* [PATCH 01/33] drm/xe/forcewake: Improve kerneldoc
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
@ 2025-11-07 18:13 ` Matt Roper
2025-11-10 23:33 ` Summers, Stuart
2025-11-07 18:13 ` [PATCH 02/33] drm/xe/eustall: Store forcewake reference in stream structure Matt Roper
` (36 subsequent siblings)
37 siblings, 1 reply; 44+ messages in thread
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Improve the kerneldoc for forcewake a bit to give more detail about what
the structures represent.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_force_wake_types.h | 26 ++++++++++++++++++++++--
1 file changed, 24 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_force_wake_types.h b/drivers/gpu/drm/xe/xe_force_wake_types.h
index 12d6e2367455..9cfa28faf7bc 100644
--- a/drivers/gpu/drm/xe/xe_force_wake_types.h
+++ b/drivers/gpu/drm/xe/xe_force_wake_types.h
@@ -52,7 +52,22 @@ enum xe_force_wake_domains {
};
/**
- * struct xe_force_wake_domain - Xe force wake domains
+ * struct xe_force_wake_domain - Xe force wake power domain
+ *
+ * Represents an individual device-internal power domain. The driver must
+ * ensure the power domain is awake before accessing registers or other
+ * hardware functionality that is part of the power domain. Since different
+ * driver threads may access hardware units simultaneously, a reference count
+ * is used to ensure that the domain remains awake as long as any software
+ * is using the part of the hardware covered by the power domain.
+ *
+ * Hardware provides a register interface to allow the driver to request
+ * wake/sleep of power domains, although in most cases the actual action of
+ * powering the hardware up/down is handled by firmware (and may be subject to
+ * requirements and constraints outside of the driver's visibility) so the
+ * driver needs to wait for an acknowledgment that a wake request has been
+ * acted upon before accessing the parts of the hardware that reside within the
+ * power domain.
*/
struct xe_force_wake_domain {
/** @id: domain force wake id */
@@ -70,7 +85,14 @@ struct xe_force_wake_domain {
};
/**
- * struct xe_force_wake - Xe force wake
+ * struct xe_force_wake - Xe force wake collection
+ *
+ * Represents a collection of related power domains (struct
+ * xe_force_wake_domain) associated with a subunit of the device.
+ *
+ * Currently only used for GT power domains (where the term "forcewake" is used
+ * in the hardware documentation), although the interface could be extended to
+ * power wells in other parts of the hardware in the future.
*/
struct xe_force_wake {
/** @gt: back pointers to GT */
--
2.51.1
* [PATCH 02/33] drm/xe/eustall: Store forcewake reference in stream structure
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
2025-11-07 18:13 ` [PATCH 01/33] drm/xe/forcewake: Improve kerneldoc Matt Roper
@ 2025-11-07 18:13 ` Matt Roper
2025-11-07 19:52 ` Harish Chegondi
2025-11-07 18:13 ` [PATCH 03/33] drm/xe/oa: " Matt Roper
` (35 subsequent siblings)
37 siblings, 1 reply; 44+ messages in thread
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper, Harish Chegondi
Calls to xe_force_wake_put() should generally pass the exact reference
returned by xe_force_wake_get(). Since EU stall grabs and releases
forcewake in different functions, xe_eu_stall_disable_locked() is
currently calling put with a hardcoded RENDER domain. Although this
works for now, it's somewhat fragile in case the power domain(s)
required by stall sampling change in the future, or if workarounds show
up that require us to obtain additional domains.
Stash the original reference obtained during stream enable inside the
stream structure so that we can use it directly when the stream is
disabled.
Cc: Harish Chegondi <harish.chegondi@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_eu_stall.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_eu_stall.c b/drivers/gpu/drm/xe/xe_eu_stall.c
index 650e45f6a7c7..97dfb7945b7a 100644
--- a/drivers/gpu/drm/xe/xe_eu_stall.c
+++ b/drivers/gpu/drm/xe/xe_eu_stall.c
@@ -49,6 +49,7 @@ struct xe_eu_stall_data_stream {
wait_queue_head_t poll_wq;
size_t data_record_size;
size_t per_xecore_buf_size;
+ unsigned int fw_ref;
struct xe_gt *gt;
struct xe_bo *bo;
@@ -660,13 +661,12 @@ static int xe_eu_stall_stream_enable(struct xe_eu_stall_data_stream *stream)
struct per_xecore_buf *xecore_buf;
struct xe_gt *gt = stream->gt;
u16 group, instance;
- unsigned int fw_ref;
int xecore;
/* Take runtime pm ref and forcewake to disable RC6 */
xe_pm_runtime_get(gt_to_xe(gt));
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_RENDER);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FW_RENDER)) {
+ stream->fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_RENDER);
+ if (!xe_force_wake_ref_has_domain(stream->fw_ref, XE_FW_RENDER)) {
xe_gt_err(gt, "Failed to get RENDER forcewake\n");
xe_pm_runtime_put(gt_to_xe(gt));
return -ETIMEDOUT;
@@ -832,7 +832,7 @@ static int xe_eu_stall_disable_locked(struct xe_eu_stall_data_stream *stream)
xe_gt_mcr_multicast_write(gt, ROW_CHICKEN2,
_MASKED_BIT_DISABLE(DISABLE_DOP_GATING));
- xe_force_wake_put(gt_to_fw(gt), XE_FW_RENDER);
+ xe_force_wake_put(gt_to_fw(gt), stream->fw_ref);
xe_pm_runtime_put(gt_to_xe(gt));
return 0;
--
2.51.1
* [PATCH 03/33] drm/xe/oa: Store forcewake reference in stream structure
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
2025-11-07 18:13 ` [PATCH 01/33] drm/xe/forcewake: Improve kerneldoc Matt Roper
2025-11-07 18:13 ` [PATCH 02/33] drm/xe/eustall: Store forcewake reference in stream structure Matt Roper
@ 2025-11-07 18:13 ` Matt Roper
2025-11-07 18:13 ` [PATCH 04/33] drm/xe/forcewake: Create dedicated type for forcewake references Matt Roper
` (34 subsequent siblings)
37 siblings, 0 replies; 44+ messages in thread
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper, Ashutosh Dixit
Calls to xe_force_wake_put() should generally pass the exact reference
returned by xe_force_wake_get(). Since OA grabs and releases forcewake
in different functions, xe_oa_stream_destroy() is currently calling put
with a hardcoded ALL mask. Although this works for now, it's somewhat
fragile in case OA moves to more precise power domain management in the
future.
Stash the original reference obtained during stream initialization
inside the stream structure so that we can use it directly when the
stream is destroyed.
Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_oa.c | 9 ++++-----
drivers/gpu/drm/xe/xe_oa_types.h | 3 +++
2 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_oa.c b/drivers/gpu/drm/xe/xe_oa.c
index 7a13a7bd99a6..87a2bf53d661 100644
--- a/drivers/gpu/drm/xe/xe_oa.c
+++ b/drivers/gpu/drm/xe/xe_oa.c
@@ -870,7 +870,7 @@ static void xe_oa_stream_destroy(struct xe_oa_stream *stream)
xe_oa_free_oa_buffer(stream);
- xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+ xe_force_wake_put(gt_to_fw(gt), stream->fw_ref);
xe_pm_runtime_put(stream->oa->xe);
/* Wa_1509372804:pvc: Unset the override of GUCRC mode to enable rc6 */
@@ -1717,7 +1717,6 @@ static int xe_oa_stream_init(struct xe_oa_stream *stream,
struct xe_oa_open_param *param)
{
struct xe_gt *gt = param->hwe->gt;
- unsigned int fw_ref;
int ret;
stream->exec_q = param->exec_q;
@@ -1772,8 +1771,8 @@ static int xe_oa_stream_init(struct xe_oa_stream *stream,
/* Take runtime pm ref and forcewake to disable RC6 */
xe_pm_runtime_get(stream->oa->xe);
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
+ stream->fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+ if (!xe_force_wake_ref_has_domain(stream->fw_ref, XE_FORCEWAKE_ALL)) {
ret = -ETIMEDOUT;
goto err_fw_put;
}
@@ -1818,7 +1817,7 @@ static int xe_oa_stream_init(struct xe_oa_stream *stream,
err_free_oa_buf:
xe_oa_free_oa_buffer(stream);
err_fw_put:
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(gt_to_fw(gt), stream->fw_ref);
xe_pm_runtime_put(stream->oa->xe);
if (stream->override_gucrc)
xe_gt_WARN_ON(gt, xe_guc_pc_unset_gucrc_mode(>->uc.guc.pc));
diff --git a/drivers/gpu/drm/xe/xe_oa_types.h b/drivers/gpu/drm/xe/xe_oa_types.h
index daf701b5d48b..cf080f412189 100644
--- a/drivers/gpu/drm/xe/xe_oa_types.h
+++ b/drivers/gpu/drm/xe/xe_oa_types.h
@@ -264,5 +264,8 @@ struct xe_oa_stream {
/** @syncs: syncs to wait on and to signal */
struct xe_sync_entry *syncs;
+
+ /** @fw_ref: Forcewake reference */
+ unsigned int fw_ref;
};
#endif
--
2.51.1
* [PATCH 04/33] drm/xe/forcewake: Create dedicated type for forcewake references
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
` (2 preceding siblings ...)
2025-11-07 18:13 ` [PATCH 03/33] drm/xe/oa: " Matt Roper
@ 2025-11-07 18:13 ` Matt Roper
2025-11-07 19:27 ` Michal Wajdeczko
2025-11-07 18:13 ` [PATCH 05/33] squash! " Matt Roper
` (33 subsequent siblings)
37 siblings, 1 reply; 44+ messages in thread
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
xe_force_wake_get() currently returns an integer mask of power domains
that were successfully awoken; both this mask and a pointer to the force
wake collection must be passed to xe_force_wake_put() to release the
wake reference.
Create a dedicated structure type to hold both the mask and the
collection pointer. While this change does little on its own, it will
make it easier for us to add scope-based cleanup of forcewake in the
future.
FIXME:
For ease of review, this patch contains only the manual changes to
add the structure and change the get/put function definitions; it
does not build on its own since the rest of the driver is still
trying to call the get/put functions with the old signature. The
next patch contains the Coccinelle-generated changes necessary
elsewhere in the driver to adapt to the new interface. The two
patches will be squashed together when applied; they remain separate
for now to help reviewers.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_debugfs.c | 23 ++++++++++++++++++-----
drivers/gpu/drm/xe/xe_drm_client.c | 4 ++--
drivers/gpu/drm/xe/xe_eu_stall.c | 2 +-
drivers/gpu/drm/xe/xe_force_wake.c | 19 ++++++++++++-------
drivers/gpu/drm/xe/xe_force_wake.h | 11 ++++++-----
drivers/gpu/drm/xe/xe_force_wake_types.h | 15 +++++++++++++++
drivers/gpu/drm/xe/xe_oa_types.h | 2 +-
drivers/gpu/drm/xe/xe_pmu.c | 6 ++----
8 files changed, 57 insertions(+), 25 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_debugfs.c b/drivers/gpu/drm/xe/xe_debugfs.c
index e91da9589c5f..2d858110922b 100644
--- a/drivers/gpu/drm/xe/xe_debugfs.c
+++ b/drivers/gpu/drm/xe/xe_debugfs.c
@@ -198,7 +198,7 @@ static int forcewake_open(struct inode *inode, struct file *file)
struct xe_device *xe = inode->i_private;
struct xe_gt *gt;
u8 id, last_gt;
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
xe_pm_runtime_get(xe);
for_each_gt(gt, xe, id) {
@@ -213,10 +213,19 @@ static int forcewake_open(struct inode *inode, struct file *file)
err_fw_get:
for_each_gt(gt, xe, id) {
+ struct xe_force_wake_ref all_fw_ref;
+
+ /*
+ * A bit of a hack since we didn't save the actual forcewake
+ * reference above.
+ */
+ all_fw_ref.fw = gt_to_fw(gt);
+ all_fw_ref.domains = XE_FORCEWAKE_ALL;
+
if (id < last_gt)
- xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+ xe_force_wake_put(all_fw_ref);
else if (id == last_gt)
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
else
break;
}
@@ -228,11 +237,15 @@ static int forcewake_open(struct inode *inode, struct file *file)
static int forcewake_release(struct inode *inode, struct file *file)
{
struct xe_device *xe = inode->i_private;
+ struct xe_force_wake_ref all_fw_ref;
struct xe_gt *gt;
u8 id;
- for_each_gt(gt, xe, id)
- xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+ all_fw_ref.domains = XE_FORCEWAKE_ALL;
+ for_each_gt(gt, xe, id) {
+ all_fw_ref.fw = gt_to_fw(gt);
+ xe_force_wake_put(all_fw_ref);
+ }
xe_pm_runtime_put(xe);
return 0;
diff --git a/drivers/gpu/drm/xe/xe_drm_client.c b/drivers/gpu/drm/xe/xe_drm_client.c
index f931ff9b1ec0..60a6ea7c88e4 100644
--- a/drivers/gpu/drm/xe/xe_drm_client.c
+++ b/drivers/gpu/drm/xe/xe_drm_client.c
@@ -287,7 +287,7 @@ static struct xe_hw_engine *any_engine(struct xe_device *xe)
static bool force_wake_get_any_engine(struct xe_device *xe,
struct xe_hw_engine **phwe,
- unsigned int *pfw_ref)
+ struct xe_force_wake_ref *pfw_ref)
{
enum xe_force_wake_domains domain;
unsigned int fw_ref;
@@ -322,7 +322,7 @@ static void show_run_ticks(struct drm_printer *p, struct drm_file *file)
struct xe_hw_engine *hwe;
struct xe_exec_queue *q;
u64 gpu_timestamp;
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
/*
* RING_TIMESTAMP registers are inaccessible in VF mode.
diff --git a/drivers/gpu/drm/xe/xe_eu_stall.c b/drivers/gpu/drm/xe/xe_eu_stall.c
index 97dfb7945b7a..8b3da9ae6888 100644
--- a/drivers/gpu/drm/xe/xe_eu_stall.c
+++ b/drivers/gpu/drm/xe/xe_eu_stall.c
@@ -49,7 +49,7 @@ struct xe_eu_stall_data_stream {
wait_queue_head_t poll_wq;
size_t data_record_size;
size_t per_xecore_buf_size;
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
struct xe_gt *gt;
struct xe_bo *bo;
diff --git a/drivers/gpu/drm/xe/xe_force_wake.c b/drivers/gpu/drm/xe/xe_force_wake.c
index c59a9b330697..2e675536b36e 100644
--- a/drivers/gpu/drm/xe/xe_force_wake.c
+++ b/drivers/gpu/drm/xe/xe_force_wake.c
@@ -169,11 +169,12 @@ static int domain_sleep_wait(struct xe_gt *gt,
* Return: opaque reference to woken domains or zero if none of requested
* domains were awake.
*/
-unsigned int __must_check xe_force_wake_get(struct xe_force_wake *fw,
- enum xe_force_wake_domains domains)
+struct xe_force_wake_ref __must_check xe_force_wake_get(struct xe_force_wake *fw,
+ enum xe_force_wake_domains domains)
{
struct xe_gt *gt = fw->gt;
struct xe_force_wake_domain *domain;
+ struct xe_force_wake_ref fw_ref;
unsigned int ref_incr = 0, awake_rqst = 0, awake_failed = 0;
unsigned int tmp, ref_rqst;
unsigned long flags;
@@ -208,7 +209,10 @@ unsigned int __must_check xe_force_wake_get(struct xe_force_wake *fw,
if (domains == XE_FORCEWAKE_ALL && ref_incr == fw->initialized_domains)
ref_incr |= XE_FORCEWAKE_ALL;
- return ref_incr;
+ fw_ref.fw = fw;
+ fw_ref.domains = ref_incr;
+
+ return fw_ref;
}
/**
@@ -221,8 +225,9 @@ unsigned int __must_check xe_force_wake_get(struct xe_force_wake *fw,
* and waits for acknowledgment for domain to sleep within 50 milisec timeout.
* Warns in case of timeout of ack from domain.
*/
-void xe_force_wake_put(struct xe_force_wake *fw, unsigned int fw_ref)
+void xe_force_wake_put(struct xe_force_wake_ref fw_ref)
{
+ struct xe_force_wake *fw = fw_ref.fw;
struct xe_gt *gt = fw->gt;
struct xe_force_wake_domain *domain;
unsigned int tmp, sleep = 0;
@@ -233,14 +238,14 @@ void xe_force_wake_put(struct xe_force_wake *fw, unsigned int fw_ref)
* Avoid unnecessary lock and unlock when the function is called
* in error path of individual domains.
*/
- if (!fw_ref)
+ if (!fw_ref.domains)
return;
if (xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
- fw_ref = fw->initialized_domains;
+ fw_ref.domains = fw->initialized_domains;
spin_lock_irqsave(&fw->lock, flags);
- for_each_fw_domain_masked(domain, fw_ref, fw, tmp) {
+ for_each_fw_domain_masked(domain, fw_ref.domains, fw, tmp) {
xe_gt_assert(gt, domain->ref);
if (!--domain->ref) {
diff --git a/drivers/gpu/drm/xe/xe_force_wake.h b/drivers/gpu/drm/xe/xe_force_wake.h
index 0e3e84bfa51c..86e9bca7cac9 100644
--- a/drivers/gpu/drm/xe/xe_force_wake.h
+++ b/drivers/gpu/drm/xe/xe_force_wake.h
@@ -15,9 +15,9 @@ void xe_force_wake_init_gt(struct xe_gt *gt,
struct xe_force_wake *fw);
void xe_force_wake_init_engines(struct xe_gt *gt,
struct xe_force_wake *fw);
-unsigned int __must_check xe_force_wake_get(struct xe_force_wake *fw,
- enum xe_force_wake_domains domains);
-void xe_force_wake_put(struct xe_force_wake *fw, unsigned int fw_ref);
+struct xe_force_wake_ref __must_check xe_force_wake_get(struct xe_force_wake *fw,
+ enum xe_force_wake_domains domains);
+void xe_force_wake_put(struct xe_force_wake_ref fw_ref);
static inline int
xe_force_wake_ref(struct xe_force_wake *fw,
@@ -56,9 +56,10 @@ xe_force_wake_assert_held(struct xe_force_wake *fw,
* Return: true if domain is refcounted.
*/
static inline bool
-xe_force_wake_ref_has_domain(unsigned int fw_ref, enum xe_force_wake_domains domain)
+xe_force_wake_ref_has_domain(struct xe_force_wake_ref fw_ref,
+ enum xe_force_wake_domains domain)
{
- return fw_ref & domain;
+ return fw_ref.domains & domain;
}
#endif
diff --git a/drivers/gpu/drm/xe/xe_force_wake_types.h b/drivers/gpu/drm/xe/xe_force_wake_types.h
index 9cfa28faf7bc..26df4adba4c5 100644
--- a/drivers/gpu/drm/xe/xe_force_wake_types.h
+++ b/drivers/gpu/drm/xe/xe_force_wake_types.h
@@ -107,4 +107,19 @@ struct xe_force_wake {
struct xe_force_wake_domain domains[XE_FW_DOMAIN_ID_COUNT];
};
+/**
+ * struct xe_force_wake_ref - Xe force wake reference
+ *
+ * Represents a wakeref for a subset of the power domains belonging to an
+ * xe_force_wake collection. Returned by xe_force_wake_get() and passed
+ * to xe_force_wake_put().
+ */
+struct xe_force_wake_ref {
+ /** @fw: back pointer to force wake collection */
+ struct xe_force_wake *fw;
+
+ /** @domains: mask of individual domains held by this reference */
+ unsigned int domains;
+};
+
#endif
diff --git a/drivers/gpu/drm/xe/xe_oa_types.h b/drivers/gpu/drm/xe/xe_oa_types.h
index cf080f412189..84bd5018d0f3 100644
--- a/drivers/gpu/drm/xe/xe_oa_types.h
+++ b/drivers/gpu/drm/xe/xe_oa_types.h
@@ -266,6 +266,6 @@ struct xe_oa_stream {
struct xe_sync_entry *syncs;
/** @fw_ref: Forcewake reference */
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
};
#endif
diff --git a/drivers/gpu/drm/xe/xe_pmu.c b/drivers/gpu/drm/xe/xe_pmu.c
index c63335eb69e5..dbd95327f9fc 100644
--- a/drivers/gpu/drm/xe/xe_pmu.c
+++ b/drivers/gpu/drm/xe/xe_pmu.c
@@ -214,12 +214,10 @@ static bool event_param_valid(struct perf_event *event)
static void xe_pmu_event_destroy(struct perf_event *event)
{
struct xe_device *xe = container_of(event->pmu, typeof(*xe), pmu.base);
- struct xe_gt *gt;
- unsigned int *fw_ref = event->pmu_private;
+ struct xe_force_wake_ref *fw_ref = event->pmu_private;
if (fw_ref) {
- gt = xe_device_get_gt(xe, config_to_gt_id(event->attr.config));
- xe_force_wake_put(gt_to_fw(gt), *fw_ref);
+ xe_force_wake_put(*fw_ref);
kfree(fw_ref);
event->pmu_private = NULL;
}
--
2.51.1
* [PATCH 05/33] squash! drm/xe/forcewake: Create dedicated type for forcewake references
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
` (3 preceding siblings ...)
2025-11-07 18:13 ` [PATCH 04/33] drm/xe/forcewake: Create dedicated type for forcewake references Matt Roper
@ 2025-11-07 18:13 ` Matt Roper
2025-11-07 18:13 ` [PATCH 06/33] squash! " Matt Roper
` (32 subsequent siblings)
37 siblings, 0 replies; 44+ messages in thread
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
The changes here were generated with Coccinelle from the following
semantic patch:
@@
identifier ref;
expression fw, domains;
@@
(
- unsigned int ref;
+ struct xe_force_wake_ref ref;
|
- unsigned int ref = 0;
+ struct xe_force_wake_ref ref;
)
<+...
ref = xe_force_wake_get(fw, domains);
...+>
@@
expression fw, ref;
@@
- xe_force_wake_put(fw, ref);
+ xe_force_wake_put(ref);
@@
struct xe_force_wake_ref ref;
@@
- !ref
+ !ref.domains
This patch should be squashed into the previous patch before merging.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/display/xe_hdcp_gsc.c | 6 +--
drivers/gpu/drm/xe/tests/xe_mocs.c | 6 +--
drivers/gpu/drm/xe/xe_devcoredump.c | 8 ++--
drivers/gpu/drm/xe/xe_device.c | 18 ++++----
drivers/gpu/drm/xe/xe_drm_client.c | 6 +--
drivers/gpu/drm/xe/xe_eu_stall.c | 2 +-
drivers/gpu/drm/xe/xe_gsc.c | 17 +++----
drivers/gpu/drm/xe/xe_gsc_proxy.c | 6 +--
drivers/gpu/drm/xe/xe_gt.c | 58 ++++++++++++------------
drivers/gpu/drm/xe/xe_gt_debugfs.c | 4 +-
drivers/gpu/drm/xe/xe_gt_idle.c | 20 ++++----
drivers/gpu/drm/xe/xe_guc.c | 10 ++--
drivers/gpu/drm/xe/xe_guc_log.c | 6 +--
drivers/gpu/drm/xe/xe_guc_pc.c | 16 +++----
drivers/gpu/drm/xe/xe_guc_submit.c | 4 +-
drivers/gpu/drm/xe/xe_guc_tlb_inval.c | 4 +-
drivers/gpu/drm/xe/xe_huc.c | 6 +--
drivers/gpu/drm/xe/xe_mocs.c | 2 +-
drivers/gpu/drm/xe/xe_oa.c | 4 +-
drivers/gpu/drm/xe/xe_pat.c | 36 +++++++--------
drivers/gpu/drm/xe/xe_pxp.c | 16 +++----
drivers/gpu/drm/xe/xe_query.c | 6 +--
drivers/gpu/drm/xe/xe_reg_sr.c | 6 +--
drivers/gpu/drm/xe/xe_vram.c | 6 +--
24 files changed, 137 insertions(+), 136 deletions(-)
diff --git a/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c b/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
index 4ae847b628e2..80fd3844c41c 100644
--- a/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
+++ b/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
@@ -37,7 +37,7 @@ bool intel_hdcp_gsc_check_status(struct drm_device *drm)
struct xe_gt *gt = tile->media_gt;
struct xe_gsc *gsc = >->uc.gsc;
bool ret = true;
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
if (!gsc || !xe_uc_fw_is_enabled(&gsc->fw)) {
drm_dbg_kms(&xe->drm,
@@ -47,7 +47,7 @@ bool intel_hdcp_gsc_check_status(struct drm_device *drm)
xe_pm_runtime_get(xe);
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GSC);
- if (!fw_ref) {
+ if (!fw_ref.domains) {
drm_dbg_kms(&xe->drm,
"failed to get forcewake to check proxy status\n");
ret = false;
@@ -57,7 +57,7 @@ bool intel_hdcp_gsc_check_status(struct drm_device *drm)
if (!xe_gsc_proxy_init_done(gsc))
ret = false;
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
out:
xe_pm_runtime_put(xe);
return ret;
diff --git a/drivers/gpu/drm/xe/tests/xe_mocs.c b/drivers/gpu/drm/xe/tests/xe_mocs.c
index 0e502feaca81..9c774b44328e 100644
--- a/drivers/gpu/drm/xe/tests/xe_mocs.c
+++ b/drivers/gpu/drm/xe/tests/xe_mocs.c
@@ -48,7 +48,7 @@ static void read_l3cc_table(struct xe_gt *gt,
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
KUNIT_ASSERT_TRUE_MSG(test, true, "Forcewake Failed.\n");
}
@@ -74,7 +74,7 @@ static void read_l3cc_table(struct xe_gt *gt,
KUNIT_EXPECT_EQ_MSG(test, l3cc_expected, l3cc,
"l3cc idx=%u has incorrect val.\n", i);
}
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
}
static void read_mocs_table(struct xe_gt *gt,
@@ -107,7 +107,7 @@ static void read_mocs_table(struct xe_gt *gt,
"mocs reg 0x%x has incorrect val.\n", i);
}
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
}
static int mocs_kernel_test_run_device(struct xe_device *xe)
diff --git a/drivers/gpu/drm/xe/xe_devcoredump.c b/drivers/gpu/drm/xe/xe_devcoredump.c
index 203e3038cc81..eb0b40ffffaa 100644
--- a/drivers/gpu/drm/xe/xe_devcoredump.c
+++ b/drivers/gpu/drm/xe/xe_devcoredump.c
@@ -276,7 +276,7 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
struct xe_devcoredump_snapshot *ss = container_of(work, typeof(*ss), work);
struct xe_devcoredump *coredump = container_of(ss, typeof(*coredump), snapshot);
struct xe_device *xe = coredump_to_xe(coredump);
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
/*
* NB: Despite passing a GFP_ flags parameter here, more allocations are done
@@ -295,7 +295,7 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
xe_gt_info(ss->gt, "failed to get forcewake for coredump capture\n");
xe_vm_snapshot_capture_delayed(ss->vm);
xe_guc_exec_queue_snapshot_capture_delayed(ss->ge);
- xe_force_wake_put(gt_to_fw(ss->gt), fw_ref);
+ xe_force_wake_put(fw_ref);
ss->read.chunk_position = 0;
@@ -332,7 +332,7 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
struct xe_devcoredump_snapshot *ss = &coredump->snapshot;
struct xe_guc *guc = exec_queue_to_guc(q);
const char *process_name = "no process";
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
bool cookie;
ss->snapshot_time = ktime_get_real();
@@ -364,7 +364,7 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
queue_work(system_unbound_wq, &ss->work);
- xe_force_wake_put(gt_to_fw(q->gt), fw_ref);
+ xe_force_wake_put(fw_ref);
dma_fence_end_signalling(cookie);
}
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index c7d373c70f0f..9ae2b29a1cab 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -775,7 +775,7 @@ ALLOW_ERROR_INJECTION(xe_device_probe_early, ERRNO); /* See xe_pci_probe() */
static int probe_has_flat_ccs(struct xe_device *xe)
{
struct xe_gt *gt;
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
u32 reg;
/* Always enabled/disabled, no runtime check to do */
@@ -787,7 +787,7 @@ static int probe_has_flat_ccs(struct xe_device *xe)
return 0;
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ if (!fw_ref.domains)
return -ETIMEDOUT;
reg = xe_gt_mcr_unicast_read_any(gt, XE2_FLAT_CCS_BASE_RANGE_LOWER);
@@ -797,7 +797,7 @@ static int probe_has_flat_ccs(struct xe_device *xe)
drm_dbg(&xe->drm,
"Flat CCS has been disabled in bios, May lead to performance impact");
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
return 0;
}
@@ -1034,7 +1034,7 @@ void xe_device_wmb(struct xe_device *xe)
*/
static void tdf_request_sync(struct xe_device *xe)
{
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
struct xe_gt *gt;
u8 id;
@@ -1043,7 +1043,7 @@ static void tdf_request_sync(struct xe_device *xe)
continue;
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ if (!fw_ref.domains)
return;
xe_mmio_write32(&gt->mmio, XE2_TDF_CTRL, TRANSIENT_FLUSH_REQUEST);
@@ -1059,14 +1059,14 @@ static void tdf_request_sync(struct xe_device *xe)
150, NULL, false))
xe_gt_err_once(gt, "TD flush timeout\n");
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
}
}
void xe_device_l2_flush(struct xe_device *xe)
{
struct xe_gt *gt;
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
gt = xe_root_mmio_gt(xe);
if (!gt)
@@ -1076,7 +1076,7 @@ void xe_device_l2_flush(struct xe_device *xe)
return;
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ if (!fw_ref.domains)
return;
spin_lock(&gt->global_invl_lock);
@@ -1087,7 +1087,7 @@ void xe_device_l2_flush(struct xe_device *xe)
spin_unlock(&gt->global_invl_lock);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
}
/**
diff --git a/drivers/gpu/drm/xe/xe_drm_client.c b/drivers/gpu/drm/xe/xe_drm_client.c
index 60a6ea7c88e4..182526864286 100644
--- a/drivers/gpu/drm/xe/xe_drm_client.c
+++ b/drivers/gpu/drm/xe/xe_drm_client.c
@@ -290,7 +290,7 @@ static bool force_wake_get_any_engine(struct xe_device *xe,
struct xe_force_wake_ref *pfw_ref)
{
enum xe_force_wake_domains domain;
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
struct xe_hw_engine *hwe;
struct xe_force_wake *fw;
@@ -303,7 +303,7 @@ static bool force_wake_get_any_engine(struct xe_device *xe,
fw_ref = xe_force_wake_get(fw, domain);
if (!xe_force_wake_ref_has_domain(fw_ref, domain)) {
- xe_force_wake_put(fw, fw_ref);
+ xe_force_wake_put(fw_ref);
return false;
}
@@ -360,7 +360,7 @@ static void show_run_ticks(struct drm_printer *p, struct drm_file *file)
gpu_timestamp = xe_hw_engine_read_timestamp(hwe);
- xe_force_wake_put(gt_to_fw(hwe->gt), fw_ref);
+ xe_force_wake_put(fw_ref);
xe_pm_runtime_put(xe);
for (class = 0; class < XE_ENGINE_CLASS_MAX; class++) {
diff --git a/drivers/gpu/drm/xe/xe_eu_stall.c b/drivers/gpu/drm/xe/xe_eu_stall.c
index 8b3da9ae6888..95b2bfd403ad 100644
--- a/drivers/gpu/drm/xe/xe_eu_stall.c
+++ b/drivers/gpu/drm/xe/xe_eu_stall.c
@@ -832,7 +832,7 @@ static int xe_eu_stall_disable_locked(struct xe_eu_stall_data_stream *stream)
xe_gt_mcr_multicast_write(gt, ROW_CHICKEN2,
_MASKED_BIT_DISABLE(DISABLE_DOP_GATING));
- xe_force_wake_put(gt_to_fw(gt), stream->fw_ref);
+ xe_force_wake_put(stream->fw_ref);
xe_pm_runtime_put(gt_to_xe(gt));
return 0;
diff --git a/drivers/gpu/drm/xe/xe_gsc.c b/drivers/gpu/drm/xe/xe_gsc.c
index dd69cb834f8e..59519c9023bd 100644
--- a/drivers/gpu/drm/xe/xe_gsc.c
+++ b/drivers/gpu/drm/xe/xe_gsc.c
@@ -263,7 +263,7 @@ static int gsc_upload_and_init(struct xe_gsc *gsc)
{
struct xe_gt *gt = gsc_to_gt(gsc);
struct xe_tile *tile = gt_to_tile(gt);
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
int ret;
if (tile->primary_gt && XE_GT_WA(tile->primary_gt, 14018094691)) {
@@ -281,8 +281,8 @@ static int gsc_upload_and_init(struct xe_gsc *gsc)
ret = gsc_upload(gsc);
- if (tile->primary_gt && XE_GT_WA(tile->primary_gt, 14018094691))
- xe_force_wake_put(gt_to_fw(tile->primary_gt), fw_ref);
+ if (tile->primary_gt && XE_GT_WA(tile->primary_gt, 14018094691))
+ xe_force_wake_put(fw_ref);
if (ret)
return ret;
@@ -352,7 +353,7 @@ static void gsc_work(struct work_struct *work)
struct xe_gsc *gsc = container_of(work, typeof(*gsc), work);
struct xe_gt *gt = gsc_to_gt(gsc);
struct xe_device *xe = gt_to_xe(gt);
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
u32 actions;
int ret;
@@ -382,7 +383,7 @@ static void gsc_work(struct work_struct *work)
xe_gsc_proxy_request_handler(gsc);
out:
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
xe_pm_runtime_put(xe);
}
@@ -615,7 +616,7 @@ void xe_gsc_print_info(struct xe_gsc *gsc, struct drm_printer *p)
{
struct xe_gt *gt = gsc_to_gt(gsc);
struct xe_mmio *mmio = &gt->mmio;
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
xe_uc_fw_print(&gsc->fw, p);
@@ -625,7 +626,7 @@ void xe_gsc_print_info(struct xe_gsc *gsc, struct drm_printer *p)
return;
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GSC);
- if (!fw_ref)
+ if (!fw_ref.domains)
return;
drm_printf(p, "\nHECI1 FWSTS: 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x\n",
@@ -636,5 +637,5 @@ void xe_gsc_print_info(struct xe_gsc *gsc, struct drm_printer *p)
xe_mmio_read32(mmio, HECI_FWSTS5(MTL_GSC_HECI1_BASE)),
xe_mmio_read32(mmio, HECI_FWSTS6(MTL_GSC_HECI1_BASE)));
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
}
diff --git a/drivers/gpu/drm/xe/xe_gsc_proxy.c b/drivers/gpu/drm/xe/xe_gsc_proxy.c
index 464282a89eef..ba1211fe5a60 100644
--- a/drivers/gpu/drm/xe/xe_gsc_proxy.c
+++ b/drivers/gpu/drm/xe/xe_gsc_proxy.c
@@ -440,7 +440,7 @@ static void xe_gsc_proxy_remove(void *arg)
struct xe_gsc *gsc = arg;
struct xe_gt *gt = gsc_to_gt(gsc);
struct xe_device *xe = gt_to_xe(gt);
- unsigned int fw_ref = 0;
+ struct xe_force_wake_ref fw_ref;
if (!gsc->proxy.component_added)
return;
@@ -448,13 +448,13 @@ static void xe_gsc_proxy_remove(void *arg)
/* disable HECI2 IRQs */
xe_pm_runtime_get(xe);
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GSC);
- if (!fw_ref)
+ if (!fw_ref.domains)
xe_gt_err(gt, "failed to get forcewake to disable GSC interrupts\n");
/* try do disable irq even if forcewake failed */
gsc_proxy_irq_toggle(gsc, false);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
xe_pm_runtime_put(xe);
xe_gsc_wait_for_worker_completion(gsc);
diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
index 6d479948bf21..d39bf8cb64eb 100644
--- a/drivers/gpu/drm/xe/xe_gt.c
+++ b/drivers/gpu/drm/xe/xe_gt.c
@@ -103,14 +103,14 @@ void xe_gt_sanitize(struct xe_gt *gt)
static void xe_gt_enable_host_l2_vram(struct xe_gt *gt)
{
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
u32 reg;
if (!XE_GT_WA(gt, 16023588340))
return;
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ if (!fw_ref.domains)
return;
if (xe_gt_is_main_type(gt)) {
@@ -120,12 +120,12 @@ static void xe_gt_enable_host_l2_vram(struct xe_gt *gt)
}
xe_gt_mcr_multicast_write(gt, XEHPC_L3CLOS_MASK(3), 0xF);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
}
static void xe_gt_disable_host_l2_vram(struct xe_gt *gt)
{
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
u32 reg;
if (!XE_GT_WA(gt, 16023588340))
@@ -135,14 +135,14 @@ static void xe_gt_disable_host_l2_vram(struct xe_gt *gt)
return;
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ if (!fw_ref.domains)
return;
reg = xe_gt_mcr_unicast_read_any(gt, XE2_GAMREQSTRM_CTRL);
reg &= ~CG_DIS_CNTLBUS;
xe_gt_mcr_multicast_write(gt, XE2_GAMREQSTRM_CTRL, reg);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
}
static void gt_reset_worker(struct work_struct *w);
@@ -389,7 +389,7 @@ int xe_gt_record_default_lrcs(struct xe_gt *gt)
int xe_gt_init_early(struct xe_gt *gt)
{
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
int err;
if (IS_SRIOV_PF(gt_to_xe(gt))) {
@@ -437,12 +437,12 @@ int xe_gt_init_early(struct xe_gt *gt)
return err;
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ if (!fw_ref.domains)
return -ETIMEDOUT;
xe_gt_mcr_init_early(gt);
xe_pat_init(gt);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
return 0;
}
@@ -460,11 +460,11 @@ static void dump_pat_on_error(struct xe_gt *gt)
static int gt_init_with_gt_forcewake(struct xe_gt *gt)
{
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
int err;
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ if (!fw_ref.domains)
return -ETIMEDOUT;
err = xe_uc_init(&gt->uc);
@@ -510,18 +510,18 @@ static int gt_init_with_gt_forcewake(struct xe_gt *gt)
*/
gt->info.gmdid = xe_mmio_read32(&gt->mmio, GMD_ID);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
return 0;
err_force_wake:
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
return err;
}
static int gt_init_with_all_forcewake(struct xe_gt *gt)
{
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
int err;
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
@@ -592,12 +592,12 @@ static int gt_init_with_all_forcewake(struct xe_gt *gt)
if (IS_SRIOV_PF(gt_to_xe(gt)))
xe_gt_sriov_pf_init_hw(gt);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
return 0;
err_force_wake:
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
return err;
}
@@ -819,7 +819,7 @@ static int do_gt_restart(struct xe_gt *gt)
static void gt_reset_worker(struct work_struct *w)
{
struct xe_gt *gt = container_of(w, typeof(*gt), reset.worker);
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
int err;
if (xe_device_wedged(gt_to_xe(gt)))
@@ -863,7 +863,7 @@ static void gt_reset_worker(struct work_struct *w)
if (err)
goto err_out;
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
/* Pair with get while enqueueing the work in xe_gt_reset_async() */
xe_pm_runtime_put(gt_to_xe(gt));
@@ -873,7 +873,7 @@ static void gt_reset_worker(struct work_struct *w)
return;
err_out:
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
XE_WARN_ON(xe_uc_start(&gt->uc));
err_fail:
@@ -902,18 +902,18 @@ void xe_gt_reset_async(struct xe_gt *gt)
void xe_gt_suspend_prepare(struct xe_gt *gt)
{
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
xe_uc_suspend_prepare(&gt->uc);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
}
int xe_gt_suspend(struct xe_gt *gt)
{
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
int err;
xe_gt_dbg(gt, "suspending\n");
@@ -931,7 +931,7 @@ int xe_gt_suspend(struct xe_gt *gt)
xe_gt_disable_host_l2_vram(gt);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
xe_gt_dbg(gt, "suspended\n");
return 0;
@@ -939,7 +939,7 @@ int xe_gt_suspend(struct xe_gt *gt)
err_msg:
err = -ETIMEDOUT;
err_force_wake:
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
xe_gt_err(gt, "suspend failed (%pe)\n", ERR_PTR(err));
return err;
@@ -947,11 +947,11 @@ int xe_gt_suspend(struct xe_gt *gt)
void xe_gt_shutdown(struct xe_gt *gt)
{
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
do_gt_reset(gt);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
}
/**
@@ -976,7 +976,7 @@ int xe_gt_sanitize_freq(struct xe_gt *gt)
int xe_gt_resume(struct xe_gt *gt)
{
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
int err;
xe_gt_dbg(gt, "resuming\n");
@@ -990,7 +990,7 @@ int xe_gt_resume(struct xe_gt *gt)
xe_gt_idle_enable_pg(gt);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
xe_gt_dbg(gt, "resumed\n");
return 0;
@@ -998,7 +998,7 @@ int xe_gt_resume(struct xe_gt *gt)
err_msg:
err = -ETIMEDOUT;
err_force_wake:
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
xe_gt_err(gt, "resume failed (%pe)\n", ERR_PTR(err));
return err;
diff --git a/drivers/gpu/drm/xe/xe_gt_debugfs.c b/drivers/gpu/drm/xe/xe_gt_debugfs.c
index e4fd632f43cf..0b2c5d3ff8bb 100644
--- a/drivers/gpu/drm/xe/xe_gt_debugfs.c
+++ b/drivers/gpu/drm/xe/xe_gt_debugfs.c
@@ -118,7 +118,7 @@ static int hw_engines(struct xe_gt *gt, struct drm_printer *p)
{
struct xe_hw_engine *hwe;
enum xe_hw_engine_id id;
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
int ret = 0;
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
@@ -131,7 +131,7 @@ static int hw_engines(struct xe_gt *gt, struct drm_printer *p)
xe_hw_engine_print(hwe, p);
fw_put:
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
return ret;
}
diff --git a/drivers/gpu/drm/xe/xe_gt_idle.c b/drivers/gpu/drm/xe/xe_gt_idle.c
index bdc9d9877ec4..503aeabbe4c3 100644
--- a/drivers/gpu/drm/xe/xe_gt_idle.c
+++ b/drivers/gpu/drm/xe/xe_gt_idle.c
@@ -103,7 +103,7 @@ void xe_gt_idle_enable_pg(struct xe_gt *gt)
struct xe_gt_idle *gtidle = &gt->gtidle;
struct xe_mmio *mmio = &gt->mmio;
u32 vcs_mask, vecs_mask;
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
int i, j;
if (IS_SRIOV_VF(xe))
@@ -146,13 +146,13 @@ void xe_gt_idle_enable_pg(struct xe_gt *gt)
}
xe_mmio_write32(mmio, POWERGATE_ENABLE, gtidle->powergate_enable);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
}
void xe_gt_idle_disable_pg(struct xe_gt *gt)
{
struct xe_gt_idle *gtidle = &gt->gtidle;
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
if (IS_SRIOV_VF(gt_to_xe(gt)))
return;
@@ -162,7 +162,7 @@ void xe_gt_idle_disable_pg(struct xe_gt *gt)
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
xe_mmio_write32(&gt->mmio, POWERGATE_ENABLE, gtidle->powergate_enable);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
}
/**
@@ -181,7 +181,7 @@ int xe_gt_idle_pg_print(struct xe_gt *gt, struct drm_printer *p)
enum xe_gt_idle_state state;
u32 pg_enabled, pg_status = 0;
u32 vcs_mask, vecs_mask;
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
int n;
/*
* Media Slices
@@ -219,13 +219,13 @@ int xe_gt_idle_pg_print(struct xe_gt *gt, struct drm_printer *p)
/* Do not wake the GT to read powergating status */
if (state != GT_IDLE_C6) {
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ if (!fw_ref.domains)
return -ETIMEDOUT;
pg_enabled = xe_mmio_read32(&gt->mmio, POWERGATE_ENABLE);
pg_status = xe_mmio_read32(&gt->mmio, POWERGATE_DOMAIN_STATUS);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
}
if (gt->info.engine_mask & XE_HW_ENGINE_RCS_MASK) {
@@ -396,7 +396,7 @@ void xe_gt_idle_enable_c6(struct xe_gt *gt)
int xe_gt_idle_disable_c6(struct xe_gt *gt)
{
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
xe_device_assert_mem_access(gt_to_xe(gt));
@@ -404,13 +404,13 @@ int xe_gt_idle_disable_c6(struct xe_gt *gt)
return 0;
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ if (!fw_ref.domains)
return -ETIMEDOUT;
xe_mmio_write32(&gt->mmio, RC_CONTROL, 0);
xe_mmio_write32(&gt->mmio, RC_STATE, 0);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
return 0;
}
diff --git a/drivers/gpu/drm/xe/xe_guc.c b/drivers/gpu/drm/xe/xe_guc.c
index ecc3e091b89e..edff47a3235a 100644
--- a/drivers/gpu/drm/xe/xe_guc.c
+++ b/drivers/gpu/drm/xe/xe_guc.c
@@ -658,11 +658,11 @@ static void guc_fini_hw(void *arg)
{
struct xe_guc *guc = arg;
struct xe_gt *gt = guc_to_gt(guc);
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
xe_uc_sanitize_reset(&guc_to_gt(guc)->uc);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
guc_g2g_fini(guc);
}
@@ -1610,7 +1610,7 @@ int xe_guc_start(struct xe_guc *guc)
void xe_guc_print_info(struct xe_guc *guc, struct drm_printer *p)
{
struct xe_gt *gt = guc_to_gt(guc);
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
u32 status;
int i;
@@ -1618,7 +1618,7 @@ void xe_guc_print_info(struct xe_guc *guc, struct drm_printer *p)
if (!IS_SRIOV_VF(gt_to_xe(gt))) {
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ if (!fw_ref.domains)
return;
status = xe_mmio_read32(&gt->mmio, GUC_STATUS);
@@ -1639,7 +1639,7 @@ void xe_guc_print_info(struct xe_guc *guc, struct drm_printer *p)
i, xe_mmio_read32(&gt->mmio, SOFT_SCRATCH(i)));
}
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
}
drm_puts(p, "\n");
diff --git a/drivers/gpu/drm/xe/xe_guc_log.c b/drivers/gpu/drm/xe/xe_guc_log.c
index c01ccb35dc75..f1d25e542f98 100644
--- a/drivers/gpu/drm/xe/xe_guc_log.c
+++ b/drivers/gpu/drm/xe/xe_guc_log.c
@@ -145,7 +145,7 @@ struct xe_guc_log_snapshot *xe_guc_log_snapshot_capture(struct xe_guc_log *log,
struct xe_device *xe = log_to_xe(log);
struct xe_guc *guc = log_to_guc(log);
struct xe_gt *gt = log_to_gt(log);
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
size_t remain;
int i;
@@ -166,11 +166,11 @@ struct xe_guc_log_snapshot *xe_guc_log_snapshot_capture(struct xe_guc_log *log,
}
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref) {
+ if (!fw_ref.domains) {
snapshot->stamp = ~0ULL;
} else {
snapshot->stamp = xe_mmio_read64_2x32(&gt->mmio, GUC_PMTIMESTAMP_LO);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
}
snapshot->ktime = ktime_get_boottime_ns();
snapshot->level = log->level;
diff --git a/drivers/gpu/drm/xe/xe_guc_pc.c b/drivers/gpu/drm/xe/xe_guc_pc.c
index ff22235857f8..034c87d7bf10 100644
--- a/drivers/gpu/drm/xe/xe_guc_pc.c
+++ b/drivers/gpu/drm/xe/xe_guc_pc.c
@@ -511,7 +511,7 @@ u32 xe_guc_pc_get_cur_freq_fw(struct xe_guc_pc *pc)
int xe_guc_pc_get_cur_freq(struct xe_guc_pc *pc, u32 *freq)
{
struct xe_gt *gt = pc_to_gt(pc);
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
/*
* GuC SLPC plays with cur freq request when GuCRC is enabled
@@ -519,13 +519,13 @@ int xe_guc_pc_get_cur_freq(struct xe_guc_pc *pc, u32 *freq)
*/
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
if (!xe_force_wake_ref_has_domain(fw_ref, XE_FW_GT)) {
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
return -ETIMEDOUT;
}
*freq = get_cur_freq(gt);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
return 0;
}
@@ -1223,7 +1223,7 @@ int xe_guc_pc_start(struct xe_guc_pc *pc)
struct xe_device *xe = pc_to_xe(pc);
struct xe_gt *gt = pc_to_gt(pc);
u32 size = PAGE_ALIGN(sizeof(struct slpc_shared_data));
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
ktime_t earlier;
int ret;
@@ -1231,7 +1231,7 @@ int xe_guc_pc_start(struct xe_guc_pc *pc)
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
if (!xe_force_wake_ref_has_domain(fw_ref, XE_FW_GT)) {
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
return -ETIMEDOUT;
}
@@ -1298,7 +1298,7 @@ int xe_guc_pc_start(struct xe_guc_pc *pc)
xe_gt_err(gt, "Failed to set SLPC power profile: %pe\n", ERR_PTR(ret));
out:
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
return ret;
}
@@ -1330,7 +1330,7 @@ static void xe_guc_pc_fini_hw(void *arg)
{
struct xe_guc_pc *pc = arg;
struct xe_device *xe = pc_to_xe(pc);
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
if (xe_device_wedged(xe))
return;
@@ -1342,7 +1342,7 @@ static void xe_guc_pc_fini_hw(void *arg)
/* Bind requested freq to mert_freq_cap before unload */
pc_set_cur_freq(pc, min(pc_max_freq_cap(pc), pc->rpe_freq));
- xe_force_wake_put(gt_to_fw(pc_to_gt(pc)), fw_ref);
+ xe_force_wake_put(fw_ref);
}
/**
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index d4ffdb71ef3d..40514c270d6b 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -1225,7 +1225,7 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
struct xe_guc *guc = exec_queue_to_guc(q);
const char *process_name = "no process";
struct xe_device *xe = guc_to_xe(guc);
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
int err = -ETIME;
pid_t pid = -1;
int i = 0;
@@ -1264,7 +1264,7 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
xe_engine_snapshot_capture_for_queue(q);
- xe_force_wake_put(gt_to_fw(q->gt), fw_ref);
+ xe_force_wake_put(fw_ref);
}
/*
diff --git a/drivers/gpu/drm/xe/xe_guc_tlb_inval.c b/drivers/gpu/drm/xe/xe_guc_tlb_inval.c
index a80175c7c478..321e5072b43e 100644
--- a/drivers/gpu/drm/xe/xe_guc_tlb_inval.c
+++ b/drivers/gpu/drm/xe/xe_guc_tlb_inval.c
@@ -71,7 +71,7 @@ static int send_tlb_inval_ggtt(struct xe_tlb_inval *tlb_inval, u32 seqno)
return send_tlb_inval(guc, action, ARRAY_SIZE(action));
} else if (xe_device_uc_enabled(xe) && !xe_device_wedged(xe)) {
struct xe_mmio *mmio = &gt->mmio;
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
if (IS_SRIOV_VF(xe))
return -ECANCELED;
@@ -86,7 +86,7 @@ static int send_tlb_inval_ggtt(struct xe_tlb_inval *tlb_inval, u32 seqno)
xe_mmio_write32(mmio, GUC_TLB_INV_CR,
GUC_TLB_INV_CR_INVALIDATE);
}
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
}
return -ECANCELED;
diff --git a/drivers/gpu/drm/xe/xe_huc.c b/drivers/gpu/drm/xe/xe_huc.c
index 0a70c8924582..5136e310515e 100644
--- a/drivers/gpu/drm/xe/xe_huc.c
+++ b/drivers/gpu/drm/xe/xe_huc.c
@@ -300,7 +300,7 @@ void xe_huc_sanitize(struct xe_huc *huc)
void xe_huc_print_info(struct xe_huc *huc, struct drm_printer *p)
{
struct xe_gt *gt = huc_to_gt(huc);
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
xe_uc_fw_print(&huc->fw, p);
@@ -308,11 +308,11 @@ void xe_huc_print_info(struct xe_huc *huc, struct drm_printer *p)
return;
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ if (!fw_ref.domains)
return;
drm_printf(p, "\nHuC status: 0x%08x\n",
xe_mmio_read32(&gt->mmio, HUC_KERNEL_LOAD_INFO));
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
}
diff --git a/drivers/gpu/drm/xe/xe_mocs.c b/drivers/gpu/drm/xe/xe_mocs.c
index 6613d3b48a84..73f9401d9907 100644
--- a/drivers/gpu/drm/xe/xe_mocs.c
+++ b/drivers/gpu/drm/xe/xe_mocs.c
@@ -828,7 +828,7 @@ int xe_mocs_dump(struct xe_gt *gt, struct drm_printer *p)
table.ops->dump(&table, flags, gt, p);
err_fw:
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
xe_pm_runtime_put(xe);
return err;
}
diff --git a/drivers/gpu/drm/xe/xe_oa.c b/drivers/gpu/drm/xe/xe_oa.c
index 87a2bf53d661..2e38391c44f7 100644
--- a/drivers/gpu/drm/xe/xe_oa.c
+++ b/drivers/gpu/drm/xe/xe_oa.c
@@ -870,7 +870,7 @@ static void xe_oa_stream_destroy(struct xe_oa_stream *stream)
xe_oa_free_oa_buffer(stream);
- xe_force_wake_put(gt_to_fw(gt), stream->fw_ref);
+ xe_force_wake_put(stream->fw_ref);
xe_pm_runtime_put(stream->oa->xe);
/* Wa_1509372804:pvc: Unset the override of GUCRC mode to enable rc6 */
@@ -1817,7 +1817,7 @@ static int xe_oa_stream_init(struct xe_oa_stream *stream,
err_free_oa_buf:
xe_oa_free_oa_buffer(stream);
err_fw_put:
- xe_force_wake_put(gt_to_fw(gt), stream->fw_ref);
+ xe_force_wake_put(stream->fw_ref);
xe_pm_runtime_put(stream->oa->xe);
if (stream->override_gucrc)
xe_gt_WARN_ON(gt, xe_guc_pc_unset_gucrc_mode(&gt->uc.guc.pc));
diff --git a/drivers/gpu/drm/xe/xe_pat.c b/drivers/gpu/drm/xe/xe_pat.c
index 68171cceea18..2a963b8ff807 100644
--- a/drivers/gpu/drm/xe/xe_pat.c
+++ b/drivers/gpu/drm/xe/xe_pat.c
@@ -233,11 +233,11 @@ static void program_pat_mcr(struct xe_gt *gt, const struct xe_pat_table_entry ta
static int xelp_dump(struct xe_gt *gt, struct drm_printer *p)
{
struct xe_device *xe = gt_to_xe(gt);
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
int i;
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ if (!fw_ref.domains)
return -ETIMEDOUT;
drm_printf(p, "PAT table:\n");
@@ -250,7 +250,7 @@ static int xelp_dump(struct xe_gt *gt, struct drm_printer *p)
XELP_MEM_TYPE_STR_MAP[mem_type], pat);
}
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
return 0;
}
@@ -262,11 +262,11 @@ static const struct xe_pat_ops xelp_pat_ops = {
static int xehp_dump(struct xe_gt *gt, struct drm_printer *p)
{
struct xe_device *xe = gt_to_xe(gt);
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
int i;
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ if (!fw_ref.domains)
return -ETIMEDOUT;
drm_printf(p, "PAT table:\n");
@@ -281,7 +281,7 @@ static int xehp_dump(struct xe_gt *gt, struct drm_printer *p)
XELP_MEM_TYPE_STR_MAP[mem_type], pat);
}
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
return 0;
}
@@ -293,11 +293,11 @@ static const struct xe_pat_ops xehp_pat_ops = {
static int xehpc_dump(struct xe_gt *gt, struct drm_printer *p)
{
struct xe_device *xe = gt_to_xe(gt);
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
int i;
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ if (!fw_ref.domains)
return -ETIMEDOUT;
drm_printf(p, "PAT table:\n");
@@ -310,7 +310,7 @@ static int xehpc_dump(struct xe_gt *gt, struct drm_printer *p)
REG_FIELD_GET(XEHPC_CLOS_LEVEL_MASK, pat), pat);
}
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
return 0;
}
@@ -322,11 +322,11 @@ static const struct xe_pat_ops xehpc_pat_ops = {
static int xelpg_dump(struct xe_gt *gt, struct drm_printer *p)
{
struct xe_device *xe = gt_to_xe(gt);
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
int i;
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ if (!fw_ref.domains)
return -ETIMEDOUT;
drm_printf(p, "PAT table:\n");
@@ -344,7 +344,7 @@ static int xelpg_dump(struct xe_gt *gt, struct drm_printer *p)
REG_FIELD_GET(XELPG_INDEX_COH_MODE_MASK, pat), pat);
}
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
return 0;
}
@@ -361,12 +361,12 @@ static const struct xe_pat_ops xelpg_pat_ops = {
static int xe2_dump(struct xe_gt *gt, struct drm_printer *p)
{
struct xe_device *xe = gt_to_xe(gt);
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
u32 pat;
int i;
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ if (!fw_ref.domains)
return -ETIMEDOUT;
drm_printf(p, "PAT table: (* = reserved entry)\n");
@@ -406,7 +406,7 @@ static int xe2_dump(struct xe_gt *gt, struct drm_printer *p)
REG_FIELD_GET(XE2_COH_MODE, pat),
pat);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
return 0;
}
@@ -419,12 +419,12 @@ static const struct xe_pat_ops xe2_pat_ops = {
static int xe3p_xpc_dump(struct xe_gt *gt, struct drm_printer *p)
{
struct xe_device *xe = gt_to_xe(gt);
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
u32 pat;
int i;
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ if (!fw_ref.domains)
return -ETIMEDOUT;
drm_printf(p, "PAT table: (* = reserved entry)\n");
@@ -456,7 +456,7 @@ static int xe3p_xpc_dump(struct xe_gt *gt, struct drm_printer *p)
REG_FIELD_GET(XE2_COH_MODE, pat),
pat);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
return 0;
}
diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
index bdbdbbf6a678..5dc536189184 100644
--- a/drivers/gpu/drm/xe/xe_pxp.c
+++ b/drivers/gpu/drm/xe/xe_pxp.c
@@ -58,7 +58,7 @@ bool xe_pxp_is_enabled(const struct xe_pxp *pxp)
static bool pxp_prerequisites_done(const struct xe_pxp *pxp)
{
struct xe_gt *gt = pxp->gt;
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
bool ready;
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
@@ -77,7 +77,7 @@ static bool pxp_prerequisites_done(const struct xe_pxp *pxp)
ready = xe_huc_is_authenticated(&gt->uc.huc, XE_HUC_AUTH_VIA_GSC) &&
xe_gsc_proxy_init_done(&gt->uc.gsc);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
return ready;
}
@@ -135,7 +135,7 @@ static void pxp_invalidate_queues(struct xe_pxp *pxp);
static int pxp_terminate_hw(struct xe_pxp *pxp)
{
struct xe_gt *gt = pxp->gt;
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
int ret = 0;
drm_dbg(&pxp->xe->drm, "Terminating PXP\n");
@@ -162,7 +162,7 @@ static int pxp_terminate_hw(struct xe_pxp *pxp)
ret = xe_pxp_submit_session_invalidation(&pxp->gsc_res, ARB_SESSION);
out:
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
return ret;
}
@@ -326,14 +326,14 @@ static int kcr_pxp_set_status(const struct xe_pxp *pxp, bool enable)
{
u32 val = enable ? _MASKED_BIT_ENABLE(KCR_INIT_ALLOW_DISPLAY_ME_WRITES) :
_MASKED_BIT_DISABLE(KCR_INIT_ALLOW_DISPLAY_ME_WRITES);
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
fw_ref = xe_force_wake_get(gt_to_fw(pxp->gt), XE_FW_GT);
if (!xe_force_wake_ref_has_domain(fw_ref, XE_FW_GT))
return -EIO;
xe_mmio_write32(&pxp->gt->mmio, KCR_INIT, val);
- xe_force_wake_put(gt_to_fw(pxp->gt), fw_ref);
+ xe_force_wake_put(fw_ref);
return 0;
}
@@ -453,7 +453,7 @@ int xe_pxp_init(struct xe_device *xe)
static int __pxp_start_arb_session(struct xe_pxp *pxp)
{
int ret;
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
fw_ref = xe_force_wake_get(gt_to_fw(pxp->gt), XE_FW_GT);
if (!xe_force_wake_ref_has_domain(fw_ref, XE_FW_GT))
@@ -479,7 +479,7 @@ static int __pxp_start_arb_session(struct xe_pxp *pxp)
drm_dbg(&pxp->xe->drm, "PXP ARB session is active\n");
out_force_wake:
- xe_force_wake_put(gt_to_fw(pxp->gt), fw_ref);
+ xe_force_wake_put(fw_ref);
return ret;
}
diff --git a/drivers/gpu/drm/xe/xe_query.c b/drivers/gpu/drm/xe/xe_query.c
index 1c0915e2cc16..6d84651c76cd 100644
--- a/drivers/gpu/drm/xe/xe_query.c
+++ b/drivers/gpu/drm/xe/xe_query.c
@@ -122,7 +122,7 @@ query_engine_cycles(struct xe_device *xe,
__ktime_func_t cpu_clock;
struct xe_hw_engine *hwe;
struct xe_gt *gt;
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
if (IS_SRIOV_VF(xe))
return -EOPNOTSUPP;
@@ -160,14 +160,14 @@ query_engine_cycles(struct xe_device *xe,
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
return -EIO;
}
hwe_read_timestamp(hwe, &resp.engine_cycles, &resp.cpu_timestamp,
&resp.cpu_delta, cpu_clock);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
if (GRAPHICS_VER(xe) >= 20)
resp.width = 64;
diff --git a/drivers/gpu/drm/xe/xe_reg_sr.c b/drivers/gpu/drm/xe/xe_reg_sr.c
index fc8447a838c4..c155880646d4 100644
--- a/drivers/gpu/drm/xe/xe_reg_sr.c
+++ b/drivers/gpu/drm/xe/xe_reg_sr.c
@@ -168,7 +168,7 @@ void xe_reg_sr_apply_mmio(struct xe_reg_sr *sr, struct xe_gt *gt)
{
struct xe_reg_sr_entry *entry;
unsigned long reg;
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
if (xa_empty(&sr->xa))
return;
@@ -185,12 +185,12 @@ void xe_reg_sr_apply_mmio(struct xe_reg_sr *sr, struct xe_gt *gt)
xa_for_each(&sr->xa, reg, entry)
apply_one_mmio(gt, entry);
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
return;
err_force_wake:
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
xe_gt_err(gt, "Failed to apply, err=-ETIMEDOUT\n");
}
diff --git a/drivers/gpu/drm/xe/xe_vram.c b/drivers/gpu/drm/xe/xe_vram.c
index b62a96f8ef9e..8e43ebf8ee3b 100644
--- a/drivers/gpu/drm/xe/xe_vram.c
+++ b/drivers/gpu/drm/xe/xe_vram.c
@@ -245,7 +245,7 @@ static int tile_vram_size(struct xe_tile *tile, u64 *vram_size,
{
struct xe_device *xe = tile_to_xe(tile);
struct xe_gt *gt = tile->primary_gt;
- unsigned int fw_ref;
+ struct xe_force_wake_ref fw_ref;
u64 offset;
u32 reg;
@@ -266,7 +266,7 @@ static int tile_vram_size(struct xe_tile *tile, u64 *vram_size,
}
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref)
+ if (!fw_ref.domains)
return -ETIMEDOUT;
/* actual size */
@@ -289,7 +289,7 @@ static int tile_vram_size(struct xe_tile *tile, u64 *vram_size,
/* remove the tile offset so we have just the available size */
*vram_size = offset - *tile_offset;
- xe_force_wake_put(gt_to_fw(gt), fw_ref);
+ xe_force_wake_put(fw_ref);
return 0;
}
--
2.51.1
^ permalink raw reply related [flat|nested] 44+ messages in thread
* [PATCH 06/33] squash! squash! drm/xe/forcewake: Create dedicated type for forcewake references
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Additional xe_force_wake_ref conversions that are unusual and thus
weren't picked up automatically by Coccinelle and needed to be adjusted
by hand. Most of these are cases where 'fw_ref' was declared on the
same line as other unsigned int variables (I'm sure there's a proper way
to split it out automatically in smpl, but I couldn't figure it out).
This will be squashed into the conversion patch before applying. It
remains separate now to make the conversion easier to review.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/tests/xe_mocs.c | 8 +++++---
drivers/gpu/drm/xe/xe_mocs.c | 3 ++-
drivers/gpu/drm/xe/xe_pmu.c | 4 ++--
3 files changed, 9 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/xe/tests/xe_mocs.c b/drivers/gpu/drm/xe/tests/xe_mocs.c
index 9c774b44328e..611050d49d0b 100644
--- a/drivers/gpu/drm/xe/tests/xe_mocs.c
+++ b/drivers/gpu/drm/xe/tests/xe_mocs.c
@@ -42,8 +42,9 @@ static void read_l3cc_table(struct xe_gt *gt,
const struct xe_mocs_info *info)
{
struct kunit *test = kunit_get_current_test();
+ struct xe_force_wake_ref fw_ref;
u32 l3cc, l3cc_expected;
- unsigned int fw_ref, i;
+ unsigned int i;
u32 reg_val;
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
@@ -81,15 +82,16 @@ static void read_mocs_table(struct xe_gt *gt,
const struct xe_mocs_info *info)
{
struct kunit *test = kunit_get_current_test();
+ struct xe_force_wake_ref fw_ref;
u32 mocs, mocs_expected;
- unsigned int fw_ref, i;
+ unsigned int i;
u32 reg_val;
KUNIT_EXPECT_TRUE_MSG(test, info->unused_entries_index,
"Unused entries index should have been defined\n");
fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- KUNIT_ASSERT_NE_MSG(test, fw_ref, 0, "Forcewake Failed.\n");
+ KUNIT_ASSERT_NE_MSG(test, fw_ref.domains, 0, "Forcewake Failed.\n");
for (i = 0; i < info->num_mocs_regs; i++) {
if (regs_are_mcr(gt))
diff --git a/drivers/gpu/drm/xe/xe_mocs.c b/drivers/gpu/drm/xe/xe_mocs.c
index 73f9401d9907..7b7b71ef82f1 100644
--- a/drivers/gpu/drm/xe/xe_mocs.c
+++ b/drivers/gpu/drm/xe/xe_mocs.c
@@ -811,7 +811,8 @@ int xe_mocs_dump(struct xe_gt *gt, struct drm_printer *p)
struct xe_device *xe = gt_to_xe(gt);
enum xe_force_wake_domains domain;
struct xe_mocs_info table;
- unsigned int fw_ref, flags;
+ struct xe_force_wake_ref fw_ref;
+ unsigned int flags;
int err = 0;
flags = get_mocs_settings(xe, &table);
diff --git a/drivers/gpu/drm/xe/xe_pmu.c b/drivers/gpu/drm/xe/xe_pmu.c
index dbd95327f9fc..9c88e6159b32 100644
--- a/drivers/gpu/drm/xe/xe_pmu.c
+++ b/drivers/gpu/drm/xe/xe_pmu.c
@@ -135,7 +135,7 @@ static bool event_gt_forcewake(struct perf_event *event)
struct xe_device *xe = container_of(event->pmu, typeof(*xe), pmu.base);
u64 config = event->attr.config;
struct xe_gt *gt;
- unsigned int *fw_ref;
+ struct xe_force_wake_ref *fw_ref;
if (!is_engine_event(config) && !is_gt_frequency_event(event))
return true;
@@ -147,7 +147,7 @@ static bool event_gt_forcewake(struct perf_event *event)
return false;
*fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!*fw_ref) {
+ if (!fw_ref->domains) {
kfree(fw_ref);
return false;
}
--
2.51.1
* [PATCH 07/33] drm/xe/forcewake: Add scope-based cleanup for forcewake
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Since forcewake uses a reference counting get/put model, there are many
places where we need to be careful to drop the forcewake reference when
bailing out of a function early on an error path. Add scope-based
cleanup options that can be used in place of explicit get/put to help
prevent mistakes in this area.
Examples:
CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
Obtain forcewake on the XE_FW_GT domain and hold it until the
end of the current block. The wakeref will be dropped
automatically when the current scope is exited by any means
(return, break, reaching the end of the block, etc.).
xe_with_force_wake(fw_ref, gt_to_fw(ss->gt), XE_FORCEWAKE_ALL) {
...
}
Hold all forcewake domains for the following block. As with the
CLASS usage, forcewake will be dropped automatically when the
block is exited by any means.
Use of these cleanup helpers should allow us to remove some ugly
goto-based error handling and help avoid mistakes in functions with lots
of early error exits.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_force_wake.h | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_force_wake.h b/drivers/gpu/drm/xe/xe_force_wake.h
index 86e9bca7cac9..14f32bdaa10e 100644
--- a/drivers/gpu/drm/xe/xe_force_wake.h
+++ b/drivers/gpu/drm/xe/xe_force_wake.h
@@ -62,4 +62,16 @@ xe_force_wake_ref_has_domain(struct xe_force_wake_ref fw_ref,
return fw_ref.domains & domain;
}
+DEFINE_CLASS(xe_force_wake, struct xe_force_wake_ref,
+ xe_force_wake_put(_T), xe_force_wake_get(fw, domains),
+ struct xe_force_wake *fw, unsigned int domains);
+
+/*
+ * Scoped helper for the forcewake class, using the same trick as scoped_guard()
+ * to bind the lifetime to the next statement/block.
+ */
+#define xe_with_force_wake(ref, fw, domains) \
+ for (CLASS(xe_force_wake, ref)(fw, domains), *done = NULL; \
+ !done; done = (void *)1)
+
#endif
--
2.51.1
* [PATCH 08/33] drm/xe/pm: Add scope-based cleanup helper for runtime PM
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Add scope-based helpers for runtime PM that may be used to simplify
cleanup logic and potentially avoid goto-based cleanup.
For example, using
guard(xe_pm_runtime)(xe);
will get runtime PM and cause a corresponding put to occur automatically
when the current scope is exited. 'xe_pm_runtime_noresume' can be used
as a guard replacement for the corresponding 'noresume' variant.
There's also an xe_pm_runtime_ioctl conditional guard that can be used
as a replacement for xe_pm_runtime_get_ioctl():
ACQUIRE(xe_pm_runtime_ioctl, pm)(xe);
if ((ret = ACQUIRE_ERR(xe_pm_runtime_ioctl, &pm)) < 0)
/* failed */
In a few rare cases (such as gt_reset_worker()) we need to ensure that
runtime PM is dropped when the function is exited by any means
(including error paths), but the function does not need to acquire
runtime PM because that has already been done earlier by a different
function. For these special cases, an 'xe_pm_runtime_release_only'
guard can be used to handle the release without doing an acquisition.
These guards will be used in future patches to eliminate some of our
goto-based cleanup.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_pm.h | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_pm.h b/drivers/gpu/drm/xe/xe_pm.h
index f7f89a18b6fc..c2cde906aeaf 100644
--- a/drivers/gpu/drm/xe/xe_pm.h
+++ b/drivers/gpu/drm/xe/xe_pm.h
@@ -6,6 +6,7 @@
#ifndef _XE_PM_H_
#define _XE_PM_H_
+#include <linux/cleanup.h>
#include <linux/pm_runtime.h>
#define DEFAULT_VRAM_THRESHOLD 300 /* in MB */
@@ -37,4 +38,20 @@ int xe_pm_block_on_suspend(struct xe_device *xe);
void xe_pm_might_block_on_suspend(void);
int xe_pm_module_init(void);
+static inline void __xe_pm_runtime_noop(struct xe_device *xe) {}
+
+DEFINE_GUARD(xe_pm_runtime, struct xe_device *,
+ xe_pm_runtime_get(_T), xe_pm_runtime_put(_T))
+DEFINE_GUARD(xe_pm_runtime_noresume, struct xe_device *,
+ xe_pm_runtime_get_noresume(_T), xe_pm_runtime_put(_T))
+DEFINE_GUARD_COND(xe_pm_runtime, _ioctl, xe_pm_runtime_get_ioctl(_T))
+
+/*
+ * Used when a function needs to release runtime PM in all possible cases
+ * and error paths, but the wakeref was already acquired by a different
+ * function (i.e., get() has already happened so only a put() is needed).
+ */
+DEFINE_GUARD(xe_pm_runtime_release_only, struct xe_device *,
+ __xe_pm_runtime_noop(_T), xe_pm_runtime_put(_T));
+
#endif
--
2.51.1
* [PATCH 09/33] drm/xe/gt: Use scope-based cleanup
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Using scope-based cleanup for forcewake and runtime PM allows us to
reduce or eliminate some of the goto-based error handling and simplify
several functions.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_gt.c | 141 +++++++++++--------------------------
1 file changed, 43 insertions(+), 98 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
index d39bf8cb64eb..9111d7d60e33 100644
--- a/drivers/gpu/drm/xe/xe_gt.c
+++ b/drivers/gpu/drm/xe/xe_gt.c
@@ -103,13 +103,12 @@ void xe_gt_sanitize(struct xe_gt *gt)
static void xe_gt_enable_host_l2_vram(struct xe_gt *gt)
{
- struct xe_force_wake_ref fw_ref;
u32 reg;
if (!XE_GT_WA(gt, 16023588340))
return;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
if (!fw_ref.domains)
return;
@@ -120,12 +119,10 @@ static void xe_gt_enable_host_l2_vram(struct xe_gt *gt)
}
xe_gt_mcr_multicast_write(gt, XEHPC_L3CLOS_MASK(3), 0xF);
- xe_force_wake_put(fw_ref);
}
static void xe_gt_disable_host_l2_vram(struct xe_gt *gt)
{
- struct xe_force_wake_ref fw_ref;
u32 reg;
if (!XE_GT_WA(gt, 16023588340))
@@ -134,15 +131,13 @@ static void xe_gt_disable_host_l2_vram(struct xe_gt *gt)
if (xe_gt_is_media_type(gt))
return;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
if (!fw_ref.domains)
return;
reg = xe_gt_mcr_unicast_read_any(gt, XE2_GAMREQSTRM_CTRL);
reg &= ~CG_DIS_CNTLBUS;
xe_gt_mcr_multicast_write(gt, XE2_GAMREQSTRM_CTRL, reg);
-
- xe_force_wake_put(fw_ref);
}
static void gt_reset_worker(struct work_struct *w);
@@ -389,7 +384,6 @@ int xe_gt_record_default_lrcs(struct xe_gt *gt)
int xe_gt_init_early(struct xe_gt *gt)
{
- struct xe_force_wake_ref fw_ref;
int err;
if (IS_SRIOV_PF(gt_to_xe(gt))) {
@@ -436,13 +430,12 @@ int xe_gt_init_early(struct xe_gt *gt)
if (err)
return err;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
if (!fw_ref.domains)
return -ETIMEDOUT;
xe_gt_mcr_init_early(gt);
xe_pat_init(gt);
- xe_force_wake_put(fw_ref);
return 0;
}
@@ -460,16 +453,15 @@ static void dump_pat_on_error(struct xe_gt *gt)
static int gt_init_with_gt_forcewake(struct xe_gt *gt)
{
- struct xe_force_wake_ref fw_ref;
int err;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
if (!fw_ref.domains)
return -ETIMEDOUT;
err = xe_uc_init(&gt->uc);
if (err)
- goto err_force_wake;
+ return err;
xe_gt_topology_init(gt);
xe_gt_mcr_init(gt);
@@ -478,7 +470,7 @@ static int gt_init_with_gt_forcewake(struct xe_gt *gt)
if (xe_gt_is_main_type(gt)) {
err = xe_ggtt_init(gt_to_tile(gt)->mem.ggtt);
if (err)
- goto err_force_wake;
+ return err;
if (IS_SRIOV_PF(gt_to_xe(gt)))
xe_lmtt_init(&gt_to_tile(gt)->sriov.pf.lmtt);
}
@@ -492,17 +484,17 @@ static int gt_init_with_gt_forcewake(struct xe_gt *gt)
err = xe_hw_engines_init_early(gt);
if (err) {
dump_pat_on_error(gt);
- goto err_force_wake;
+ return err;
}
err = xe_hw_engine_class_sysfs_init(gt);
if (err)
- goto err_force_wake;
+ return err;
/* Initialize CCS mode sysfs after early initialization of HW engines */
err = xe_gt_ccs_mode_sysfs_init(gt);
if (err)
- goto err_force_wake;
+ return err;
/*
* Stash hardware-reported version. Since this register does not exist
@@ -510,25 +502,16 @@ static int gt_init_with_gt_forcewake(struct xe_gt *gt)
*/
gt->info.gmdid = xe_mmio_read32(&gt->mmio, GMD_ID);
- xe_force_wake_put(fw_ref);
return 0;
-
-err_force_wake:
- xe_force_wake_put(fw_ref);
-
- return err;
}
static int gt_init_with_all_forcewake(struct xe_gt *gt)
{
- struct xe_force_wake_ref fw_ref;
int err;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
- err = -ETIMEDOUT;
- goto err_force_wake;
- }
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+ if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
+ return -ETIMEDOUT;
xe_gt_mcr_set_implicit_defaults(gt);
xe_wa_process_gt(gt);
@@ -537,20 +520,20 @@ static int gt_init_with_all_forcewake(struct xe_gt *gt)
err = xe_gt_clock_init(gt);
if (err)
- goto err_force_wake;
+ return err;
xe_mocs_init(gt);
err = xe_execlist_init(gt);
if (err)
- goto err_force_wake;
+ return err;
err = xe_hw_engines_init(gt);
if (err)
- goto err_force_wake;
+ return err;
err = xe_uc_init_post_hwconfig(&gt->uc);
if (err)
- goto err_force_wake;
+ return err;
if (xe_gt_is_main_type(gt)) {
/*
@@ -561,10 +544,8 @@ static int gt_init_with_all_forcewake(struct xe_gt *gt)
gt->usm.bb_pool = xe_sa_bo_manager_init(gt_to_tile(gt),
IS_DGFX(xe) ? SZ_1M : SZ_512K, 16);
- if (IS_ERR(gt->usm.bb_pool)) {
- err = PTR_ERR(gt->usm.bb_pool);
- goto err_force_wake;
- }
+ if (IS_ERR(gt->usm.bb_pool))
+ return PTR_ERR(gt->usm.bb_pool);
}
}
@@ -573,12 +554,12 @@ static int gt_init_with_all_forcewake(struct xe_gt *gt)
err = xe_migrate_init(tile->migrate);
if (err)
- goto err_force_wake;
+ return err;
}
err = xe_uc_load_hw(&gt->uc);
if (err)
- goto err_force_wake;
+ return err;
/* Configure default CCS mode of 1 engine with all resources */
if (xe_gt_ccs_mode_enabled(gt)) {
@@ -592,14 +573,7 @@ static int gt_init_with_all_forcewake(struct xe_gt *gt)
if (IS_SRIOV_PF(gt_to_xe(gt)))
xe_gt_sriov_pf_init_hw(gt);
- xe_force_wake_put(fw_ref);
-
return 0;
-
-err_force_wake:
- xe_force_wake_put(fw_ref);
-
- return err;
}
static void xe_gt_fini(void *arg)
@@ -819,15 +793,17 @@ static int do_gt_restart(struct xe_gt *gt)
static void gt_reset_worker(struct work_struct *w)
{
struct xe_gt *gt = container_of(w, typeof(*gt), reset.worker);
- struct xe_force_wake_ref fw_ref;
int err;
+ /* Drop the existing runtime PM reference when exiting this function */
+ guard(xe_pm_runtime_release_only)(gt_to_xe(gt));
+
if (xe_device_wedged(gt_to_xe(gt)))
- goto err_pm_put;
+ return;
/* We only support GT resets with GuC submission */
if (!xe_device_uc_enabled(gt_to_xe(gt)))
- goto err_pm_put;
+ return;
xe_gt_info(gt, "reset started\n");
@@ -838,7 +814,7 @@ static void gt_reset_worker(struct work_struct *w)
xe_gt_sanitize(gt);
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
err = -ETIMEDOUT;
goto err_out;
@@ -863,25 +839,16 @@ static void gt_reset_worker(struct work_struct *w)
if (err)
goto err_out;
- xe_force_wake_put(fw_ref);
-
- /* Pair with get while enqueueing the work in xe_gt_reset_async() */
- xe_pm_runtime_put(gt_to_xe(gt));
-
xe_gt_info(gt, "reset done\n");
return;
err_out:
- xe_force_wake_put(fw_ref);
XE_WARN_ON(xe_uc_start(&gt->uc));
err_fail:
xe_gt_err(gt, "reset failed (%pe)\n", ERR_PTR(err));
xe_device_declare_wedged(gt_to_xe(gt));
-
-err_pm_put:
- xe_pm_runtime_put(gt_to_xe(gt));
}
void xe_gt_reset_async(struct xe_gt *gt)
@@ -902,56 +869,42 @@ void xe_gt_reset_async(struct xe_gt *gt)
void xe_gt_suspend_prepare(struct xe_gt *gt)
{
- struct xe_force_wake_ref fw_ref;
-
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
-
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
xe_uc_suspend_prepare(&gt->uc);
-
- xe_force_wake_put(fw_ref);
}
int xe_gt_suspend(struct xe_gt *gt)
{
- struct xe_force_wake_ref fw_ref;
int err;
xe_gt_dbg(gt, "suspending\n");
xe_gt_sanitize(gt);
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
- goto err_msg;
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+ if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
+ xe_gt_err(gt, "suspend failed (%pe)\n", ERR_PTR(-ETIMEDOUT));
+ return -ETIMEDOUT;
+ }
err = xe_uc_suspend(&gt->uc);
- if (err)
- goto err_force_wake;
+ if (err) {
+ xe_gt_err(gt, "suspend failed (%pe)\n", ERR_PTR(err));
+ return err;
+ }
xe_gt_idle_disable_pg(gt);
xe_gt_disable_host_l2_vram(gt);
- xe_force_wake_put(fw_ref);
xe_gt_dbg(gt, "suspended\n");
return 0;
-
-err_msg:
- err = -ETIMEDOUT;
-err_force_wake:
- xe_force_wake_put(fw_ref);
- xe_gt_err(gt, "suspend failed (%pe)\n", ERR_PTR(err));
-
- return err;
}
void xe_gt_shutdown(struct xe_gt *gt)
{
- struct xe_force_wake_ref fw_ref;
-
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
do_gt_reset(gt);
- xe_force_wake_put(fw_ref);
}
/**
@@ -976,32 +929,24 @@ int xe_gt_sanitize_freq(struct xe_gt *gt)
int xe_gt_resume(struct xe_gt *gt)
{
- struct xe_force_wake_ref fw_ref;
int err;
xe_gt_dbg(gt, "resuming\n");
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
- goto err_msg;
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+ if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
+ xe_gt_err(gt, "resume failed (%pe)\n", ERR_PTR(-ETIMEDOUT));
+ return -ETIMEDOUT;
+ }
err = do_gt_restart(gt);
if (err)
- goto err_force_wake;
+ return err;
xe_gt_idle_enable_pg(gt);
- xe_force_wake_put(fw_ref);
xe_gt_dbg(gt, "resumed\n");
return 0;
-
-err_msg:
- err = -ETIMEDOUT;
-err_force_wake:
- xe_force_wake_put(fw_ref);
- xe_gt_err(gt, "resume failed (%pe)\n", ERR_PTR(err));
-
- return err;
}
struct xe_hw_engine *xe_gt_hw_engine(struct xe_gt *gt,
--
2.51.1
* [PATCH 10/33] drm/xe/gt_idle: Use scope-based cleanup
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based cleanup for runtime PM and forcewake in the GT idle
code.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_gt_idle.c | 28 +++++++---------------------
1 file changed, 7 insertions(+), 21 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_gt_idle.c b/drivers/gpu/drm/xe/xe_gt_idle.c
index 503aeabbe4c3..6a63b7ad69a7 100644
--- a/drivers/gpu/drm/xe/xe_gt_idle.c
+++ b/drivers/gpu/drm/xe/xe_gt_idle.c
@@ -103,7 +103,6 @@ void xe_gt_idle_enable_pg(struct xe_gt *gt)
struct xe_gt_idle *gtidle = &gt->gtidle;
struct xe_mmio *mmio = &gt->mmio;
u32 vcs_mask, vecs_mask;
- struct xe_force_wake_ref fw_ref;
int i, j;
if (IS_SRIOV_VF(xe))
@@ -135,7 +134,7 @@ void xe_gt_idle_enable_pg(struct xe_gt *gt)
}
}
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
if (xe->info.skip_guc_pc) {
/*
* GuC sets the hysteresis value when GuC PC is enabled
@@ -146,13 +145,11 @@ void xe_gt_idle_enable_pg(struct xe_gt *gt)
}
xe_mmio_write32(mmio, POWERGATE_ENABLE, gtidle->powergate_enable);
- xe_force_wake_put(fw_ref);
}
void xe_gt_idle_disable_pg(struct xe_gt *gt)
{
struct xe_gt_idle *gtidle = &gt->gtidle;
- struct xe_force_wake_ref fw_ref;
if (IS_SRIOV_VF(gt_to_xe(gt)))
return;
@@ -160,9 +157,8 @@ void xe_gt_idle_disable_pg(struct xe_gt *gt)
xe_device_assert_mem_access(gt_to_xe(gt));
gtidle->powergate_enable = 0;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
xe_mmio_write32(&gt->mmio, POWERGATE_ENABLE, gtidle->powergate_enable);
- xe_force_wake_put(fw_ref);
}
/**
@@ -181,7 +177,6 @@ int xe_gt_idle_pg_print(struct xe_gt *gt, struct drm_printer *p)
enum xe_gt_idle_state state;
u32 pg_enabled, pg_status = 0;
u32 vcs_mask, vecs_mask;
- struct xe_force_wake_ref fw_ref;
int n;
/*
* Media Slices
@@ -218,14 +213,12 @@ int xe_gt_idle_pg_print(struct xe_gt *gt, struct drm_printer *p)
/* Do not wake the GT to read powergating status */
if (state != GT_IDLE_C6) {
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
if (!fw_ref.domains)
return -ETIMEDOUT;
pg_enabled = xe_mmio_read32(&gt->mmio, POWERGATE_ENABLE);
pg_status = xe_mmio_read32(&gt->mmio, POWERGATE_DOMAIN_STATUS);
-
- xe_force_wake_put(fw_ref);
}
if (gt->info.engine_mask & XE_HW_ENGINE_RCS_MASK) {
@@ -265,9 +258,8 @@ static ssize_t name_show(struct kobject *kobj,
struct xe_guc_pc *pc = gtidle_to_pc(gtidle);
ssize_t ret;
- xe_pm_runtime_get(pc_to_xe(pc));
+ guard(xe_pm_runtime)(pc_to_xe(pc));
ret = sysfs_emit(buff, "%s\n", gtidle->name);
- xe_pm_runtime_put(pc_to_xe(pc));
return ret;
}
@@ -281,9 +273,8 @@ static ssize_t idle_status_show(struct kobject *kobj,
struct xe_guc_pc *pc = gtidle_to_pc(gtidle);
enum xe_gt_idle_state state;
- xe_pm_runtime_get(pc_to_xe(pc));
+ guard(xe_pm_runtime)(pc_to_xe(pc));
state = gtidle->idle_status(pc);
- xe_pm_runtime_put(pc_to_xe(pc));
return sysfs_emit(buff, "%s\n", gt_idle_state_to_string(state));
}
@@ -311,9 +302,8 @@ static ssize_t idle_residency_ms_show(struct kobject *kobj,
struct xe_guc_pc *pc = gtidle_to_pc(gtidle);
u64 residency;
- xe_pm_runtime_get(pc_to_xe(pc));
+ guard(xe_pm_runtime)(pc_to_xe(pc));
residency = xe_gt_idle_residency_msec(gtidle);
- xe_pm_runtime_put(pc_to_xe(pc));
return sysfs_emit(buff, "%llu\n", residency);
}
@@ -396,21 +386,17 @@ void xe_gt_idle_enable_c6(struct xe_gt *gt)
int xe_gt_idle_disable_c6(struct xe_gt *gt)
{
- struct xe_force_wake_ref fw_ref;
-
xe_device_assert_mem_access(gt_to_xe(gt));
if (IS_SRIOV_VF(gt_to_xe(gt)))
return 0;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
if (!fw_ref.domains)
return -ETIMEDOUT;
xe_mmio_write32(&gt->mmio, RC_CONTROL, 0);
xe_mmio_write32(&gt->mmio, RC_STATE, 0);
- xe_force_wake_put(fw_ref);
-
return 0;
}
--
2.51.1
* [PATCH 11/33] drm/xe/guc: Use scope-based cleanup
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based cleanup for forcewake and runtime PM.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_guc.c | 11 +++--------
drivers/gpu/drm/xe/xe_guc_log.c | 10 ++++------
drivers/gpu/drm/xe/xe_guc_submit.c | 9 ++-------
drivers/gpu/drm/xe/xe_guc_tlb_inval.c | 4 +---
4 files changed, 10 insertions(+), 24 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_guc.c b/drivers/gpu/drm/xe/xe_guc.c
index edff47a3235a..e47292b2aab0 100644
--- a/drivers/gpu/drm/xe/xe_guc.c
+++ b/drivers/gpu/drm/xe/xe_guc.c
@@ -658,11 +658,9 @@ static void guc_fini_hw(void *arg)
{
struct xe_guc *guc = arg;
struct xe_gt *gt = guc_to_gt(guc);
- struct xe_force_wake_ref fw_ref;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
- xe_uc_sanitize_reset(&guc_to_gt(guc)->uc);
- xe_force_wake_put(fw_ref);
+ xe_with_force_wake(fw_ref, gt_to_fw(gt), XE_FORCEWAKE_ALL)
+ xe_uc_sanitize_reset(&guc_to_gt(guc)->uc);
guc_g2g_fini(guc);
}
@@ -1610,14 +1608,13 @@ int xe_guc_start(struct xe_guc *guc)
void xe_guc_print_info(struct xe_guc *guc, struct drm_printer *p)
{
struct xe_gt *gt = guc_to_gt(guc);
- struct xe_force_wake_ref fw_ref;
u32 status;
int i;
xe_uc_fw_print(&guc->fw, p);
if (!IS_SRIOV_VF(gt_to_xe(gt))) {
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
if (!fw_ref.domains)
return;
@@ -1638,8 +1635,6 @@ void xe_guc_print_info(struct xe_guc *guc, struct drm_printer *p)
drm_printf(p, "\t%2d: \t0x%x\n",
i, xe_mmio_read32(&gt->mmio, SOFT_SCRATCH(i)));
}
-
- xe_force_wake_put(fw_ref);
}
drm_puts(p, "\n");
diff --git a/drivers/gpu/drm/xe/xe_guc_log.c b/drivers/gpu/drm/xe/xe_guc_log.c
index f1d25e542f98..0c704a11078a 100644
--- a/drivers/gpu/drm/xe/xe_guc_log.c
+++ b/drivers/gpu/drm/xe/xe_guc_log.c
@@ -145,7 +145,6 @@ struct xe_guc_log_snapshot *xe_guc_log_snapshot_capture(struct xe_guc_log *log,
struct xe_device *xe = log_to_xe(log);
struct xe_guc *guc = log_to_guc(log);
struct xe_gt *gt = log_to_gt(log);
- struct xe_force_wake_ref fw_ref;
size_t remain;
int i;
@@ -165,13 +164,12 @@ struct xe_guc_log_snapshot *xe_guc_log_snapshot_capture(struct xe_guc_log *log,
remain -= size;
}
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!fw_ref.domains) {
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
+ if (!fw_ref.domains)
snapshot->stamp = ~0ULL;
- } else {
+ else
snapshot->stamp = xe_mmio_read64_2x32(&gt->mmio, GUC_PMTIMESTAMP_LO);
- xe_force_wake_put(fw_ref);
- }
+
snapshot->ktime = ktime_get_boottime_ns();
snapshot->level = log->level;
snapshot->ver_found = guc->fw.versions.found[XE_UC_FW_VER_RELEASE];
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index 40514c270d6b..040b18792ac4 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -1225,7 +1225,6 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
struct xe_guc *guc = exec_queue_to_guc(q);
const char *process_name = "no process";
struct xe_device *xe = guc_to_xe(guc);
- struct xe_force_wake_ref fw_ref;
int err = -ETIME;
pid_t pid = -1;
int i = 0;
@@ -1258,13 +1257,11 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
if (!exec_queue_killed(q) && !xe->devcoredump.captured &&
!xe_guc_capture_get_matching_and_lock(q)) {
/* take force wake before engine register manual capture */
- fw_ref = xe_force_wake_get(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
xe_gt_info(q->gt, "failed to get forcewake for coredump capture\n");
xe_engine_snapshot_capture_for_queue(q);
-
- xe_force_wake_put(fw_ref);
}
/*
@@ -1455,7 +1452,7 @@ static void __guc_exec_queue_destroy_async(struct work_struct *w)
struct xe_exec_queue *q = ge->q;
struct xe_guc *guc = exec_queue_to_guc(q);
- xe_pm_runtime_get(guc_to_xe(guc));
+ guard(xe_pm_runtime)(guc_to_xe(guc));
trace_xe_exec_queue_destroy(q);
if (xe_exec_queue_is_lr(q))
@@ -1464,8 +1461,6 @@ static void __guc_exec_queue_destroy_async(struct work_struct *w)
cancel_delayed_work_sync(&ge->sched.base.work_tdr);
xe_exec_queue_fini(q);
-
- xe_pm_runtime_put(guc_to_xe(guc));
}
static void guc_exec_queue_destroy_async(struct xe_exec_queue *q)
diff --git a/drivers/gpu/drm/xe/xe_guc_tlb_inval.c b/drivers/gpu/drm/xe/xe_guc_tlb_inval.c
index 321e5072b43e..848d3493df10 100644
--- a/drivers/gpu/drm/xe/xe_guc_tlb_inval.c
+++ b/drivers/gpu/drm/xe/xe_guc_tlb_inval.c
@@ -71,12 +71,11 @@ static int send_tlb_inval_ggtt(struct xe_tlb_inval *tlb_inval, u32 seqno)
return send_tlb_inval(guc, action, ARRAY_SIZE(action));
} else if (xe_device_uc_enabled(xe) && !xe_device_wedged(xe)) {
struct xe_mmio *mmio = &gt->mmio;
- struct xe_force_wake_ref fw_ref;
if (IS_SRIOV_VF(xe))
return -ECANCELED;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
if (xe->info.platform == XE_PVC || GRAPHICS_VER(xe) >= 20) {
xe_mmio_write32(mmio, PVC_GUC_TLB_INV_DESC1,
PVC_GUC_TLB_INV_DESC1_INVALIDATE);
@@ -86,7 +85,6 @@ static int send_tlb_inval_ggtt(struct xe_tlb_inval *tlb_inval, u32 seqno)
xe_mmio_write32(mmio, GUC_TLB_INV_CR,
GUC_TLB_INV_CR_INVALIDATE);
}
- xe_force_wake_put(fw_ref);
}
return -ECANCELED;
--
2.51.1
* [PATCH 12/33] drm/xe/guc_pc: Use scope-based cleanup
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based cleanup for forcewake and runtime PM in the GuC PC code.
This allows us to eliminate the goto-based cleanup and simplify some
other functions.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_guc_pc.c | 62 ++++++++++------------------------
1 file changed, 17 insertions(+), 45 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_guc_pc.c b/drivers/gpu/drm/xe/xe_guc_pc.c
index 034c87d7bf10..354c1454c7c0 100644
--- a/drivers/gpu/drm/xe/xe_guc_pc.c
+++ b/drivers/gpu/drm/xe/xe_guc_pc.c
@@ -511,21 +511,17 @@ u32 xe_guc_pc_get_cur_freq_fw(struct xe_guc_pc *pc)
int xe_guc_pc_get_cur_freq(struct xe_guc_pc *pc, u32 *freq)
{
struct xe_gt *gt = pc_to_gt(pc);
- struct xe_force_wake_ref fw_ref;
/*
* GuC SLPC plays with cur freq request when GuCRC is enabled
* Block RC6 for a more reliable read.
*/
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FW_GT)) {
- xe_force_wake_put(fw_ref);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
+ if (!xe_force_wake_ref_has_domain(fw_ref, XE_FW_GT))
return -ETIMEDOUT;
- }
*freq = get_cur_freq(gt);
- xe_force_wake_put(fw_ref);
return 0;
}
@@ -1085,13 +1081,8 @@ int xe_guc_pc_gucrc_disable(struct xe_guc_pc *pc)
*/
int xe_guc_pc_override_gucrc_mode(struct xe_guc_pc *pc, enum slpc_gucrc_mode mode)
{
- int ret;
-
- xe_pm_runtime_get(pc_to_xe(pc));
- ret = pc_action_set_param(pc, SLPC_PARAM_PWRGATE_RC_MODE, mode);
- xe_pm_runtime_put(pc_to_xe(pc));
-
- return ret;
+ guard(xe_pm_runtime)(pc_to_xe(pc));
+ return pc_action_set_param(pc, SLPC_PARAM_PWRGATE_RC_MODE, mode);
}
/**
@@ -1102,13 +1093,8 @@ int xe_guc_pc_override_gucrc_mode(struct xe_guc_pc *pc, enum slpc_gucrc_mode mod
*/
int xe_guc_pc_unset_gucrc_mode(struct xe_guc_pc *pc)
{
- int ret;
-
- xe_pm_runtime_get(pc_to_xe(pc));
- ret = pc_action_unset_param(pc, SLPC_PARAM_PWRGATE_RC_MODE);
- xe_pm_runtime_put(pc_to_xe(pc));
-
- return ret;
+ guard(xe_pm_runtime)(pc_to_xe(pc));
+ return pc_action_unset_param(pc, SLPC_PARAM_PWRGATE_RC_MODE);
}
static void pc_init_pcode_freq(struct xe_guc_pc *pc)
@@ -1198,7 +1184,7 @@ int xe_guc_pc_set_power_profile(struct xe_guc_pc *pc, const char *buf)
return -EINVAL;
guard(mutex)(&pc->freq_lock);
- xe_pm_runtime_get_noresume(pc_to_xe(pc));
+ guard(xe_pm_runtime_noresume)(pc_to_xe(pc));
ret = pc_action_set_param(pc,
SLPC_PARAM_POWER_PROFILE,
@@ -1209,8 +1195,6 @@ int xe_guc_pc_set_power_profile(struct xe_guc_pc *pc, const char *buf)
else
pc->power_profile = val;
- xe_pm_runtime_put(pc_to_xe(pc));
-
return ret;
}
@@ -1223,17 +1207,14 @@ int xe_guc_pc_start(struct xe_guc_pc *pc)
struct xe_device *xe = pc_to_xe(pc);
struct xe_gt *gt = pc_to_gt(pc);
u32 size = PAGE_ALIGN(sizeof(struct slpc_shared_data));
- struct xe_force_wake_ref fw_ref;
ktime_t earlier;
int ret;
xe_gt_assert(gt, xe_device_uc_enabled(xe));
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FW_GT)) {
- xe_force_wake_put(fw_ref);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
+ if (!xe_force_wake_ref_has_domain(fw_ref, XE_FW_GT))
return -ETIMEDOUT;
- }
if (xe->info.skip_guc_pc) {
if (xe->info.platform != XE_PVC)
@@ -1241,9 +1222,7 @@ int xe_guc_pc_start(struct xe_guc_pc *pc)
/* Request max possible since dynamic freq mgmt is not enabled */
pc_set_cur_freq(pc, UINT_MAX);
-
- ret = 0;
- goto out;
+ return 0;
}
xe_map_memset(xe, &pc->bo->vmap, 0, 0, size);
@@ -1252,7 +1231,7 @@ int xe_guc_pc_start(struct xe_guc_pc *pc)
earlier = ktime_get();
ret = pc_action_reset(pc);
if (ret)
- goto out;
+ return ret;
if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING,
SLPC_RESET_TIMEOUT_MS)) {
@@ -1263,8 +1242,7 @@ int xe_guc_pc_start(struct xe_guc_pc *pc)
if (wait_for_pc_state(pc, SLPC_GLOBAL_STATE_RUNNING,
SLPC_RESET_EXTENDED_TIMEOUT_MS)) {
xe_gt_err(gt, "GuC PC Start failed: Dynamic GT frequency control and GT sleep states are now disabled.\n");
- ret = -EIO;
- goto out;
+ return -EIO;
}
xe_gt_warn(gt, "GuC PC excessive start time: %lldms",
@@ -1273,21 +1251,20 @@ int xe_guc_pc_start(struct xe_guc_pc *pc)
ret = pc_init_freqs(pc);
if (ret)
- goto out;
+ return ret;
ret = pc_set_mert_freq_cap(pc);
if (ret)
- goto out;
+ return ret;
if (xe->info.platform == XE_PVC) {
xe_guc_pc_gucrc_disable(pc);
- ret = 0;
- goto out;
+ return 0;
}
ret = pc_action_setup_gucrc(pc, GUCRC_FIRMWARE_CONTROL);
if (ret)
- goto out;
+ return ret;
/* Enable SLPC Optimized Strategy for compute */
ret = pc_action_set_strategy(pc, SLPC_OPTIMIZED_STRATEGY_COMPUTE);
@@ -1297,8 +1274,6 @@ int xe_guc_pc_start(struct xe_guc_pc *pc)
if (unlikely(ret))
xe_gt_err(gt, "Failed to set SLPC power profile: %pe\n", ERR_PTR(ret));
-out:
- xe_force_wake_put(fw_ref);
return ret;
}
@@ -1330,19 +1305,16 @@ static void xe_guc_pc_fini_hw(void *arg)
{
struct xe_guc_pc *pc = arg;
struct xe_device *xe = pc_to_xe(pc);
- struct xe_force_wake_ref fw_ref;
if (xe_device_wedged(xe))
return;
- fw_ref = xe_force_wake_get(gt_to_fw(pc_to_gt(pc)), XE_FORCEWAKE_ALL);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(pc_to_gt(pc)), XE_FORCEWAKE_ALL);
xe_guc_pc_gucrc_disable(pc);
XE_WARN_ON(xe_guc_pc_stop(pc));
/* Bind requested freq to mert_freq_cap before unload */
pc_set_cur_freq(pc, min(pc_max_freq_cap(pc), pc->rpe_freq));
-
- xe_force_wake_put(fw_ref);
}
/**
--
2.51.1
* [PATCH 13/33] drm/xe/mocs: Use scope-based cleanup
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
` (11 preceding siblings ...)
2025-11-07 18:13 ` [PATCH 12/33] drm/xe/guc_pc: " Matt Roper
@ 2025-11-07 18:13 ` Matt Roper
2025-11-07 18:13 ` [PATCH 14/33] drm/xe/pat: Use scope-based forcewake Matt Roper
` (24 subsequent siblings)
37 siblings, 0 replies; 44+ messages in thread
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Using scope-based cleanup for runtime PM and forcewake in the MOCS code
allows us to eliminate some goto-based error handling and simplify some
other functions.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/tests/xe_mocs.c | 13 +++----------
drivers/gpu/drm/xe/xe_mocs.c | 17 +++++------------
2 files changed, 8 insertions(+), 22 deletions(-)
diff --git a/drivers/gpu/drm/xe/tests/xe_mocs.c b/drivers/gpu/drm/xe/tests/xe_mocs.c
index 611050d49d0b..16681a0f1f77 100644
--- a/drivers/gpu/drm/xe/tests/xe_mocs.c
+++ b/drivers/gpu/drm/xe/tests/xe_mocs.c
@@ -42,16 +42,13 @@ static void read_l3cc_table(struct xe_gt *gt,
const struct xe_mocs_info *info)
{
struct kunit *test = kunit_get_current_test();
- struct xe_force_wake_ref fw_ref;
u32 l3cc, l3cc_expected;
unsigned int i;
u32 reg_val;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
- xe_force_wake_put(fw_ref);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+ if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
KUNIT_ASSERT_TRUE_MSG(test, true, "Forcewake Failed.\n");
- }
for (i = 0; i < info->num_mocs_regs; i++) {
if (!(i & 1)) {
@@ -75,14 +72,12 @@ static void read_l3cc_table(struct xe_gt *gt,
KUNIT_EXPECT_EQ_MSG(test, l3cc_expected, l3cc,
"l3cc idx=%u has incorrect val.\n", i);
}
- xe_force_wake_put(fw_ref);
}
static void read_mocs_table(struct xe_gt *gt,
const struct xe_mocs_info *info)
{
struct kunit *test = kunit_get_current_test();
- struct xe_force_wake_ref fw_ref;
u32 mocs, mocs_expected;
unsigned int i;
u32 reg_val;
@@ -90,7 +85,7 @@ static void read_mocs_table(struct xe_gt *gt,
KUNIT_EXPECT_TRUE_MSG(test, info->unused_entries_index,
"Unused entries index should have been defined\n");
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
KUNIT_ASSERT_NE_MSG(test, fw_ref.domains, 0, "Forcewake Failed.\n");
for (i = 0; i < info->num_mocs_regs; i++) {
@@ -108,8 +103,6 @@ static void read_mocs_table(struct xe_gt *gt,
KUNIT_EXPECT_EQ_MSG(test, mocs_expected, mocs,
"mocs reg 0x%x has incorrect val.\n", i);
}
-
- xe_force_wake_put(fw_ref);
}
static int mocs_kernel_test_run_device(struct xe_device *xe)
diff --git a/drivers/gpu/drm/xe/xe_mocs.c b/drivers/gpu/drm/xe/xe_mocs.c
index 7b7b71ef82f1..1c590fe5eca3 100644
--- a/drivers/gpu/drm/xe/xe_mocs.c
+++ b/drivers/gpu/drm/xe/xe_mocs.c
@@ -811,27 +811,20 @@ int xe_mocs_dump(struct xe_gt *gt, struct drm_printer *p)
struct xe_device *xe = gt_to_xe(gt);
enum xe_force_wake_domains domain;
struct xe_mocs_info table;
- struct xe_force_wake_ref fw_ref;
unsigned int flags;
- int err = 0;
flags = get_mocs_settings(xe, &table);
domain = flags & HAS_LNCF_MOCS ? XE_FORCEWAKE_ALL : XE_FW_GT;
- xe_pm_runtime_get_noresume(xe);
- fw_ref = xe_force_wake_get(gt_to_fw(gt), domain);
- if (!xe_force_wake_ref_has_domain(fw_ref, domain)) {
- err = -ETIMEDOUT;
- goto err_fw;
- }
+ guard(xe_pm_runtime_noresume)(xe);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), domain);
+ if (!xe_force_wake_ref_has_domain(fw_ref, domain))
+ return -ETIMEDOUT;
table.ops->dump(&table, flags, gt, p);
-err_fw:
- xe_force_wake_put(fw_ref);
- xe_pm_runtime_put(xe);
- return err;
+ return 0;
}
#if IS_ENABLED(CONFIG_DRM_XE_KUNIT_TEST)
--
2.51.1
* [PATCH 14/33] drm/xe/pat: Use scope-based forcewake
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
` (12 preceding siblings ...)
2025-11-07 18:13 ` [PATCH 13/33] drm/xe/mocs: " Matt Roper
@ 2025-11-07 18:13 ` Matt Roper
2025-11-07 18:13 ` [PATCH 15/33] drm/xe/pxp: Use scope-based cleanup Matt Roper
` (23 subsequent siblings)
37 siblings, 0 replies; 44+ messages in thread
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based cleanup for forcewake in the PAT code to slightly
simplify the code.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_pat.c | 24 ++++++------------------
1 file changed, 6 insertions(+), 18 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_pat.c b/drivers/gpu/drm/xe/xe_pat.c
index 2a963b8ff807..97b5e995f7d7 100644
--- a/drivers/gpu/drm/xe/xe_pat.c
+++ b/drivers/gpu/drm/xe/xe_pat.c
@@ -233,10 +233,9 @@ static void program_pat_mcr(struct xe_gt *gt, const struct xe_pat_table_entry ta
static int xelp_dump(struct xe_gt *gt, struct drm_printer *p)
{
struct xe_device *xe = gt_to_xe(gt);
- struct xe_force_wake_ref fw_ref;
int i;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
if (!fw_ref.domains)
return -ETIMEDOUT;
@@ -250,7 +249,6 @@ static int xelp_dump(struct xe_gt *gt, struct drm_printer *p)
XELP_MEM_TYPE_STR_MAP[mem_type], pat);
}
- xe_force_wake_put(fw_ref);
return 0;
}
@@ -262,10 +260,9 @@ static const struct xe_pat_ops xelp_pat_ops = {
static int xehp_dump(struct xe_gt *gt, struct drm_printer *p)
{
struct xe_device *xe = gt_to_xe(gt);
- struct xe_force_wake_ref fw_ref;
int i;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
if (!fw_ref.domains)
return -ETIMEDOUT;
@@ -281,7 +278,6 @@ static int xehp_dump(struct xe_gt *gt, struct drm_printer *p)
XELP_MEM_TYPE_STR_MAP[mem_type], pat);
}
- xe_force_wake_put(fw_ref);
return 0;
}
@@ -293,10 +289,9 @@ static const struct xe_pat_ops xehp_pat_ops = {
static int xehpc_dump(struct xe_gt *gt, struct drm_printer *p)
{
struct xe_device *xe = gt_to_xe(gt);
- struct xe_force_wake_ref fw_ref;
int i;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
if (!fw_ref.domains)
return -ETIMEDOUT;
@@ -310,7 +305,6 @@ static int xehpc_dump(struct xe_gt *gt, struct drm_printer *p)
REG_FIELD_GET(XEHPC_CLOS_LEVEL_MASK, pat), pat);
}
- xe_force_wake_put(fw_ref);
return 0;
}
@@ -322,10 +316,9 @@ static const struct xe_pat_ops xehpc_pat_ops = {
static int xelpg_dump(struct xe_gt *gt, struct drm_printer *p)
{
struct xe_device *xe = gt_to_xe(gt);
- struct xe_force_wake_ref fw_ref;
int i;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
if (!fw_ref.domains)
return -ETIMEDOUT;
@@ -344,7 +337,6 @@ static int xelpg_dump(struct xe_gt *gt, struct drm_printer *p)
REG_FIELD_GET(XELPG_INDEX_COH_MODE_MASK, pat), pat);
}
- xe_force_wake_put(fw_ref);
return 0;
}
@@ -361,11 +353,10 @@ static const struct xe_pat_ops xelpg_pat_ops = {
static int xe2_dump(struct xe_gt *gt, struct drm_printer *p)
{
struct xe_device *xe = gt_to_xe(gt);
- struct xe_force_wake_ref fw_ref;
u32 pat;
int i;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
if (!fw_ref.domains)
return -ETIMEDOUT;
@@ -406,7 +397,6 @@ static int xe2_dump(struct xe_gt *gt, struct drm_printer *p)
REG_FIELD_GET(XE2_COH_MODE, pat),
pat);
- xe_force_wake_put(fw_ref);
return 0;
}
@@ -419,11 +409,10 @@ static const struct xe_pat_ops xe2_pat_ops = {
static int xe3p_xpc_dump(struct xe_gt *gt, struct drm_printer *p)
{
struct xe_device *xe = gt_to_xe(gt);
- struct xe_force_wake_ref fw_ref;
u32 pat;
int i;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
if (!fw_ref.domains)
return -ETIMEDOUT;
@@ -456,7 +445,6 @@ static int xe3p_xpc_dump(struct xe_gt *gt, struct drm_printer *p)
REG_FIELD_GET(XE2_COH_MODE, pat),
pat);
- xe_force_wake_put(fw_ref);
return 0;
}
--
2.51.1
* [PATCH 15/33] drm/xe/pxp: Use scope-based cleanup
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
` (13 preceding siblings ...)
2025-11-07 18:13 ` [PATCH 14/33] drm/xe/pat: Use scope-based forcewake Matt Roper
@ 2025-11-07 18:13 ` Matt Roper
2025-11-07 18:13 ` [PATCH 16/33] drm/xe/gsc: " Matt Roper
` (22 subsequent siblings)
37 siblings, 0 replies; 44+ messages in thread
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based cleanup for forcewake and runtime PM. This allows us to
eliminate some goto-based error handling and simplify other functions.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_pxp.c | 49 ++++++++++++-------------------------
1 file changed, 15 insertions(+), 34 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
index 5dc536189184..6f0dc791b4c1 100644
--- a/drivers/gpu/drm/xe/xe_pxp.c
+++ b/drivers/gpu/drm/xe/xe_pxp.c
@@ -58,10 +58,9 @@ bool xe_pxp_is_enabled(const struct xe_pxp *pxp)
static bool pxp_prerequisites_done(const struct xe_pxp *pxp)
{
struct xe_gt *gt = pxp->gt;
- struct xe_force_wake_ref fw_ref;
bool ready;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
/*
* If force_wake fails we could falsely report the prerequisites as not
@@ -77,8 +76,6 @@ static bool pxp_prerequisites_done(const struct xe_pxp *pxp)
ready = xe_huc_is_authenticated(&gt->uc.huc, XE_HUC_AUTH_VIA_GSC) &&
xe_gsc_proxy_init_done(&gt->uc.gsc);
- xe_force_wake_put(fw_ref);
-
return ready;
}
@@ -104,13 +101,12 @@ int xe_pxp_get_readiness_status(struct xe_pxp *pxp)
xe_uc_fw_status_to_error(pxp->gt->uc.gsc.fw.status))
return -EIO;
- xe_pm_runtime_get(pxp->xe);
+ guard(xe_pm_runtime)(pxp->xe);
/* PXP requires both HuC loaded and GSC proxy initialized */
if (pxp_prerequisites_done(pxp))
ret = 1;
- xe_pm_runtime_put(pxp->xe);
return ret;
}
@@ -135,35 +131,28 @@ static void pxp_invalidate_queues(struct xe_pxp *pxp);
static int pxp_terminate_hw(struct xe_pxp *pxp)
{
struct xe_gt *gt = pxp->gt;
- struct xe_force_wake_ref fw_ref;
int ret = 0;
drm_dbg(&pxp->xe->drm, "Terminating PXP\n");
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FW_GT)) {
- ret = -EIO;
- goto out;
- }
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
+ if (!xe_force_wake_ref_has_domain(fw_ref, XE_FW_GT))
+ return -EIO;
/* terminate the hw session */
ret = xe_pxp_submit_session_termination(pxp, ARB_SESSION);
if (ret)
- goto out;
+ return ret;
ret = pxp_wait_for_session_state(pxp, ARB_SESSION, false);
if (ret)
- goto out;
+ return ret;
/* Trigger full HW cleanup */
xe_mmio_write32(&gt->mmio, KCR_GLOBAL_TERMINATE, 1);
/* now we can tell the GSC to clean up its own state */
- ret = xe_pxp_submit_session_invalidation(&pxp->gsc_res, ARB_SESSION);
-
-out:
- xe_force_wake_put(fw_ref);
- return ret;
+ return xe_pxp_submit_session_invalidation(&pxp->gsc_res, ARB_SESSION);
}
static void mark_termination_in_progress(struct xe_pxp *pxp)
@@ -326,14 +315,12 @@ static int kcr_pxp_set_status(const struct xe_pxp *pxp, bool enable)
{
u32 val = enable ? _MASKED_BIT_ENABLE(KCR_INIT_ALLOW_DISPLAY_ME_WRITES) :
_MASKED_BIT_DISABLE(KCR_INIT_ALLOW_DISPLAY_ME_WRITES);
- struct xe_force_wake_ref fw_ref;
- fw_ref = xe_force_wake_get(gt_to_fw(pxp->gt), XE_FW_GT);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(pxp->gt), XE_FW_GT);
if (!xe_force_wake_ref_has_domain(fw_ref, XE_FW_GT))
return -EIO;
xe_mmio_write32(&pxp->gt->mmio, KCR_INIT, val);
- xe_force_wake_put(fw_ref);
return 0;
}
@@ -453,34 +440,28 @@ int xe_pxp_init(struct xe_device *xe)
static int __pxp_start_arb_session(struct xe_pxp *pxp)
{
int ret;
- struct xe_force_wake_ref fw_ref;
- fw_ref = xe_force_wake_get(gt_to_fw(pxp->gt), XE_FW_GT);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(pxp->gt), XE_FW_GT);
if (!xe_force_wake_ref_has_domain(fw_ref, XE_FW_GT))
return -EIO;
- if (pxp_session_is_in_play(pxp, ARB_SESSION)) {
- ret = -EEXIST;
- goto out_force_wake;
- }
+ if (pxp_session_is_in_play(pxp, ARB_SESSION))
+ return -EEXIST;
ret = xe_pxp_submit_session_init(&pxp->gsc_res, ARB_SESSION);
if (ret) {
drm_err(&pxp->xe->drm, "Failed to init PXP arb session: %pe\n", ERR_PTR(ret));
- goto out_force_wake;
+ return ret;
}
ret = pxp_wait_for_session_state(pxp, ARB_SESSION, true);
if (ret) {
drm_err(&pxp->xe->drm, "PXP ARB session failed to go in play%pe\n", ERR_PTR(ret));
- goto out_force_wake;
+ return ret;
}
drm_dbg(&pxp->xe->drm, "PXP ARB session is active\n");
-
-out_force_wake:
- xe_force_wake_put(fw_ref);
- return ret;
+ return 0;
}
/**
--
2.51.1
* [PATCH 16/33] drm/xe/gsc: Use scope-based cleanup
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
` (14 preceding siblings ...)
2025-11-07 18:13 ` [PATCH 15/33] drm/xe/pxp: Use scope-based cleanup Matt Roper
@ 2025-11-07 18:13 ` Matt Roper
2025-11-07 18:13 ` [PATCH 17/33] drm/xe/device: " Matt Roper
` (21 subsequent siblings)
37 siblings, 0 replies; 44+ messages in thread
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based cleanup for forcewake and runtime PM to eliminate some
goto-based error handling and simplify other functions.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_gsc.c | 19 +++++--------------
drivers/gpu/drm/xe/xe_gsc_proxy.c | 17 +++++++----------
2 files changed, 12 insertions(+), 24 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_gsc.c b/drivers/gpu/drm/xe/xe_gsc.c
index 59519c9023bd..aa1cec8f57bc 100644
--- a/drivers/gpu/drm/xe/xe_gsc.c
+++ b/drivers/gpu/drm/xe/xe_gsc.c
@@ -353,7 +353,6 @@ static void gsc_work(struct work_struct *work)
struct xe_gsc *gsc = container_of(work, typeof(*gsc), work);
struct xe_gt *gt = gsc_to_gt(gsc);
struct xe_device *xe = gt_to_xe(gt);
- struct xe_force_wake_ref fw_ref;
u32 actions;
int ret;
@@ -362,13 +361,12 @@ static void gsc_work(struct work_struct *work)
gsc->work_actions = 0;
spin_unlock_irq(&gsc->lock);
- xe_pm_runtime_get(xe);
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GSC);
+ guard(xe_pm_runtime)(xe);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GSC);
if (actions & GSC_ACTION_ER_COMPLETE) {
- ret = gsc_er_complete(gt);
- if (ret)
- goto out;
+ if (gsc_er_complete(gt))
+ return;
}
if (actions & GSC_ACTION_FW_LOAD) {
@@ -381,10 +379,6 @@ static void gsc_work(struct work_struct *work)
if (actions & GSC_ACTION_SW_PROXY)
xe_gsc_proxy_request_handler(gsc);
-
-out:
- xe_force_wake_put(fw_ref);
- xe_pm_runtime_put(xe);
}
void xe_gsc_hwe_irq_handler(struct xe_hw_engine *hwe, u16 intr_vec)
@@ -616,7 +610,6 @@ void xe_gsc_print_info(struct xe_gsc *gsc, struct drm_printer *p)
{
struct xe_gt *gt = gsc_to_gt(gsc);
struct xe_mmio *mmio = &gt->mmio;
- struct xe_force_wake_ref fw_ref;
xe_uc_fw_print(&gsc->fw, p);
@@ -625,7 +618,7 @@ void xe_gsc_print_info(struct xe_gsc *gsc, struct drm_printer *p)
if (!xe_uc_fw_is_enabled(&gsc->fw))
return;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GSC);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GSC);
if (!fw_ref.domains)
return;
@@ -636,6 +629,4 @@ void xe_gsc_print_info(struct xe_gsc *gsc, struct drm_printer *p)
xe_mmio_read32(mmio, HECI_FWSTS4(MTL_GSC_HECI1_BASE)),
xe_mmio_read32(mmio, HECI_FWSTS5(MTL_GSC_HECI1_BASE)),
xe_mmio_read32(mmio, HECI_FWSTS6(MTL_GSC_HECI1_BASE)));
-
- xe_force_wake_put(fw_ref);
}
diff --git a/drivers/gpu/drm/xe/xe_gsc_proxy.c b/drivers/gpu/drm/xe/xe_gsc_proxy.c
index ba1211fe5a60..e7573a0c5e5d 100644
--- a/drivers/gpu/drm/xe/xe_gsc_proxy.c
+++ b/drivers/gpu/drm/xe/xe_gsc_proxy.c
@@ -440,22 +440,19 @@ static void xe_gsc_proxy_remove(void *arg)
struct xe_gsc *gsc = arg;
struct xe_gt *gt = gsc_to_gt(gsc);
struct xe_device *xe = gt_to_xe(gt);
- struct xe_force_wake_ref fw_ref;
if (!gsc->proxy.component_added)
return;
/* disable HECI2 IRQs */
- xe_pm_runtime_get(xe);
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GSC);
- if (!fw_ref.domains)
- xe_gt_err(gt, "failed to get forcewake to disable GSC interrupts\n");
+ scoped_guard(xe_pm_runtime, xe) {
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GSC);
+ if (!fw_ref.domains)
+ xe_gt_err(gt, "failed to get forcewake to disable GSC interrupts\n");
- /* try do disable irq even if forcewake failed */
- gsc_proxy_irq_toggle(gsc, false);
-
- xe_force_wake_put(fw_ref);
- xe_pm_runtime_put(xe);
+ /* try to disable the irq even if forcewake failed */
+ gsc_proxy_irq_toggle(gsc, false);
+ }
xe_gsc_wait_for_worker_completion(gsc);
--
2.51.1
* [PATCH 17/33] drm/xe/device: Use scope-based cleanup
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
` (15 preceding siblings ...)
2025-11-07 18:13 ` [PATCH 16/33] drm/xe/gsc: " Matt Roper
@ 2025-11-07 18:13 ` Matt Roper
2025-11-07 18:13 ` [PATCH 18/33] drm/xe/devcoredump: " Matt Roper
` (20 subsequent siblings)
37 siblings, 0 replies; 44+ messages in thread
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Convert device code to use scope-based forcewake and runtime PM.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_device.c | 29 ++++++++---------------------
1 file changed, 8 insertions(+), 21 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 9ae2b29a1cab..b8ff3a120280 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -166,7 +166,7 @@ static void xe_file_close(struct drm_device *dev, struct drm_file *file)
struct xe_exec_queue *q;
unsigned long idx;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
/*
* No need for exec_queue.lock here as there is no contention for it
@@ -184,8 +184,6 @@ static void xe_file_close(struct drm_device *dev, struct drm_file *file)
xe_vm_close_and_put(vm);
xe_file_put(xef);
-
- xe_pm_runtime_put(xe);
}
static const struct drm_ioctl_desc xe_ioctls[] = {
@@ -220,10 +218,9 @@ static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
if (xe_device_wedged(xe))
return -ECANCELED;
- ret = xe_pm_runtime_get_ioctl(xe);
- if (ret >= 0)
+ ACQUIRE(xe_pm_runtime_ioctl, pm)(xe);
+ if ((ret = ACQUIRE_ERR(xe_pm_runtime_ioctl, &pm)) >= 0)
ret = drm_ioctl(file, cmd, arg);
- xe_pm_runtime_put(xe);
return ret;
}
@@ -238,10 +235,9 @@ static long xe_drm_compat_ioctl(struct file *file, unsigned int cmd, unsigned lo
if (xe_device_wedged(xe))
return -ECANCELED;
- ret = xe_pm_runtime_get_ioctl(xe);
- if (ret >= 0)
+ ACQUIRE(xe_pm_runtime_ioctl, pm)(xe);
+ if ((ret = ACQUIRE_ERR(xe_pm_runtime_ioctl, &pm)) >= 0)
ret = drm_compat_ioctl(file, cmd, arg);
- xe_pm_runtime_put(xe);
return ret;
}
@@ -775,7 +771,6 @@ ALLOW_ERROR_INJECTION(xe_device_probe_early, ERRNO); /* See xe_pci_probe() */
static int probe_has_flat_ccs(struct xe_device *xe)
{
struct xe_gt *gt;
- struct xe_force_wake_ref fw_ref;
u32 reg;
/* Always enabled/disabled, no runtime check to do */
@@ -786,7 +781,7 @@ static int probe_has_flat_ccs(struct xe_device *xe)
if (!gt)
return 0;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
if (!fw_ref.domains)
return -ETIMEDOUT;
@@ -797,8 +792,6 @@ static int probe_has_flat_ccs(struct xe_device *xe)
drm_dbg(&xe->drm,
"Flat CCS has been disabled in bios, May lead to performance impact");
- xe_force_wake_put(fw_ref);
-
return 0;
}
@@ -1034,7 +1027,6 @@ void xe_device_wmb(struct xe_device *xe)
*/
static void tdf_request_sync(struct xe_device *xe)
{
- struct xe_force_wake_ref fw_ref;
struct xe_gt *gt;
u8 id;
@@ -1042,7 +1034,7 @@ static void tdf_request_sync(struct xe_device *xe)
if (xe_gt_is_media_type(gt))
continue;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
if (!fw_ref.domains)
return;
@@ -1058,15 +1050,12 @@ static void tdf_request_sync(struct xe_device *xe)
if (xe_mmio_wait32(&gt->mmio, XE2_TDF_CTRL, TRANSIENT_FLUSH_REQUEST, 0,
150, NULL, false))
xe_gt_err_once(gt, "TD flush timeout\n");
-
- xe_force_wake_put(fw_ref);
}
}
void xe_device_l2_flush(struct xe_device *xe)
{
struct xe_gt *gt;
- struct xe_force_wake_ref fw_ref;
gt = xe_root_mmio_gt(xe);
if (!gt)
@@ -1075,7 +1064,7 @@ void xe_device_l2_flush(struct xe_device *xe)
if (!XE_GT_WA(gt, 16023588340))
return;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
if (!fw_ref.domains)
return;
@@ -1086,8 +1075,6 @@ void xe_device_l2_flush(struct xe_device *xe)
xe_gt_err_once(gt, "Global invalidation timeout\n");
spin_unlock(&gt->global_invl_lock);
-
- xe_force_wake_put(fw_ref);
}
/**
--
2.51.1
* [PATCH 18/33] drm/xe/devcoredump: Use scope-based cleanup
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
` (16 preceding siblings ...)
2025-11-07 18:13 ` [PATCH 17/33] drm/xe/device: " Matt Roper
@ 2025-11-07 18:13 ` Matt Roper
2025-11-07 18:13 ` [PATCH 19/33] drm/xe/display: Use scoped-cleanup Matt Roper
` (19 subsequent siblings)
37 siblings, 0 replies; 44+ messages in thread
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based cleanup for forcewake and runtime PM in the devcoredump
code. This eliminates some goto-based error handling and slightly
simplifies other functions.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_devcoredump.c | 26 ++++++++++----------------
1 file changed, 10 insertions(+), 16 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_devcoredump.c b/drivers/gpu/drm/xe/xe_devcoredump.c
index eb0b40ffffaa..3f58ee4b7c0f 100644
--- a/drivers/gpu/drm/xe/xe_devcoredump.c
+++ b/drivers/gpu/drm/xe/xe_devcoredump.c
@@ -276,7 +276,6 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
struct xe_devcoredump_snapshot *ss = container_of(work, typeof(*ss), work);
struct xe_devcoredump *coredump = container_of(ss, typeof(*coredump), snapshot);
struct xe_device *xe = coredump_to_xe(coredump);
- struct xe_force_wake_ref fw_ref;
/*
* NB: Despite passing a GFP_ flags parameter here, more allocations are done
@@ -287,15 +286,15 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
xe_devcoredump_read, xe_devcoredump_free,
XE_COREDUMP_TIMEOUT_JIFFIES);
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
/* keep going if fw fails as we still want to save the memory and SW data */
- fw_ref = xe_force_wake_get(gt_to_fw(ss->gt), XE_FORCEWAKE_ALL);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
- xe_gt_info(ss->gt, "failed to get forcewake for coredump capture\n");
- xe_vm_snapshot_capture_delayed(ss->vm);
- xe_guc_exec_queue_snapshot_capture_delayed(ss->ge);
- xe_force_wake_put(fw_ref);
+ xe_with_force_wake(fw_ref, gt_to_fw(ss->gt), XE_FORCEWAKE_ALL) {
+ if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
+ xe_gt_info(ss->gt, "failed to get forcewake for coredump capture\n");
+ xe_vm_snapshot_capture_delayed(ss->vm);
+ xe_guc_exec_queue_snapshot_capture_delayed(ss->ge);
+ }
ss->read.chunk_position = 0;
@@ -306,7 +305,7 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
ss->read.buffer = kvmalloc(XE_DEVCOREDUMP_CHUNK_MAX,
GFP_USER);
if (!ss->read.buffer)
- goto put_pm;
+ return;
__xe_devcoredump_read(ss->read.buffer,
XE_DEVCOREDUMP_CHUNK_MAX,
@@ -314,15 +313,12 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
} else {
ss->read.buffer = kvmalloc(ss->read.size, GFP_USER);
if (!ss->read.buffer)
- goto put_pm;
+ return;
__xe_devcoredump_read(ss->read.buffer, ss->read.size, 0,
coredump);
xe_devcoredump_snapshot_free(ss);
}
-
-put_pm:
- xe_pm_runtime_put(xe);
}
static void devcoredump_snapshot(struct xe_devcoredump *coredump,
@@ -332,7 +328,6 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
struct xe_devcoredump_snapshot *ss = &coredump->snapshot;
struct xe_guc *guc = exec_queue_to_guc(q);
const char *process_name = "no process";
- struct xe_force_wake_ref fw_ref;
bool cookie;
ss->snapshot_time = ktime_get_real();
@@ -351,7 +346,7 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
cookie = dma_fence_begin_signalling();
/* keep going if fw fails as we still want to save the memory and SW data */
- fw_ref = xe_force_wake_get(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
ss->guc.log = xe_guc_log_snapshot_capture(&guc->log, true);
ss->guc.ct = xe_guc_ct_snapshot_capture(&guc->ct);
@@ -364,7 +359,6 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
queue_work(system_unbound_wq, &ss->work);
- xe_force_wake_put(fw_ref);
dma_fence_end_signalling(cookie);
}
--
2.51.1
* [PATCH 19/33] drm/xe/display: Use scoped-cleanup
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
` (17 preceding siblings ...)
2025-11-07 18:13 ` [PATCH 18/33] drm/xe/devcoredump: " Matt Roper
@ 2025-11-07 18:13 ` Matt Roper
2025-11-07 18:13 ` [PATCH 20/33] drm/xe: Create scoped cleanup class for force_wake_get_any_engine() Matt Roper
` (18 subsequent siblings)
37 siblings, 0 replies; 44+ messages in thread
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Eliminate some goto-based cleanup by utilizing scoped cleanup helpers.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/display/xe_fb_pin.c | 24 +++++++++---------------
drivers/gpu/drm/xe/display/xe_hdcp_gsc.c | 23 +++++++----------------
2 files changed, 16 insertions(+), 31 deletions(-)
diff --git a/drivers/gpu/drm/xe/display/xe_fb_pin.c b/drivers/gpu/drm/xe/display/xe_fb_pin.c
index 1fd4a815e784..33f5b031a21c 100644
--- a/drivers/gpu/drm/xe/display/xe_fb_pin.c
+++ b/drivers/gpu/drm/xe/display/xe_fb_pin.c
@@ -210,10 +210,10 @@ static int __xe_pin_fb_vma_ggtt(const struct intel_framebuffer *fb,
/* TODO: Consider sharing framebuffer mapping?
* embed i915_vma inside intel_framebuffer
*/
- xe_pm_runtime_get_noresume(xe);
- ret = mutex_lock_interruptible(&ggtt->lock);
- if (ret)
- goto out;
+ guard(xe_pm_runtime_noresume)(xe);
+ ACQUIRE(mutex_intr, lock)(&ggtt->lock);
+ if ((ret = ACQUIRE_ERR(mutex_intr, &lock)))
+ return ret;
align = XE_PAGE_SIZE;
if (xe_bo_is_vram(bo) && ggtt->flags & XE_GGTT_FLAGS_64K)
@@ -223,15 +223,13 @@ static int __xe_pin_fb_vma_ggtt(const struct intel_framebuffer *fb,
vma->node = bo->ggtt_node[tile0->id];
} else if (view->type == I915_GTT_VIEW_NORMAL) {
vma->node = xe_ggtt_node_init(ggtt);
- if (IS_ERR(vma->node)) {
- ret = PTR_ERR(vma->node);
- goto out_unlock;
- }
+ if (IS_ERR(vma->node))
+ return PTR_ERR(vma->node);
ret = xe_ggtt_node_insert_locked(vma->node, xe_bo_size(bo), align, 0);
if (ret) {
xe_ggtt_node_fini(vma->node);
- goto out_unlock;
+ return ret;
}
xe_ggtt_map_bo(ggtt, vma->node, bo, xe->pat.idx[XE_CACHE_NONE]);
@@ -245,13 +243,13 @@ static int __xe_pin_fb_vma_ggtt(const struct intel_framebuffer *fb,
vma->node = xe_ggtt_node_init(ggtt);
if (IS_ERR(vma->node)) {
ret = PTR_ERR(vma->node);
- goto out_unlock;
+ return ret;
}
ret = xe_ggtt_node_insert_locked(vma->node, size, align, 0);
if (ret) {
xe_ggtt_node_fini(vma->node);
- goto out_unlock;
+ return ret;
}
ggtt_ofs = vma->node->base.start;
@@ -265,10 +263,6 @@ static int __xe_pin_fb_vma_ggtt(const struct intel_framebuffer *fb,
rot_info->plane[i].dst_stride);
}
-out_unlock:
- mutex_unlock(&ggtt->lock);
-out:
- xe_pm_runtime_put(xe);
return ret;
}
diff --git a/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c b/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
index 80fd3844c41c..084baddb160e 100644
--- a/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
+++ b/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
@@ -37,7 +37,6 @@ bool intel_hdcp_gsc_check_status(struct drm_device *drm)
struct xe_gt *gt = tile->media_gt;
struct xe_gsc *gsc = &gt->uc.gsc;
bool ret = true;
- struct xe_force_wake_ref fw_ref;
if (!gsc || !xe_uc_fw_is_enabled(&gsc->fw)) {
drm_dbg_kms(&xe->drm,
@@ -45,21 +44,17 @@ bool intel_hdcp_gsc_check_status(struct drm_device *drm)
return false;
}
- xe_pm_runtime_get(xe);
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GSC);
+ guard(xe_pm_runtime)(xe);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GSC);
if (!fw_ref.domains) {
drm_dbg_kms(&xe->drm,
"failed to get forcewake to check proxy status\n");
- ret = false;
- goto out;
+ return false;
}
if (!xe_gsc_proxy_init_done(gsc))
ret = false;
- xe_force_wake_put(fw_ref);
-out:
- xe_pm_runtime_put(xe);
return ret;
}
@@ -166,17 +161,15 @@ ssize_t intel_hdcp_gsc_msg_send(struct intel_hdcp_gsc_context *gsc_context,
u32 addr_out_off, addr_in_wr_off = 0;
int ret, tries = 0;
- if (msg_in_len > max_msg_size || msg_out_len > max_msg_size) {
- ret = -ENOSPC;
- goto out;
- }
+ if (msg_in_len > max_msg_size || msg_out_len > max_msg_size)
+ return -ENOSPC;
msg_size_in = msg_in_len + HDCP_GSC_HEADER_SIZE;
msg_size_out = msg_out_len + HDCP_GSC_HEADER_SIZE;
addr_out_off = PAGE_SIZE;
host_session_id = xe_gsc_create_host_session_id();
- xe_pm_runtime_get_noresume(xe);
+ guard(xe_pm_runtime_noresume)(xe);
addr_in_wr_off = xe_gsc_emit_header(xe, &gsc_context->hdcp_bo->vmap,
addr_in_wr_off, HECI_MEADDRESS_HDCP,
host_session_id, msg_in_len);
@@ -201,13 +194,11 @@ ssize_t intel_hdcp_gsc_msg_send(struct intel_hdcp_gsc_context *gsc_context,
} while (++tries < 20);
if (ret)
- goto out;
+ return ret;
xe_map_memcpy_from(xe, msg_out, &gsc_context->hdcp_bo->vmap,
addr_out_off + HDCP_GSC_HEADER_SIZE,
msg_out_len);
-out:
- xe_pm_runtime_put(xe);
return ret;
}
--
2.51.1
^ permalink raw reply related [flat|nested] 44+ messages in thread
* [PATCH 20/33] drm/xe: Create scoped cleanup class for force_wake_get_any_engine()
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
` (18 preceding siblings ...)
2025-11-07 18:13 ` [PATCH 19/33] drm/xe/display: Use scoped-cleanup Matt Roper
@ 2025-11-07 18:13 ` Matt Roper
2025-11-07 18:13 ` [PATCH 21/33] drm/xe/drm_client: Use scope-based cleanup Matt Roper
` (17 subsequent siblings)
37 siblings, 0 replies; 44+ messages in thread
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
force_wake_get_any_engine() is a single-use function to pick any engine
present on the platform and grab its forcewake. The signature
(returning a boolean success and both the engine pointer and a forcewake
ref by reference) is a bit awkward. Rewrite it such that the
forcewake ref is the function's return value and the caller can
determine success/failure by checking the engine pointer against NULL.
With this new signature, the function can serve as a scoped cleanup
class constructor, so define the corresponding class. Note that the
constructor can fail (if we fail to obtain forcewake, or if the platform
somehow has no engines), in which case it returns an invalid fw_ref. An
invalid reference has fw_ref.fw == NULL, so a thin wrapper around
xe_force_wake_put() that checks for this serves as the destructor for
this class.
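As a rough illustration of the pattern described above — a constructor that
may fail and a destructor that only drops valid references — here is a
minimal user-space sketch (GCC/Clang cleanup attribute; all names are
invented, the real class is built with DEFINE_CLASS() from
include/linux/cleanup.h):

```c
#include <assert.h>
#include <stddef.h>

struct fake_fw { int refs; };

/* A ref whose ->fw member is NULL is invalid. */
struct fake_fw_ref { struct fake_fw *fw; };

/* Constructor: may fail, returning an invalid (all-zero) ref. */
static struct fake_fw_ref fw_get(struct fake_fw *fw, int fail)
{
	struct fake_fw_ref ref = {};	/* ref.fw == NULL: invalid */

	if (!fail) {
		fw->refs++;
		ref.fw = fw;
	}
	return ref;
}

/* Destructor: on constructor failure there is no reference to drop. */
static void fw_put_if_valid(struct fake_fw_ref *ref)
{
	if (ref->fw)
		ref->fw->refs--;
}

static int use_fw(struct fake_fw *fw, int fail)
{
	struct fake_fw_ref ref __attribute__((cleanup(fw_put_if_valid))) =
		fw_get(fw, fail);

	if (!ref.fw)
		return -1;	/* destructor runs, sees NULL, does nothing */

	return 0;		/* destructor drops the reference */
}
```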
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_drm_client.c | 45 ++++++++++++++++++++----------
1 file changed, 31 insertions(+), 14 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_drm_client.c b/drivers/gpu/drm/xe/xe_drm_client.c
index 182526864286..6e0a05eeb587 100644
--- a/drivers/gpu/drm/xe/xe_drm_client.c
+++ b/drivers/gpu/drm/xe/xe_drm_client.c
@@ -6,6 +6,7 @@
#include <drm/drm_print.h>
#include <uapi/drm/xe_drm.h>
+#include <linux/cleanup.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/types.h>
@@ -285,34 +286,49 @@ static struct xe_hw_engine *any_engine(struct xe_device *xe)
return NULL;
}
-static bool force_wake_get_any_engine(struct xe_device *xe,
- struct xe_hw_engine **phwe,
- struct xe_force_wake_ref *pfw_ref)
+/*
+ * Pick any engine and grab its forcewake. On error phwe will be NULL and
+ * the returned forcewake reference will be invalid. Callers should check
+ * phwe against NULL.
+ */
+static struct xe_force_wake_ref force_wake_get_any_engine(struct xe_device *xe,
+ struct xe_hw_engine **phwe)
{
enum xe_force_wake_domains domain;
- struct xe_force_wake_ref fw_ref;
+ struct xe_force_wake_ref fw_ref = {};
struct xe_hw_engine *hwe;
struct xe_force_wake *fw;
+ *phwe = NULL;
+
hwe = any_engine(xe);
if (!hwe)
- return false;
+ return fw_ref; /* will be invalid */
domain = xe_hw_engine_to_fw_domain(hwe);
fw = gt_to_fw(hwe->gt);
fw_ref = xe_force_wake_get(fw, domain);
- if (!xe_force_wake_ref_has_domain(fw_ref, domain)) {
+ if (xe_force_wake_ref_has_domain(fw_ref, domain))
+ *phwe = hwe; /* valid forcewake */
+
+ return fw_ref;
+}
+
+static void drop_fw_if_valid(struct xe_force_wake_ref fw_ref)
+{
+ /*
+ * If force_wake_get_any_engine() fails, there's no real forcewake
+ * reference to drop, and fw_ref.fw will be NULL.
+ */
+ if (fw_ref.fw)
xe_force_wake_put(fw_ref);
- return false;
- }
-
- *phwe = hwe;
- *pfw_ref = fw_ref;
-
- return true;
}
+DEFINE_CLASS(xe_force_wake_any_engine, struct xe_force_wake_ref,
+ drop_fw_if_valid(_T), force_wake_get_any_engine(xe, phwe),
+ struct xe_device *xe, struct xe_hw_engine **phwe);
+
static void show_run_ticks(struct drm_printer *p, struct drm_file *file)
{
unsigned long class, i, gt_id, capacity[XE_ENGINE_CLASS_MAX] = { };
@@ -340,7 +356,8 @@ static void show_run_ticks(struct drm_printer *p, struct drm_file *file)
!atomic_read(&xef->exec_queue.pending_removal));
xe_pm_runtime_get(xe);
- if (!force_wake_get_any_engine(xe, &hwe, &fw_ref)) {
+ fw_ref = force_wake_get_any_engine(xe, &hwe);
+ if (!hwe) {
xe_pm_runtime_put(xe);
return;
}
--
2.51.1
* [PATCH 21/33] drm/xe/drm_client: Use scope-based cleanup
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
` (19 preceding siblings ...)
2025-11-07 18:13 ` [PATCH 20/33] drm/xe: Create scoped cleanup class for force_wake_get_any_engine() Matt Roper
@ 2025-11-07 18:13 ` Matt Roper
2025-11-07 18:13 ` [PATCH 22/33] drm/xe/gt_debugfs: " Matt Roper
` (16 subsequent siblings)
37 siblings, 0 replies; 44+ messages in thread
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based cleanup for forcewake and runtime PM.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_drm_client.c | 39 +++++++++++++-----------------
1 file changed, 17 insertions(+), 22 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_drm_client.c b/drivers/gpu/drm/xe/xe_drm_client.c
index 6e0a05eeb587..b3486d748951 100644
--- a/drivers/gpu/drm/xe/xe_drm_client.c
+++ b/drivers/gpu/drm/xe/xe_drm_client.c
@@ -338,7 +338,6 @@ static void show_run_ticks(struct drm_printer *p, struct drm_file *file)
struct xe_hw_engine *hwe;
struct xe_exec_queue *q;
u64 gpu_timestamp;
- struct xe_force_wake_ref fw_ref;
/*
* RING_TIMESTAMP registers are inaccessible in VF mode.
@@ -355,30 +354,26 @@ static void show_run_ticks(struct drm_printer *p, struct drm_file *file)
wait_var_event(&xef->exec_queue.pending_removal,
!atomic_read(&xef->exec_queue.pending_removal));
- xe_pm_runtime_get(xe);
- fw_ref = force_wake_get_any_engine(xe, &hwe);
- if (!hwe) {
- xe_pm_runtime_put(xe);
- return;
- }
-
- /* Accumulate all the exec queues from this client */
- mutex_lock(&xef->exec_queue.lock);
- xa_for_each(&xef->exec_queue.xa, i, q) {
- xe_exec_queue_get(q);
- mutex_unlock(&xef->exec_queue.lock);
-
- xe_exec_queue_update_run_ticks(q);
+ scoped_guard(xe_pm_runtime, xe) {
+ CLASS(xe_force_wake_any_engine, fw_ref)(xe, &hwe);
+ if (!hwe)
+ return;
+ /* Accumulate all the exec queues from this client */
mutex_lock(&xef->exec_queue.lock);
- xe_exec_queue_put(q);
+ xa_for_each(&xef->exec_queue.xa, i, q) {
+ xe_exec_queue_get(q);
+ mutex_unlock(&xef->exec_queue.lock);
+
+ xe_exec_queue_update_run_ticks(q);
+
+ mutex_lock(&xef->exec_queue.lock);
+ xe_exec_queue_put(q);
+ }
+ mutex_unlock(&xef->exec_queue.lock);
+
+ gpu_timestamp = xe_hw_engine_read_timestamp(hwe);
}
- mutex_unlock(&xef->exec_queue.lock);
-
- gpu_timestamp = xe_hw_engine_read_timestamp(hwe);
-
- xe_force_wake_put(fw_ref);
- xe_pm_runtime_put(xe);
for (class = 0; class < XE_ENGINE_CLASS_MAX; class++) {
const char *class_name;
--
2.51.1
* [PATCH 22/33] drm/xe/gt_debugfs: Use scope-based cleanup
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
` (20 preceding siblings ...)
2025-11-07 18:13 ` [PATCH 21/33] drm/xe/drm_client: Use scope-based cleanup Matt Roper
@ 2025-11-07 18:13 ` Matt Roper
2025-11-07 18:13 ` [PATCH 23/33] drm/xe/huc: Use scope-based forcewake Matt Roper
` (15 subsequent siblings)
37 siblings, 0 replies; 44+ messages in thread
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based cleanup for forcewake and runtime PM to simplify the
debugfs code slightly.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_gt_debugfs.c | 29 ++++++++---------------------
1 file changed, 8 insertions(+), 21 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_gt_debugfs.c b/drivers/gpu/drm/xe/xe_gt_debugfs.c
index 0b2c5d3ff8bb..e6dacf7fd842 100644
--- a/drivers/gpu/drm/xe/xe_gt_debugfs.c
+++ b/drivers/gpu/drm/xe/xe_gt_debugfs.c
@@ -105,35 +105,24 @@ int xe_gt_debugfs_show_with_rpm(struct seq_file *m, void *data)
struct drm_info_node *node = m->private;
struct xe_gt *gt = node_to_gt(node);
struct xe_device *xe = gt_to_xe(gt);
- int ret;
- xe_pm_runtime_get(xe);
- ret = xe_gt_debugfs_simple_show(m, data);
- xe_pm_runtime_put(xe);
-
- return ret;
+ guard(xe_pm_runtime)(xe);
+ return xe_gt_debugfs_simple_show(m, data);
}
static int hw_engines(struct xe_gt *gt, struct drm_printer *p)
{
struct xe_hw_engine *hwe;
enum xe_hw_engine_id id;
- struct xe_force_wake_ref fw_ref;
- int ret = 0;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
- ret = -ETIMEDOUT;
- goto fw_put;
- }
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+ if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
+ return -ETIMEDOUT;
for_each_hw_engine(hwe, gt, id)
xe_hw_engine_print(hwe, p);
-fw_put:
- xe_force_wake_put(fw_ref);
-
- return ret;
+ return 0;
}
static int steering(struct xe_gt *gt, struct drm_printer *p)
@@ -269,9 +258,8 @@ static void force_reset(struct xe_gt *gt)
{
struct xe_device *xe = gt_to_xe(gt);
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
xe_gt_reset_async(gt);
- xe_pm_runtime_put(xe);
}
static ssize_t force_reset_write(struct file *file,
@@ -297,9 +285,8 @@ static void force_reset_sync(struct xe_gt *gt)
{
struct xe_device *xe = gt_to_xe(gt);
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
xe_gt_reset(gt);
- xe_pm_runtime_put(xe);
}
static ssize_t force_reset_sync_write(struct file *file,
--
2.51.1
* [PATCH 23/33] drm/xe/huc: Use scope-based forcewake
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
` (21 preceding siblings ...)
2025-11-07 18:13 ` [PATCH 22/33] drm/xe/gt_debugfs: " Matt Roper
@ 2025-11-07 18:13 ` Matt Roper
2025-11-07 18:13 ` [PATCH 24/33] drm/xe/query: " Matt Roper
` (14 subsequent siblings)
37 siblings, 0 replies; 44+ messages in thread
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based forcewake in the HuC code for a small simplification and
consistency with other parts of the driver.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_huc.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_huc.c b/drivers/gpu/drm/xe/xe_huc.c
index 5136e310515e..4212162913af 100644
--- a/drivers/gpu/drm/xe/xe_huc.c
+++ b/drivers/gpu/drm/xe/xe_huc.c
@@ -300,19 +300,16 @@ void xe_huc_sanitize(struct xe_huc *huc)
void xe_huc_print_info(struct xe_huc *huc, struct drm_printer *p)
{
struct xe_gt *gt = huc_to_gt(huc);
- struct xe_force_wake_ref fw_ref;
xe_uc_fw_print(&huc->fw, p);
if (!xe_uc_fw_is_enabled(&huc->fw))
return;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
if (!fw_ref.domains)
return;
drm_printf(p, "\nHuC status: 0x%08x\n",
xe_mmio_read32(&gt->mmio, HUC_KERNEL_LOAD_INFO));
-
- xe_force_wake_put(fw_ref);
}
--
2.51.1
* [PATCH 24/33] drm/xe/query: Use scope-based forcewake
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
` (22 preceding siblings ...)
2025-11-07 18:13 ` [PATCH 23/33] drm/xe/huc: Use scope-based forcewake Matt Roper
@ 2025-11-07 18:13 ` Matt Roper
2025-11-07 18:13 ` [PATCH 25/33] drm/xe/reg_sr: " Matt Roper
` (13 subsequent siblings)
37 siblings, 0 replies; 44+ messages in thread
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based forcewake handling for consistency with other parts of
the driver.
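The xe_with_force_wake() helper used in this patch is a scoped-section
macro; one common way to build such a macro — sketched here in user-space C
with invented names (GCC/Clang only), not necessarily how xe implements it —
is a single-iteration for loop combined with a cleanup attribute:

```c
#include <assert.h>
#include <stddef.h>

static int fw_refs;

struct fake_ref { int valid; };

/* Constructor: may fail, indicated by ref.valid == 0. */
static struct fake_ref fake_get(int ok)
{
	struct fake_ref r = { .valid = ok };

	if (ok)
		fw_refs++;
	return r;
}

/* Destructor: only drops references that were actually taken. */
static void fake_put(struct fake_ref *r)
{
	if (r->valid)
		fw_refs--;
}

/* Runs its body once; the cleanup attribute drops the reference when the
 * block is left by any path (end of block, return, break, ...). */
#define with_fake_fw(ref, ok)						\
	for (struct fake_ref ref __attribute__((cleanup(fake_put)))	\
			= fake_get(ok), *_once = (void *)1;		\
	     _once; _once = NULL)

static int read_something(int ok)
{
	with_fake_fw(ref, ok) {
		if (!ref.valid)
			return -1;	/* cleanup still runs */
		/* ... register access would go here ... */
	}
	return 0;
}
```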
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_query.c | 16 ++++++----------
1 file changed, 6 insertions(+), 10 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_query.c b/drivers/gpu/drm/xe/xe_query.c
index 6d84651c76cd..d11578f77c77 100644
--- a/drivers/gpu/drm/xe/xe_query.c
+++ b/drivers/gpu/drm/xe/xe_query.c
@@ -122,7 +122,6 @@ query_engine_cycles(struct xe_device *xe,
__ktime_func_t cpu_clock;
struct xe_hw_engine *hwe;
struct xe_gt *gt;
- struct xe_force_wake_ref fw_ref;
if (IS_SRIOV_VF(xe))
return -EOPNOTSUPP;
@@ -158,17 +157,14 @@ query_engine_cycles(struct xe_device *xe,
if (!hwe)
return -EINVAL;
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
- xe_force_wake_put(fw_ref);
- return -EIO;
+ xe_with_force_wake(fw_ref, gt_to_fw(gt), XE_FORCEWAKE_ALL) {
+ if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
+ return -EIO;
+
+ hwe_read_timestamp(hwe, &resp.engine_cycles, &resp.cpu_timestamp,
+ &resp.cpu_delta, cpu_clock);
}
- hwe_read_timestamp(hwe, &resp.engine_cycles, &resp.cpu_timestamp,
- &resp.cpu_delta, cpu_clock);
-
- xe_force_wake_put(fw_ref);
-
if (GRAPHICS_VER(xe) >= 20)
resp.width = 64;
else
--
2.51.1
* [PATCH 25/33] drm/xe/reg_sr: Use scope-based forcewake
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
` (23 preceding siblings ...)
2025-11-07 18:13 ` [PATCH 24/33] drm/xe/query: " Matt Roper
@ 2025-11-07 18:13 ` Matt Roper
2025-11-07 18:13 ` [PATCH 26/33] drm/xe/vram: " Matt Roper
` (12 subsequent siblings)
37 siblings, 0 replies; 44+ messages in thread
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based forcewake to slightly simplify the reg_sr code.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_reg_sr.c | 17 +++++------------
1 file changed, 5 insertions(+), 12 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_reg_sr.c b/drivers/gpu/drm/xe/xe_reg_sr.c
index c155880646d4..328ce3a17af6 100644
--- a/drivers/gpu/drm/xe/xe_reg_sr.c
+++ b/drivers/gpu/drm/xe/xe_reg_sr.c
@@ -168,7 +168,6 @@ void xe_reg_sr_apply_mmio(struct xe_reg_sr *sr, struct xe_gt *gt)
{
struct xe_reg_sr_entry *entry;
unsigned long reg;
- struct xe_force_wake_ref fw_ref;
if (xa_empty(&sr->xa))
return;
@@ -178,20 +177,14 @@ void xe_reg_sr_apply_mmio(struct xe_reg_sr *sr, struct xe_gt *gt)
xe_gt_dbg(gt, "Applying %s save-restore MMIOs\n", sr->name);
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
- goto err_force_wake;
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FORCEWAKE_ALL);
+ if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL)) {
+ xe_gt_err(gt, "Failed to apply, err=-ETIMEDOUT\n");
+ return;
+ }
xa_for_each(&sr->xa, reg, entry)
apply_one_mmio(gt, entry);
-
- xe_force_wake_put(fw_ref);
-
- return;
-
-err_force_wake:
- xe_force_wake_put(fw_ref);
- xe_gt_err(gt, "Failed to apply, err=-ETIMEDOUT\n");
}
/**
--
2.51.1
* [PATCH 26/33] drm/xe/vram: Use scope-based forcewake
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
` (24 preceding siblings ...)
2025-11-07 18:13 ` [PATCH 25/33] drm/xe/reg_sr: " Matt Roper
@ 2025-11-07 18:13 ` Matt Roper
2025-11-07 18:13 ` [PATCH 27/33] drm/xe/bo: Use scope-based runtime PM Matt Roper
` (11 subsequent siblings)
37 siblings, 0 replies; 44+ messages in thread
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Switch VRAM code to use scope-based forcewake for consistency with other
parts of the driver.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_vram.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_vram.c b/drivers/gpu/drm/xe/xe_vram.c
index 8e43ebf8ee3b..11d18741313b 100644
--- a/drivers/gpu/drm/xe/xe_vram.c
+++ b/drivers/gpu/drm/xe/xe_vram.c
@@ -245,7 +245,6 @@ static int tile_vram_size(struct xe_tile *tile, u64 *vram_size,
{
struct xe_device *xe = tile_to_xe(tile);
struct xe_gt *gt = tile->primary_gt;
- struct xe_force_wake_ref fw_ref;
u64 offset;
u32 reg;
@@ -265,7 +264,7 @@ static int tile_vram_size(struct xe_tile *tile, u64 *vram_size,
return 0;
}
- fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
if (!fw_ref.domains)
return -ETIMEDOUT;
@@ -289,8 +288,6 @@ static int tile_vram_size(struct xe_tile *tile, u64 *vram_size,
/* remove the tile offset so we have just the available size */
*vram_size = offset - *tile_offset;
- xe_force_wake_put(fw_ref);
-
return 0;
}
--
2.51.1
* [PATCH 27/33] drm/xe/bo: Use scope-based runtime PM
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
` (25 preceding siblings ...)
2025-11-07 18:13 ` [PATCH 26/33] drm/xe/vram: " Matt Roper
@ 2025-11-07 18:13 ` Matt Roper
2025-11-07 18:13 ` [PATCH 28/33] drm/xe/ggtt: Use scope-based runtime pm Matt Roper
` (10 subsequent siblings)
37 siblings, 0 replies; 44+ messages in thread
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based runtime power management in the BO code for consistency
with other parts of the driver.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_bo.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index b0bd31d14bb9..03d81664706a 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -2035,9 +2035,8 @@ static int xe_bo_vm_access(struct vm_area_struct *vma, unsigned long addr,
struct xe_device *xe = xe_bo_device(bo);
int ret;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ret = ttm_bo_vm_access(vma, addr, buf, len, write);
- xe_pm_runtime_put(xe);
return ret;
}
--
2.51.1
* [PATCH 28/33] drm/xe/ggtt: Use scope-based runtime pm
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
` (26 preceding siblings ...)
2025-11-07 18:13 ` [PATCH 27/33] drm/xe/bo: Use scope-based runtime PM Matt Roper
@ 2025-11-07 18:13 ` Matt Roper
2025-11-07 18:13 ` [PATCH 29/33] drm/xe/hwmon: Use scope-based runtime PM Matt Roper
` (9 subsequent siblings)
37 siblings, 0 replies; 44+ messages in thread
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Switch the GGTT code to scope-based runtime PM for consistency with
other parts of the driver.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_ggtt.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_ggtt.c b/drivers/gpu/drm/xe/xe_ggtt.c
index 20d226d90c50..5e1cd18ec611 100644
--- a/drivers/gpu/drm/xe/xe_ggtt.c
+++ b/drivers/gpu/drm/xe/xe_ggtt.c
@@ -385,9 +385,8 @@ static void ggtt_node_remove_work_func(struct work_struct *work)
delayed_removal_work);
struct xe_device *xe = tile_to_xe(node->ggtt->tile);
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ggtt_node_remove(node);
- xe_pm_runtime_put(xe);
}
/**
--
2.51.1
* [PATCH 29/33] drm/xe/hwmon: Use scope-based runtime PM
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
` (27 preceding siblings ...)
2025-11-07 18:13 ` [PATCH 28/33] drm/xe/ggtt: Use scope-based runtime pm Matt Roper
@ 2025-11-07 18:13 ` Matt Roper
2025-11-07 18:13 ` [PATCH 30/33] drm/xe/sriov: " Matt Roper
` (8 subsequent siblings)
37 siblings, 0 replies; 44+ messages in thread
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based runtime power management in the hwmon code for
consistency with other parts of the driver.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_hwmon.c | 16 ++++------------
1 file changed, 4 insertions(+), 12 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_hwmon.c b/drivers/gpu/drm/xe/xe_hwmon.c
index 97879daeefc1..5ad351fad6e2 100644
--- a/drivers/gpu/drm/xe/xe_hwmon.c
+++ b/drivers/gpu/drm/xe/xe_hwmon.c
@@ -502,7 +502,7 @@ xe_hwmon_power_max_interval_show(struct device *dev, struct device_attribute *at
int ret = 0;
- xe_pm_runtime_get(hwmon->xe);
+ guard(xe_pm_runtime)(hwmon->xe);
mutex_lock(&hwmon->hwmon_lock);
@@ -521,8 +521,6 @@ xe_hwmon_power_max_interval_show(struct device *dev, struct device_attribute *at
mutex_unlock(&hwmon->hwmon_lock);
- xe_pm_runtime_put(hwmon->xe);
-
x = REG_FIELD_GET(PWR_LIM_TIME_X, reg_val);
y = REG_FIELD_GET(PWR_LIM_TIME_Y, reg_val);
@@ -604,7 +602,7 @@ xe_hwmon_power_max_interval_store(struct device *dev, struct device_attribute *a
rxy = REG_FIELD_PREP(PWR_LIM_TIME_X, x) |
REG_FIELD_PREP(PWR_LIM_TIME_Y, y);
- xe_pm_runtime_get(hwmon->xe);
+ guard(xe_pm_runtime)(hwmon->xe);
mutex_lock(&hwmon->hwmon_lock);
@@ -616,8 +614,6 @@ xe_hwmon_power_max_interval_store(struct device *dev, struct device_attribute *a
mutex_unlock(&hwmon->hwmon_lock);
- xe_pm_runtime_put(hwmon->xe);
-
return count;
}
@@ -1126,7 +1122,7 @@ xe_hwmon_read(struct device *dev, enum hwmon_sensor_types type, u32 attr,
struct xe_hwmon *hwmon = dev_get_drvdata(dev);
int ret;
- xe_pm_runtime_get(hwmon->xe);
+ guard(xe_pm_runtime)(hwmon->xe);
switch (type) {
case hwmon_temp:
@@ -1152,8 +1148,6 @@ xe_hwmon_read(struct device *dev, enum hwmon_sensor_types type, u32 attr,
break;
}
- xe_pm_runtime_put(hwmon->xe);
-
return ret;
}
@@ -1164,7 +1158,7 @@ xe_hwmon_write(struct device *dev, enum hwmon_sensor_types type, u32 attr,
struct xe_hwmon *hwmon = dev_get_drvdata(dev);
int ret;
- xe_pm_runtime_get(hwmon->xe);
+ guard(xe_pm_runtime)(hwmon->xe);
switch (type) {
case hwmon_power:
@@ -1178,8 +1172,6 @@ xe_hwmon_write(struct device *dev, enum hwmon_sensor_types type, u32 attr,
break;
}
- xe_pm_runtime_put(hwmon->xe);
-
return ret;
}
--
2.51.1
* [PATCH 30/33] drm/xe/sriov: Use scope-based runtime PM
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
` (28 preceding siblings ...)
2025-11-07 18:13 ` [PATCH 29/33] drm/xe/hwmon: Use scope-based runtime PM Matt Roper
@ 2025-11-07 18:13 ` Matt Roper
2025-11-07 18:13 ` [PATCH 31/33] drm/xe/tests: " Matt Roper
` (7 subsequent siblings)
37 siblings, 0 replies; 44+ messages in thread
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based runtime power management in the SRIOV code for
consistency with other parts of the driver.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_pci_sriov.c | 3 +--
drivers/gpu/drm/xe/xe_sriov_pf_debugfs.c | 6 ++----
drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c | 6 ++----
drivers/gpu/drm/xe/xe_sriov_vf_ccs.c | 5 +----
drivers/gpu/drm/xe/xe_tile_sriov_pf_debugfs.c | 3 +--
5 files changed, 7 insertions(+), 16 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_pci_sriov.c b/drivers/gpu/drm/xe/xe_pci_sriov.c
index d0fcde66a774..4b16748fe2ed 100644
--- a/drivers/gpu/drm/xe/xe_pci_sriov.c
+++ b/drivers/gpu/drm/xe/xe_pci_sriov.c
@@ -212,12 +212,11 @@ int xe_pci_sriov_configure(struct pci_dev *pdev, int num_vfs)
if (num_vfs && pci_num_vf(pdev))
return -EBUSY;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
if (num_vfs > 0)
ret = pf_enable_vfs(xe, num_vfs);
else
ret = pf_disable_vfs(xe);
- xe_pm_runtime_put(xe);
return ret;
}
diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_debugfs.c b/drivers/gpu/drm/xe/xe_sriov_pf_debugfs.c
index a81aa05c5532..21eafe333cb5 100644
--- a/drivers/gpu/drm/xe/xe_sriov_pf_debugfs.c
+++ b/drivers/gpu/drm/xe/xe_sriov_pf_debugfs.c
@@ -69,9 +69,8 @@ static ssize_t from_file_write_to_xe_call(struct file *file, const char __user *
if (ret < 0)
return ret;
if (yes) {
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ret = call(xe);
- xe_pm_runtime_put(xe);
}
if (ret < 0)
return ret;
@@ -157,9 +156,8 @@ static ssize_t from_file_write_to_vf_call(struct file *file, const char __user *
if (ret < 0)
return ret;
if (yes) {
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ret = call(xe, vfid);
- xe_pm_runtime_put(xe);
}
if (ret < 0)
return ret;
diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
index c0b767ac735c..f0777976335c 100644
--- a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
+++ b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
@@ -394,9 +394,8 @@ static ssize_t xe_sriov_dev_attr_store(struct kobject *kobj, struct attribute *a
if (!vattr->store)
return -EPERM;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ret = xe_sriov_pf_wait_ready(xe) ?: vattr->store(xe, buf, count);
- xe_pm_runtime_put(xe);
return ret;
}
@@ -430,9 +429,8 @@ static ssize_t xe_sriov_vf_attr_store(struct kobject *kobj, struct attribute *at
if (!vattr->store)
return -EPERM;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ret = xe_sriov_pf_wait_ready(xe) ?: vattr->store(xe, vfid, buf, count);
- xe_pm_runtime_get(xe);
return ret;
}
diff --git a/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
index 797a4b866226..e1cdc46ad710 100644
--- a/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
+++ b/drivers/gpu/drm/xe/xe_sriov_vf_ccs.c
@@ -463,8 +463,7 @@ void xe_sriov_vf_ccs_print(struct xe_device *xe, struct drm_printer *p)
if (!IS_VF_CCS_READY(xe))
return;
- xe_pm_runtime_get(xe);
-
+ guard(xe_pm_runtime)(xe);
for_each_ccs_rw_ctx(ctx_id) {
bb_pool = xe->sriov.vf.ccs.contexts[ctx_id].mem.ccs_bb_pool;
if (!bb_pool)
@@ -475,6 +474,4 @@ void xe_sriov_vf_ccs_print(struct xe_device *xe, struct drm_printer *p)
drm_suballoc_dump_debug_info(&bb_pool->base, p, xe_sa_manager_gpu_addr(bb_pool));
drm_puts(p, "\n");
}
-
- xe_pm_runtime_put(xe);
}
diff --git a/drivers/gpu/drm/xe/xe_tile_sriov_pf_debugfs.c b/drivers/gpu/drm/xe/xe_tile_sriov_pf_debugfs.c
index f3f478f14ff5..7f97db2f89bb 100644
--- a/drivers/gpu/drm/xe/xe_tile_sriov_pf_debugfs.c
+++ b/drivers/gpu/drm/xe/xe_tile_sriov_pf_debugfs.c
@@ -141,12 +141,11 @@ static int NAME##_set(void *data, u64 val) \
if (val > (TYPE)~0ull) \
return -EOVERFLOW; \
\
- xe_pm_runtime_get(xe); \
+ guard(xe_pm_runtime)(xe); \
err = xe_sriov_pf_wait_ready(xe) ?: \
xe_gt_sriov_pf_config_set_##CONFIG(gt, vfid, val); \
if (!err) \
xe_sriov_pf_provision_set_custom_mode(xe); \
- xe_pm_runtime_put(xe); \
\
return err; \
} \
--
2.51.1
* [PATCH 31/33] drm/xe/tests: Use scope-based runtime PM
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
` (29 preceding siblings ...)
2025-11-07 18:13 ` [PATCH 30/33] drm/xe/sriov: " Matt Roper
@ 2025-11-07 18:13 ` Matt Roper
2025-11-07 18:13 ` [PATCH 32/33] drm/xe/sysfs: Use scope-based runtime power management Matt Roper
` (6 subsequent siblings)
37 siblings, 0 replies; 44+ messages in thread
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Use scope-based handling of runtime PM in the kunit tests for
consistency with other parts of the driver.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/tests/xe_bo.c | 10 ++--------
drivers/gpu/drm/xe/tests/xe_dma_buf.c | 3 +--
drivers/gpu/drm/xe/tests/xe_migrate.c | 10 ++--------
drivers/gpu/drm/xe/tests/xe_mocs.c | 10 ++--------
4 files changed, 7 insertions(+), 26 deletions(-)
diff --git a/drivers/gpu/drm/xe/tests/xe_bo.c b/drivers/gpu/drm/xe/tests/xe_bo.c
index 2294cf89f3e1..2278e589a493 100644
--- a/drivers/gpu/drm/xe/tests/xe_bo.c
+++ b/drivers/gpu/drm/xe/tests/xe_bo.c
@@ -185,8 +185,7 @@ static int ccs_test_run_device(struct xe_device *xe)
return 0;
}
- xe_pm_runtime_get(xe);
-
+ guard(xe_pm_runtime)(xe);
for_each_tile(tile, xe, id) {
/* For igfx run only for primary tile */
if (!IS_DGFX(xe) && id > 0)
@@ -194,8 +193,6 @@ static int ccs_test_run_device(struct xe_device *xe)
ccs_test_run_tile(xe, tile, test);
}
- xe_pm_runtime_put(xe);
-
return 0;
}
@@ -356,13 +353,10 @@ static int evict_test_run_device(struct xe_device *xe)
return 0;
}
- xe_pm_runtime_get(xe);
-
+ guard(xe_pm_runtime)(xe);
for_each_tile(tile, xe, id)
evict_test_run_tile(xe, tile, test);
- xe_pm_runtime_put(xe);
-
return 0;
}
diff --git a/drivers/gpu/drm/xe/tests/xe_dma_buf.c b/drivers/gpu/drm/xe/tests/xe_dma_buf.c
index 5df98de5ba3c..954b6b911ea0 100644
--- a/drivers/gpu/drm/xe/tests/xe_dma_buf.c
+++ b/drivers/gpu/drm/xe/tests/xe_dma_buf.c
@@ -266,7 +266,7 @@ static int dma_buf_run_device(struct xe_device *xe)
const struct dma_buf_test_params *params;
struct kunit *test = kunit_get_current_test();
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
for (params = test_params; params->mem_mask; ++params) {
struct dma_buf_test_params p = *params;
@@ -274,7 +274,6 @@ static int dma_buf_run_device(struct xe_device *xe)
test->priv = &p;
xe_test_dmabuf_import_same_driver(xe);
}
- xe_pm_runtime_put(xe);
/* A non-zero return would halt iteration over driver devices */
return 0;
diff --git a/drivers/gpu/drm/xe/tests/xe_migrate.c b/drivers/gpu/drm/xe/tests/xe_migrate.c
index 5904d658d1f2..34e2f0f4631f 100644
--- a/drivers/gpu/drm/xe/tests/xe_migrate.c
+++ b/drivers/gpu/drm/xe/tests/xe_migrate.c
@@ -344,8 +344,7 @@ static int migrate_test_run_device(struct xe_device *xe)
struct xe_tile *tile;
int id;
- xe_pm_runtime_get(xe);
-
+ guard(xe_pm_runtime)(xe);
for_each_tile(tile, xe, id) {
struct xe_migrate *m = tile->migrate;
struct drm_exec *exec = XE_VALIDATION_OPT_OUT;
@@ -356,8 +355,6 @@ static int migrate_test_run_device(struct xe_device *xe)
xe_vm_unlock(m->q->vm);
}
- xe_pm_runtime_put(xe);
-
return 0;
}
@@ -759,13 +756,10 @@ static int validate_ccs_test_run_device(struct xe_device *xe)
return 0;
}
- xe_pm_runtime_get(xe);
-
+ guard(xe_pm_runtime)(xe);
for_each_tile(tile, xe, id)
validate_ccs_test_run_tile(xe, tile, test);
- xe_pm_runtime_put(xe);
-
return 0;
}
diff --git a/drivers/gpu/drm/xe/tests/xe_mocs.c b/drivers/gpu/drm/xe/tests/xe_mocs.c
index 16681a0f1f77..fe6eabaa4812 100644
--- a/drivers/gpu/drm/xe/tests/xe_mocs.c
+++ b/drivers/gpu/drm/xe/tests/xe_mocs.c
@@ -115,8 +115,7 @@ static int mocs_kernel_test_run_device(struct xe_device *xe)
unsigned int flags;
int id;
- xe_pm_runtime_get(xe);
-
+ guard(xe_pm_runtime)(xe);
for_each_gt(gt, xe, id) {
flags = live_mocs_init(&mocs, gt);
if (flags & HAS_GLOBAL_MOCS)
@@ -125,8 +124,6 @@ static int mocs_kernel_test_run_device(struct xe_device *xe)
read_l3cc_table(gt, &mocs.table);
}
- xe_pm_runtime_put(xe);
-
return 0;
}
@@ -150,8 +147,7 @@ static int mocs_reset_test_run_device(struct xe_device *xe)
int id;
struct kunit *test = kunit_get_current_test();
- xe_pm_runtime_get(xe);
-
+ guard(xe_pm_runtime)(xe);
for_each_gt(gt, xe, id) {
flags = live_mocs_init(&mocs, gt);
kunit_info(test, "mocs_reset_test before reset\n");
@@ -169,8 +165,6 @@ static int mocs_reset_test_run_device(struct xe_device *xe)
read_l3cc_table(gt, &mocs.table);
}
- xe_pm_runtime_put(xe);
-
return 0;
}
--
2.51.1
^ permalink raw reply related [flat|nested] 44+ messages in thread
* [PATCH 32/33] drm/xe/sysfs: Use scope-based runtime power management
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
` (30 preceding siblings ...)
2025-11-07 18:13 ` [PATCH 31/33] drm/xe/tests: " Matt Roper
@ 2025-11-07 18:13 ` Matt Roper
2025-11-07 18:13 ` [PATCH 33/33] drm/xe/debugfs: Use scope-based runtime PM Matt Roper
` (5 subsequent siblings)
37 siblings, 0 replies; 44+ messages in thread
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Switch sysfs to use scope-based runtime power management to slightly
simplify the code.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_device_sysfs.c | 33 ++++++++-----------
drivers/gpu/drm/xe/xe_gt_freq.c | 27 +++++----------
drivers/gpu/drm/xe/xe_gt_throttle.c | 3 +-
drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c | 6 ++--
4 files changed, 25 insertions(+), 44 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_device_sysfs.c b/drivers/gpu/drm/xe/xe_device_sysfs.c
index ec9c06b06fb5..a73e0e957cb0 100644
--- a/drivers/gpu/drm/xe/xe_device_sysfs.c
+++ b/drivers/gpu/drm/xe/xe_device_sysfs.c
@@ -57,9 +57,8 @@ vram_d3cold_threshold_store(struct device *dev, struct device_attribute *attr,
drm_dbg(&xe->drm, "vram_d3cold_threshold: %u\n", vram_d3cold_threshold);
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ret = xe_pm_set_vram_threshold(xe, vram_d3cold_threshold);
- xe_pm_runtime_put(xe);
return ret ?: count;
}
@@ -84,33 +83,31 @@ lb_fan_control_version_show(struct device *dev, struct device_attribute *attr, c
u16 major = 0, minor = 0, hotfix = 0, build = 0;
int ret;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ret = xe_pcode_read(root, PCODE_MBOX(PCODE_LATE_BINDING, GET_CAPABILITY_STATUS, 0),
&cap, NULL);
if (ret)
- goto out;
+ return ret;
if (REG_FIELD_GET(V1_FAN_PROVISIONED, cap)) {
ret = xe_pcode_read(root, PCODE_MBOX(PCODE_LATE_BINDING, GET_VERSION_LOW, 0),
&ver_low, NULL);
if (ret)
- goto out;
+ return ret;
ret = xe_pcode_read(root, PCODE_MBOX(PCODE_LATE_BINDING, GET_VERSION_HIGH, 0),
&ver_high, NULL);
if (ret)
- goto out;
+ return ret;
major = REG_FIELD_GET(MAJOR_VERSION_MASK, ver_low);
minor = REG_FIELD_GET(MINOR_VERSION_MASK, ver_low);
hotfix = REG_FIELD_GET(HOTFIX_VERSION_MASK, ver_high);
build = REG_FIELD_GET(BUILD_VERSION_MASK, ver_high);
}
-out:
- xe_pm_runtime_put(xe);
- return ret ?: sysfs_emit(buf, "%u.%u.%u.%u\n", major, minor, hotfix, build);
+ return sysfs_emit(buf, "%u.%u.%u.%u\n", major, minor, hotfix, build);
}
static DEVICE_ATTR_ADMIN_RO(lb_fan_control_version);
@@ -123,33 +120,31 @@ lb_voltage_regulator_version_show(struct device *dev, struct device_attribute *a
u16 major = 0, minor = 0, hotfix = 0, build = 0;
int ret;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ret = xe_pcode_read(root, PCODE_MBOX(PCODE_LATE_BINDING, GET_CAPABILITY_STATUS, 0),
&cap, NULL);
if (ret)
- goto out;
+ return ret;
if (REG_FIELD_GET(VR_PARAMS_PROVISIONED, cap)) {
ret = xe_pcode_read(root, PCODE_MBOX(PCODE_LATE_BINDING, GET_VERSION_LOW, 0),
&ver_low, NULL);
if (ret)
- goto out;
+ return ret;
ret = xe_pcode_read(root, PCODE_MBOX(PCODE_LATE_BINDING, GET_VERSION_HIGH, 0),
&ver_high, NULL);
if (ret)
- goto out;
+ return ret;
major = REG_FIELD_GET(MAJOR_VERSION_MASK, ver_low);
minor = REG_FIELD_GET(MINOR_VERSION_MASK, ver_low);
hotfix = REG_FIELD_GET(HOTFIX_VERSION_MASK, ver_high);
build = REG_FIELD_GET(BUILD_VERSION_MASK, ver_high);
}
-out:
- xe_pm_runtime_put(xe);
- return ret ?: sysfs_emit(buf, "%u.%u.%u.%u\n", major, minor, hotfix, build);
+ return sysfs_emit(buf, "%u.%u.%u.%u\n", major, minor, hotfix, build);
}
static DEVICE_ATTR_ADMIN_RO(lb_voltage_regulator_version);
@@ -233,9 +228,8 @@ auto_link_downgrade_capable_show(struct device *dev, struct device_attribute *at
struct xe_device *xe = pdev_to_xe_device(pdev);
u32 cap, val;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
val = xe_mmio_read32(xe_root_tile_mmio(xe), BMG_PCIE_CAP);
- xe_pm_runtime_put(xe);
cap = REG_FIELD_GET(LINK_DOWNGRADE, val);
return sysfs_emit(buf, "%u\n", cap == DOWNGRADE_CAPABLE);
@@ -251,11 +245,10 @@ auto_link_downgrade_status_show(struct device *dev, struct device_attribute *att
u32 val = 0;
int ret;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ret = xe_pcode_read(xe_device_get_root_tile(xe),
PCODE_MBOX(DGFX_PCODE_STATUS, DGFX_GET_INIT_STATUS, 0),
&val, NULL);
- xe_pm_runtime_put(xe);
return ret ?: sysfs_emit(buf, "%u\n", REG_FIELD_GET(DGFX_LINK_DOWNGRADE_STATUS, val));
}
diff --git a/drivers/gpu/drm/xe/xe_gt_freq.c b/drivers/gpu/drm/xe/xe_gt_freq.c
index 849ea6c86e8e..6284a4daf00a 100644
--- a/drivers/gpu/drm/xe/xe_gt_freq.c
+++ b/drivers/gpu/drm/xe/xe_gt_freq.c
@@ -70,9 +70,8 @@ static ssize_t act_freq_show(struct kobject *kobj,
struct xe_guc_pc *pc = dev_to_pc(dev);
u32 freq;
- xe_pm_runtime_get(dev_to_xe(dev));
+ guard(xe_pm_runtime)(dev_to_xe(dev));
freq = xe_guc_pc_get_act_freq(pc);
- xe_pm_runtime_put(dev_to_xe(dev));
return sysfs_emit(buf, "%d\n", freq);
}
@@ -86,9 +85,8 @@ static ssize_t cur_freq_show(struct kobject *kobj,
u32 freq;
ssize_t ret;
- xe_pm_runtime_get(dev_to_xe(dev));
+ guard(xe_pm_runtime)(dev_to_xe(dev));
ret = xe_guc_pc_get_cur_freq(pc, &freq);
- xe_pm_runtime_put(dev_to_xe(dev));
if (ret)
return ret;
@@ -113,9 +111,8 @@ static ssize_t rpe_freq_show(struct kobject *kobj,
struct xe_guc_pc *pc = dev_to_pc(dev);
u32 freq;
- xe_pm_runtime_get(dev_to_xe(dev));
+ guard(xe_pm_runtime)(dev_to_xe(dev));
freq = xe_guc_pc_get_rpe_freq(pc);
- xe_pm_runtime_put(dev_to_xe(dev));
return sysfs_emit(buf, "%d\n", freq);
}
@@ -128,9 +125,8 @@ static ssize_t rpa_freq_show(struct kobject *kobj,
struct xe_guc_pc *pc = dev_to_pc(dev);
u32 freq;
- xe_pm_runtime_get(dev_to_xe(dev));
+ guard(xe_pm_runtime)(dev_to_xe(dev));
freq = xe_guc_pc_get_rpa_freq(pc);
- xe_pm_runtime_put(dev_to_xe(dev));
return sysfs_emit(buf, "%d\n", freq);
}
@@ -154,9 +150,8 @@ static ssize_t min_freq_show(struct kobject *kobj,
u32 freq;
ssize_t ret;
- xe_pm_runtime_get(dev_to_xe(dev));
+ guard(xe_pm_runtime)(dev_to_xe(dev));
ret = xe_guc_pc_get_min_freq(pc, &freq);
- xe_pm_runtime_put(dev_to_xe(dev));
if (ret)
return ret;
@@ -175,9 +170,8 @@ static ssize_t min_freq_store(struct kobject *kobj,
if (ret)
return ret;
- xe_pm_runtime_get(dev_to_xe(dev));
+ guard(xe_pm_runtime)(dev_to_xe(dev));
ret = xe_guc_pc_set_min_freq(pc, freq);
- xe_pm_runtime_put(dev_to_xe(dev));
if (ret)
return ret;
@@ -193,9 +187,8 @@ static ssize_t max_freq_show(struct kobject *kobj,
u32 freq;
ssize_t ret;
- xe_pm_runtime_get(dev_to_xe(dev));
+ guard(xe_pm_runtime)(dev_to_xe(dev));
ret = xe_guc_pc_get_max_freq(pc, &freq);
- xe_pm_runtime_put(dev_to_xe(dev));
if (ret)
return ret;
@@ -214,9 +207,8 @@ static ssize_t max_freq_store(struct kobject *kobj,
if (ret)
return ret;
- xe_pm_runtime_get(dev_to_xe(dev));
+ guard(xe_pm_runtime)(dev_to_xe(dev));
ret = xe_guc_pc_set_max_freq(pc, freq);
- xe_pm_runtime_put(dev_to_xe(dev));
if (ret)
return ret;
@@ -243,9 +235,8 @@ static ssize_t power_profile_store(struct kobject *kobj,
struct xe_guc_pc *pc = dev_to_pc(dev);
int err;
- xe_pm_runtime_get(dev_to_xe(dev));
+ guard(xe_pm_runtime)(dev_to_xe(dev));
err = xe_guc_pc_set_power_profile(pc, buff);
- xe_pm_runtime_put(dev_to_xe(dev));
return err ?: count;
}
diff --git a/drivers/gpu/drm/xe/xe_gt_throttle.c b/drivers/gpu/drm/xe/xe_gt_throttle.c
index 82c5fbcdfbe3..0ee288389e71 100644
--- a/drivers/gpu/drm/xe/xe_gt_throttle.c
+++ b/drivers/gpu/drm/xe/xe_gt_throttle.c
@@ -97,9 +97,8 @@ u32 xe_gt_throttle_get_limit_reasons(struct xe_gt *gt)
else
mask = GT0_PERF_LIMIT_REASONS_MASK;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
val = xe_mmio_read32(&gt->mmio, reg) & mask;
- xe_pm_runtime_put(xe);
return val;
}
diff --git a/drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c b/drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c
index 640950172088..1d3511d0d025 100644
--- a/drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c
+++ b/drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c
@@ -47,9 +47,8 @@ static ssize_t xe_hw_engine_class_sysfs_attr_show(struct kobject *kobj,
kattr = container_of(attr, struct kobj_attribute, attr);
if (kattr->show) {
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ret = kattr->show(kobj, kattr, buf);
- xe_pm_runtime_put(xe);
}
return ret;
@@ -66,9 +65,8 @@ static ssize_t xe_hw_engine_class_sysfs_attr_store(struct kobject *kobj,
kattr = container_of(attr, struct kobj_attribute, attr);
if (kattr->store) {
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ret = kattr->store(kobj, kattr, buf, count);
- xe_pm_runtime_put(xe);
}
return ret;
--
2.51.1
* [PATCH 33/33] drm/xe/debugfs: Use scope-based runtime PM
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
` (31 preceding siblings ...)
2025-11-07 18:13 ` [PATCH 32/33] drm/xe/sysfs: Use scope-based runtime power management Matt Roper
@ 2025-11-07 18:13 ` Matt Roper
2025-11-07 18:18 ` [PATCH 00/33] Scope-based forcewake and " Matt Roper
` (4 subsequent siblings)
37 siblings, 0 replies; 44+ messages in thread
From: Matt Roper @ 2025-11-07 18:13 UTC (permalink / raw)
To: intel-xe; +Cc: matthew.d.roper
Switch the debugfs code to use scope-based runtime PM where possible,
for consistency with other parts of the driver.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_debugfs.c | 16 +++++-----------
drivers/gpu/drm/xe/xe_gsc_debugfs.c | 3 +--
drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c | 12 ++++--------
drivers/gpu/drm/xe/xe_guc_debugfs.c | 3 +--
drivers/gpu/drm/xe/xe_huc_debugfs.c | 3 +--
drivers/gpu/drm/xe/xe_tile_debugfs.c | 3 +--
6 files changed, 13 insertions(+), 27 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_debugfs.c b/drivers/gpu/drm/xe/xe_debugfs.c
index 2d858110922b..165045298b66 100644
--- a/drivers/gpu/drm/xe/xe_debugfs.c
+++ b/drivers/gpu/drm/xe/xe_debugfs.c
@@ -68,7 +68,7 @@ static int info(struct seq_file *m, void *data)
struct xe_gt *gt;
u8 id;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
drm_printf(&p, "graphics_verx100 %d\n", xe->info.graphics_verx100);
drm_printf(&p, "media_verx100 %d\n", xe->info.media_verx100);
@@ -95,7 +95,6 @@ static int info(struct seq_file *m, void *data)
gt->info.engine_mask);
}
- xe_pm_runtime_put(xe);
return 0;
}
@@ -110,9 +109,8 @@ static int sriov_info(struct seq_file *m, void *data)
static int workarounds(struct xe_device *xe, struct drm_printer *p)
{
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
xe_wa_device_dump(xe, p);
- xe_pm_runtime_put(xe);
return 0;
}
@@ -134,7 +132,7 @@ static int dgfx_pkg_residencies_show(struct seq_file *m, void *data)
xe = node_to_xe(m->private);
p = drm_seq_file_printer(m);
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
mmio = xe_root_tile_mmio(xe);
static const struct {
u32 offset;
@@ -151,7 +149,6 @@ static int dgfx_pkg_residencies_show(struct seq_file *m, void *data)
for (int i = 0; i < ARRAY_SIZE(residencies); i++)
read_residency_counter(xe, mmio, residencies[i].offset, residencies[i].name, &p);
- xe_pm_runtime_put(xe);
return 0;
}
@@ -163,7 +160,7 @@ static int dgfx_pcie_link_residencies_show(struct seq_file *m, void *data)
xe = node_to_xe(m->private);
p = drm_seq_file_printer(m);
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
mmio = xe_root_tile_mmio(xe);
static const struct {
@@ -178,7 +175,6 @@ static int dgfx_pcie_link_residencies_show(struct seq_file *m, void *data)
for (int i = 0; i < ARRAY_SIZE(residencies); i++)
read_residency_counter(xe, mmio, residencies[i].offset, residencies[i].name, &p);
- xe_pm_runtime_put(xe);
return 0;
}
@@ -290,16 +286,14 @@ static ssize_t wedged_mode_set(struct file *f, const char __user *ubuf,
xe->wedged.mode = wedged_mode;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
for_each_gt(gt, xe, id) {
ret = xe_guc_ads_scheduler_policy_toggle_reset(&gt->uc.guc.ads);
if (ret) {
xe_gt_err(gt, "Failed to update GuC ADS scheduler policy. GuC may still cause engine reset even with wedged_mode=2\n");
- xe_pm_runtime_put(xe);
return -EIO;
}
}
- xe_pm_runtime_put(xe);
return size;
}
diff --git a/drivers/gpu/drm/xe/xe_gsc_debugfs.c b/drivers/gpu/drm/xe/xe_gsc_debugfs.c
index 461d7e99c2b3..b13928b50eb9 100644
--- a/drivers/gpu/drm/xe/xe_gsc_debugfs.c
+++ b/drivers/gpu/drm/xe/xe_gsc_debugfs.c
@@ -37,9 +37,8 @@ static int gsc_info(struct seq_file *m, void *data)
struct xe_device *xe = gsc_to_xe(gsc);
struct drm_printer p = drm_seq_file_printer(m);
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
xe_gsc_print_info(gsc, &p);
- xe_pm_runtime_put(xe);
return 0;
}
diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c
index 838beb7f6327..ddb64e78b988 100644
--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c
+++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c
@@ -123,11 +123,10 @@ static int POLICY##_set(void *data, u64 val) \
if (val > (TYPE)~0ull) \
return -EOVERFLOW; \
\
- xe_pm_runtime_get(xe); \
+ guard(xe_pm_runtime)(xe); \
err = xe_gt_sriov_pf_policy_set_##POLICY(gt, val); \
if (!err) \
xe_sriov_pf_provision_set_custom_mode(xe); \
- xe_pm_runtime_put(xe); \
\
return err; \
} \
@@ -189,12 +188,11 @@ static int CONFIG##_set(void *data, u64 val) \
if (val > (TYPE)~0ull) \
return -EOVERFLOW; \
\
- xe_pm_runtime_get(xe); \
+ guard(xe_pm_runtime)(xe); \
err = xe_sriov_pf_wait_ready(xe) ?: \
xe_gt_sriov_pf_config_set_##CONFIG(gt, vfid, val); \
if (!err) \
xe_sriov_pf_provision_set_custom_mode(xe); \
- xe_pm_runtime_put(xe); \
\
return err; \
} \
@@ -249,11 +247,10 @@ static int set_threshold(void *data, u64 val, enum xe_guc_klv_threshold_index in
if (val > (u32)~0ull)
return -EOVERFLOW;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
err = xe_gt_sriov_pf_config_set_threshold(gt, vfid, index, val);
if (!err)
xe_sriov_pf_provision_set_custom_mode(xe);
- xe_pm_runtime_put(xe);
return err;
}
@@ -361,9 +358,8 @@ static ssize_t control_write(struct file *file, const char __user *buf, size_t c
xe_gt_assert(gt, sizeof(cmd) > strlen(control_cmds[n].cmd));
if (sysfs_streq(cmd, control_cmds[n].cmd)) {
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ret = control_cmds[n].fn ? (*control_cmds[n].fn)(gt, vfid) : 0;
- xe_pm_runtime_put(xe);
break;
}
}
diff --git a/drivers/gpu/drm/xe/xe_guc_debugfs.c b/drivers/gpu/drm/xe/xe_guc_debugfs.c
index 0b102ab46c4d..2198141526ae 100644
--- a/drivers/gpu/drm/xe/xe_guc_debugfs.c
+++ b/drivers/gpu/drm/xe/xe_guc_debugfs.c
@@ -72,9 +72,8 @@ static int guc_debugfs_show(struct seq_file *m, void *data)
int (*print)(struct xe_guc *, struct drm_printer *) = node->info_ent->data;
int ret;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ret = print(&gt->uc.guc, &p);
- xe_pm_runtime_put(xe);
return ret;
}
diff --git a/drivers/gpu/drm/xe/xe_huc_debugfs.c b/drivers/gpu/drm/xe/xe_huc_debugfs.c
index 3a888a40188b..df9c4d79b710 100644
--- a/drivers/gpu/drm/xe/xe_huc_debugfs.c
+++ b/drivers/gpu/drm/xe/xe_huc_debugfs.c
@@ -37,9 +37,8 @@ static int huc_info(struct seq_file *m, void *data)
struct xe_device *xe = huc_to_xe(huc);
struct drm_printer p = drm_seq_file_printer(m);
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
xe_huc_print_info(huc, &p);
- xe_pm_runtime_put(xe);
return 0;
}
diff --git a/drivers/gpu/drm/xe/xe_tile_debugfs.c b/drivers/gpu/drm/xe/xe_tile_debugfs.c
index fff242a5ae56..773d352da6de 100644
--- a/drivers/gpu/drm/xe/xe_tile_debugfs.c
+++ b/drivers/gpu/drm/xe/xe_tile_debugfs.c
@@ -84,9 +84,8 @@ int xe_tile_debugfs_show_with_rpm(struct seq_file *m, void *data)
struct xe_device *xe = tile_to_xe(tile);
int ret;
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
ret = xe_tile_debugfs_simple_show(m, data);
- xe_pm_runtime_put(xe);
return ret;
}
--
2.51.1
* Re: [PATCH 00/33] Scope-based forcewake and runtime PM
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
` (32 preceding siblings ...)
2025-11-07 18:13 ` [PATCH 33/33] drm/xe/debugfs: Use scope-based runtime PM Matt Roper
@ 2025-11-07 18:18 ` Matt Roper
2025-11-07 20:43 ` ✗ CI.checkpatch: warning for " Patchwork
` (3 subsequent siblings)
37 siblings, 0 replies; 44+ messages in thread
From: Matt Roper @ 2025-11-07 18:18 UTC (permalink / raw)
To: intel-xe
On Fri, Nov 07, 2025 at 10:13:16AM -0800, Matt Roper wrote:
> Forcewake and runtime PM both follow reference-counted get/put models;
> when used in functions that can encounter errors and return early, it's
> easy for developers to make mistakes and fail to drop a reference on all
> of the error paths. Cleanup of these reference counts is often
> addressed by goto-based error handling which is somewhat ugly and
> subject to its own set of mistakes once we accumulate too many error
> labels in a function.
>
> Scope-based cleanup ([1][2]) has been gaining increasing popularity in
> the Linux kernel for cleaning up various kinds of resources in a more
> automated way when code has lots of error paths and early exits. Let's
> add scope-based cleanup for both forcewake and runtime PM, based on the
> mechanisms provided in include/linux/cleanup.h. Scope-based cleanup
> allows cleanup destructors to be executed automatically when the current
> scope is exited by any means (end of block, return, break, etc.).
>
> For xe_runtime_pm_{get,put} pairs that were grabbed and released within
> a single function or block, the preferred replacement is now just
>
> guard(xe_pm_runtime)(xe);
>
> which will take care of releasing the runtime PM reference
> automatically. scoped_guard() can be used instead if the reference
> should only be held over part of the block. There are also guard
> variants added for xe_pm_runtime_noresume and xe_pm_runtime_ioctl that
> allow replacement of those alternate functions as well.
>
> Unlike runtime PM, where all reference tracking is done within the
> object parameter itself, forcewake is currently a model where get
> operations return a cookie that needs to be passed back to put
> operations. That necessitates a slightly different type of cleanup
> helper (CLASS instead of guard), although the underlying mechanisms are
> the same. For forcewake that is grabbed and released within a single
> function or block, the preferred form is now:
>
> CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GT);
>
> which, like the runtime PM equivalent, will cause the forcewake
> reference to be dropped automatically. If forcewake needs to be held
> over only a subset of the current block,
>
> xe_with_force_wake(fw_ref, gt_to_fw(gt), XE_FW_GT) { ... }
>
> can be used in the same way scoped_guard() is used for runtime PM.
>
> The first few patches in this series make some general cleanups and
> restructuring of the existing force wake code. Then the new guards and
> classes for runtime PM and forcewake are defined. Finally, most of the
> existing runtime PM and forcewake usage in the driver is converted to
> the scope-based form in the remainder of the series. Some of the
> conversions eliminate goto-based cleanup models and/or significantly
> simplify the code. Other conversions don't significantly simplify the
> code (aside from a slight reduction in line count), but are still useful
> for consistency across our codebase.
>
> An advantage of doing the conversion everywhere possible, not just the
> places where it noticeably simplifies the code, is that it helps
> highlight the remaining get/put usage as special cases where wake
> references follow more complicated lifetimes (e.g., obtained in one
> function and released in a different one, often tied to some other type
> of resource or operation). With fewer direct get/put calls overall, it's
> easier to identify the ones that remain as special cases and make sure
> they truly are paired up properly.
>
> There are other areas where scope-based cleanup could potentially be
> applied in the future (e.g., mutex locks, bo locking, etc.), but this
> series does not try to address those, even in places where those
> resources are also part of the same error handling cleanup paths as
> forcewake and runtime PM. We can potentially think about converting
> other types of resources to scope-based cleanup down the road if it
> winds up working well here for forcewake and PM.
>
>
> References:
> [1] https://www.kernel.org/doc/html/next/core-api/cleanup.html
> [2] https://lwn.net/Articles/934679/
>
>
> Matt Roper (33):
> drm/xe/forcewake: Improve kerneldoc
> drm/xe/eustall: Store forcewake reference in stream structure
> drm/xe/oa: Store forcewake reference in stream structure
> drm/xe/forcewake: Create dedicated type for forcewake references
> squash! drm/xe/forcewake: Create dedicated type for forcewake
> references
> squash! squash! drm/xe/forcewake: Create dedicated type for forcewake
> references
One thing I forgot to mention in the cover letter... patches 4-6 should
be squashed together before merging. But since patch #5 is generated by
Coccinelle from an SmPL rule, I wanted to keep those changes separate for
ease of review and to make it clear what was a human change vs. an
automated change.
Matt
> drm/xe/forcewake: Add scope-based cleanup for forcewake
> drm/xe/pm: Add scope-based cleanup helper for runtime PM
> drm/xe/gt: Use scope-based cleanup
> drm/xe/gt_idle: Use scope-based cleanup
> drm/xe/guc: Use scope-based cleanup
> drm/xe/guc_pc: Use scope-based cleanup
> drm/xe/mocs: Use scope-based cleanup
> drm/xe/pat: Use scope-based forcewake
> drm/xe/pxp: Use scope-based cleanup
> drm/xe/gsc: Use scope-based cleanup
> drm/xe/device: Use scope-based cleanup
> drm/xe/devcoredump: Use scope-based cleanup
> drm/xe/display: Use scoped-cleanup
> drm/xe: Create scoped cleanup class for force_wake_get_any_engine()
> drm/xe/drm_client: Use scope-based cleanup
> drm/xe/gt_debugfs: Use scope-based cleanup
> drm/xe/huc: Use scope-based forcewake
> drm/xe/query: Use scope-based forcewake
> drm/xe/reg_sr: Use scope-based forcewake
> drm/xe/vram: Use scope-based forcewake
> drm/xe/bo: Use scope-based runtime PM
> drm/xe/ggtt: Use scope-based runtime pm
> drm/xe/hwmon: Use scope-based runtime PM
> drm/xe/sriov: Use scope-based runtime PM
> drm/xe/tests: Use scope-based runtime PM
> drm/xe/sysfs: Use scope-based runtime power management
> drm/xe/debugfs: Use scope-based runtime PM
>
> drivers/gpu/drm/xe/display/xe_fb_pin.c | 24 ++-
> drivers/gpu/drm/xe/display/xe_hdcp_gsc.c | 25 +--
> drivers/gpu/drm/xe/tests/xe_bo.c | 10 +-
> drivers/gpu/drm/xe/tests/xe_dma_buf.c | 3 +-
> drivers/gpu/drm/xe/tests/xe_migrate.c | 10 +-
> drivers/gpu/drm/xe/tests/xe_mocs.c | 27 +---
> drivers/gpu/drm/xe/xe_bo.c | 3 +-
> drivers/gpu/drm/xe/xe_debugfs.c | 39 +++--
> drivers/gpu/drm/xe/xe_devcoredump.c | 26 ++-
> drivers/gpu/drm/xe/xe_device.c | 35 ++--
> drivers/gpu/drm/xe/xe_device_sysfs.c | 33 ++--
> drivers/gpu/drm/xe/xe_drm_client.c | 77 +++++----
> drivers/gpu/drm/xe/xe_eu_stall.c | 8 +-
> drivers/gpu/drm/xe/xe_force_wake.c | 19 ++-
> drivers/gpu/drm/xe/xe_force_wake.h | 23 ++-
> drivers/gpu/drm/xe/xe_force_wake_types.h | 41 ++++-
> drivers/gpu/drm/xe/xe_ggtt.c | 3 +-
> drivers/gpu/drm/xe/xe_gsc.c | 28 ++--
> drivers/gpu/drm/xe/xe_gsc_debugfs.c | 3 +-
> drivers/gpu/drm/xe/xe_gsc_proxy.c | 17 +-
> drivers/gpu/drm/xe/xe_gt.c | 149 ++++++------------
> drivers/gpu/drm/xe/xe_gt_debugfs.c | 29 +---
> drivers/gpu/drm/xe/xe_gt_freq.c | 27 ++--
> drivers/gpu/drm/xe/xe_gt_idle.c | 32 ++--
> drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c | 12 +-
> drivers/gpu/drm/xe/xe_gt_throttle.c | 3 +-
> drivers/gpu/drm/xe/xe_guc.c | 13 +-
> drivers/gpu/drm/xe/xe_guc_debugfs.c | 3 +-
> drivers/gpu/drm/xe/xe_guc_log.c | 10 +-
> drivers/gpu/drm/xe/xe_guc_pc.c | 62 ++------
> drivers/gpu/drm/xe/xe_guc_submit.c | 9 +-
> drivers/gpu/drm/xe/xe_guc_tlb_inval.c | 4 +-
> drivers/gpu/drm/xe/xe_huc.c | 7 +-
> drivers/gpu/drm/xe/xe_huc_debugfs.c | 3 +-
> drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c | 6 +-
> drivers/gpu/drm/xe/xe_hwmon.c | 16 +-
> drivers/gpu/drm/xe/xe_mocs.c | 18 +--
> drivers/gpu/drm/xe/xe_oa.c | 9 +-
> drivers/gpu/drm/xe/xe_oa_types.h | 3 +
> drivers/gpu/drm/xe/xe_pat.c | 36 ++---
> drivers/gpu/drm/xe/xe_pci_sriov.c | 3 +-
> drivers/gpu/drm/xe/xe_pm.h | 17 ++
> drivers/gpu/drm/xe/xe_pmu.c | 10 +-
> drivers/gpu/drm/xe/xe_pxp.c | 49 ++----
> drivers/gpu/drm/xe/xe_query.c | 16 +-
> drivers/gpu/drm/xe/xe_reg_sr.c | 17 +-
> drivers/gpu/drm/xe/xe_sriov_pf_debugfs.c | 6 +-
> drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c | 6 +-
> drivers/gpu/drm/xe/xe_sriov_vf_ccs.c | 5 +-
> drivers/gpu/drm/xe/xe_tile_debugfs.c | 3 +-
> drivers/gpu/drm/xe/xe_tile_sriov_pf_debugfs.c | 3 +-
> drivers/gpu/drm/xe/xe_vram.c | 7 +-
> 52 files changed, 422 insertions(+), 625 deletions(-)
>
> --
> 2.51.1
>
--
Matt Roper
Graphics Software Engineer
Linux GPU Platform Enablement
Intel Corporation
* Re: [PATCH 04/33] drm/xe/forcewake: Create dedicated type for forcewake references
2025-11-07 18:13 ` [PATCH 04/33] drm/xe/forcewake: Create dedicated type for forcewake references Matt Roper
@ 2025-11-07 19:27 ` Michal Wajdeczko
2025-11-07 21:17 ` Matt Roper
0 siblings, 1 reply; 44+ messages in thread
From: Michal Wajdeczko @ 2025-11-07 19:27 UTC (permalink / raw)
To: Matt Roper, intel-xe
On 11/7/2025 7:13 PM, Matt Roper wrote:
> xe_force_wake_get() currently returns an integer mask of power domains
> that were successfully awoken; both this mask and a pointer to the force
> wake collection must be passed to xe_force_wake_put() to release the
> wake reference.
>
> Create a dedicated structure type to hold both the mask and the
> collection pointer. While this change does little on its own, it will
> make it easier for us to add scope-based cleanup of forcewake in the
> future.
>
> FIXME:
> For ease of review, this patch contains only the manual changes to
> add the structure and change the get/put function definitions; it
> does not build on its own since the rest of the driver is still
> trying to call the get/put functions with the old signature. The
> next patch contains the Coccinelle-generated changes necessary
> elsewhere in the driver to adapt to the new interface. The two
> patches will be squashed together when applied, but remain separate for
> now to help reviewers.
>
> Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
> ---
...
> diff --git a/drivers/gpu/drm/xe/xe_force_wake.h b/drivers/gpu/drm/xe/xe_force_wake.h
> index 0e3e84bfa51c..86e9bca7cac9 100644
> --- a/drivers/gpu/drm/xe/xe_force_wake.h
> +++ b/drivers/gpu/drm/xe/xe_force_wake.h
> @@ -15,9 +15,9 @@ void xe_force_wake_init_gt(struct xe_gt *gt,
> struct xe_force_wake *fw);
> void xe_force_wake_init_engines(struct xe_gt *gt,
> struct xe_force_wake *fw);
> -unsigned int __must_check xe_force_wake_get(struct xe_force_wake *fw,
> - enum xe_force_wake_domains domains);
> -void xe_force_wake_put(struct xe_force_wake *fw, unsigned int fw_ref);
> +struct xe_force_wake_ref __must_check xe_force_wake_get(struct xe_force_wake *fw,
> + enum xe_force_wake_domains domains);
> +void xe_force_wake_put(struct xe_force_wake_ref fw_ref);
but is it really necessary to change the signature of all existing xe_force_wake functions?
in my previous attempt [1] this new helper struct was just initialized inside the CLASS,
so we can start using the new approach and still use the existing xe_fw API without any massive changes
+DEFINE_CLASS(xe_fw, struct xe_force_wake_guard,
+ xe_force_wake_put(_T.fw, _T.ref),
+ ({ (struct xe_force_wake_guard){ fw, xe_force_wake_get(fw, domains) }; }),
+ struct xe_force_wake *fw, enum xe_force_wake_domains domains);
[1] https://patchwork.freedesktop.org/patch/625116/?series=141516&rev=1
>
> static inline int
> xe_force_wake_ref(struct xe_force_wake *fw,
> @@ -56,9 +56,10 @@ xe_force_wake_assert_held(struct xe_force_wake *fw,
> * Return: true if domain is refcounted.
> */
> static inline bool
> -xe_force_wake_ref_has_domain(unsigned int fw_ref, enum xe_force_wake_domains domain)
> +xe_force_wake_ref_has_domain(struct xe_force_wake_ref fw_ref,
> + enum xe_force_wake_domains domain)
> {
> - return fw_ref & domain;
> + return fw_ref.domains & domain;
> }
>
> #endif
> diff --git a/drivers/gpu/drm/xe/xe_force_wake_types.h b/drivers/gpu/drm/xe/xe_force_wake_types.h
> index 9cfa28faf7bc..26df4adba4c5 100644
> --- a/drivers/gpu/drm/xe/xe_force_wake_types.h
> +++ b/drivers/gpu/drm/xe/xe_force_wake_types.h
> @@ -107,4 +107,19 @@ struct xe_force_wake {
> struct xe_force_wake_domain domains[XE_FW_DOMAIN_ID_COUNT];
> };
>
> +/**
> + * struct xe_force_wake_ref - Xe force wake reference
> + *
> + * Represents a wakeref for a subset of the power domains belonging to an
> + * xe_force_wake collection. Returned by xe_force_wake_get() and passed
> + * to xe_force_wake_put().
> + */
> +struct xe_force_wake_ref {
> + /** @fw: back pointer to force wake collection */
> + struct xe_force_wake *fw;
> +
> + /** @domains: mask of individual domains held by this reference */
> + unsigned int domains;
> +};
> +
* Re: [PATCH 02/33] drm/xe/eustall: Store forcewake reference in stream structure
2025-11-07 18:13 ` [PATCH 02/33] drm/xe/eustall: Store forcewake reference in stream structure Matt Roper
@ 2025-11-07 19:52 ` Harish Chegondi
0 siblings, 0 replies; 44+ messages in thread
From: Harish Chegondi @ 2025-11-07 19:52 UTC (permalink / raw)
To: Matt Roper; +Cc: intel-xe
On Fri, Nov 07, 2025 at 10:13:18AM -0800, Matt Roper wrote:
> Calls to xe_force_wake_put() should generally pass the exact reference
> returned by xe_force_wake_get(). Since EU stall grabs and releases
> forcewake in different functions, xe_eu_stall_disable_locked() is
> currently calling put with a hardcoded RENDER domain. Although this
> works for now, it's somewhat fragile in case the power domain(s)
> required by stall sampling change in the future, or if workarounds show
> up that require us to obtain additional domains.
>
> Stash the original reference obtained during stream enable inside the
> stream structure so that we can use it directly when the stream is
> disabled.
>
> Cc: Harish Chegondi <harish.chegondi@intel.com>
> Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Harish Chegondi <harish.chegondi@intel.com>
Thanks for the patch
Harish.
> ---
> drivers/gpu/drm/xe/xe_eu_stall.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_eu_stall.c b/drivers/gpu/drm/xe/xe_eu_stall.c
> index 650e45f6a7c7..97dfb7945b7a 100644
> --- a/drivers/gpu/drm/xe/xe_eu_stall.c
> +++ b/drivers/gpu/drm/xe/xe_eu_stall.c
> @@ -49,6 +49,7 @@ struct xe_eu_stall_data_stream {
> wait_queue_head_t poll_wq;
> size_t data_record_size;
> size_t per_xecore_buf_size;
> + unsigned int fw_ref;
>
> struct xe_gt *gt;
> struct xe_bo *bo;
> @@ -660,13 +661,12 @@ static int xe_eu_stall_stream_enable(struct xe_eu_stall_data_stream *stream)
> struct per_xecore_buf *xecore_buf;
> struct xe_gt *gt = stream->gt;
> u16 group, instance;
> - unsigned int fw_ref;
> int xecore;
>
> /* Take runtime pm ref and forcewake to disable RC6 */
> xe_pm_runtime_get(gt_to_xe(gt));
> - fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_RENDER);
> - if (!xe_force_wake_ref_has_domain(fw_ref, XE_FW_RENDER)) {
> + stream->fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_RENDER);
> + if (!xe_force_wake_ref_has_domain(stream->fw_ref, XE_FW_RENDER)) {
> xe_gt_err(gt, "Failed to get RENDER forcewake\n");
> xe_pm_runtime_put(gt_to_xe(gt));
> return -ETIMEDOUT;
> @@ -832,7 +832,7 @@ static int xe_eu_stall_disable_locked(struct xe_eu_stall_data_stream *stream)
> xe_gt_mcr_multicast_write(gt, ROW_CHICKEN2,
> _MASKED_BIT_DISABLE(DISABLE_DOP_GATING));
>
> - xe_force_wake_put(gt_to_fw(gt), XE_FW_RENDER);
> + xe_force_wake_put(gt_to_fw(gt), stream->fw_ref);
> xe_pm_runtime_put(gt_to_xe(gt));
>
> return 0;
> --
> 2.51.1
>
* ✗ CI.checkpatch: warning for Scope-based forcewake and runtime PM
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
` (33 preceding siblings ...)
2025-11-07 18:18 ` [PATCH 00/33] Scope-based forcewake and " Matt Roper
@ 2025-11-07 20:43 ` Patchwork
2025-11-07 20:45 ` ✓ CI.KUnit: success " Patchwork
` (2 subsequent siblings)
37 siblings, 0 replies; 44+ messages in thread
From: Patchwork @ 2025-11-07 20:43 UTC (permalink / raw)
To: Matt Roper; +Cc: intel-xe
== Series Details ==
Series: Scope-based forcewake and runtime PM
URL : https://patchwork.freedesktop.org/series/157253/
State : warning
== Summary ==
+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
d9120d4d84745cf011b4b3efb338747e69179dfb
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit 3e7b4b04ed1d66f9b0401869f1cb8b93f3f23ac4
Author: Matt Roper <matthew.d.roper@intel.com>
Date: Fri Nov 7 10:13:49 2025 -0800
drm/xe/debugfs: Use scope-based runtime PM
Switch the debugfs code to use scope-based runtime PM where possible,
for consistency with other parts of the driver.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
+ /mt/dim checkpatch 4dff427fe8bbfd0bdbf7935d23a2aba0c350ab2d drm-intel
71511280d127 drm/xe/forcewake: Improve kerneldoc
1d73ee63879f drm/xe/eustall: Store forcewake reference in stream structure
b9fdd277fab0 drm/xe/oa: Store forcewake reference in stream structure
a12f36f0e820 drm/xe/forcewake: Create dedicated type for forcewake references
054b488a6db7 squash! drm/xe/forcewake: Create dedicated type for forcewake references
-:290: WARNING:BRACES: braces {} are not necessary for single statement blocks
#290: FILE: drivers/gpu/drm/xe/xe_gsc.c:284:
+ if (tile->primary_gt && XE_GT_WA(tile->primary_gt, 14018094691)) {
+ xe_force_wake_put(fw_ref);
+ }
total: 0 errors, 1 warnings, 0 checks, 1031 lines checked
d4d69ec2816f squash! squash! drm/xe/forcewake: Create dedicated type for forcewake references
21f16cf0b765 drm/xe/forcewake: Add scope-based cleanup for forcewake
bafed0fabeb2 drm/xe/pm: Add scope-based cleanup helper for runtime PM
4b1a1a664eb6 drm/xe/gt: Use scope-based cleanup
65cca554e566 drm/xe/gt_idle: Use scope-based cleanup
5115ce172ced drm/xe/guc: Use scope-based cleanup
50c9332fe31d drm/xe/guc_pc: Use scope-based cleanup
2254bb670c04 drm/xe/mocs: Use scope-based cleanup
817bee049941 drm/xe/pat: Use scope-based forcewake
7e3879d5d8ef drm/xe/pxp: Use scope-based cleanup
61ca2c189ee2 drm/xe/gsc: Use scope-based cleanup
03fdb5e56a93 drm/xe/device: Use scope-based cleanup
-:39: ERROR:ASSIGN_IN_IF: do not use assignment in if condition
#39: FILE: drivers/gpu/drm/xe/xe_device.c:222:
+ if ((ret = ACQUIRE_ERR(xe_pm_runtime_ioctl, &pm)) >= 0)
-:52: ERROR:ASSIGN_IN_IF: do not use assignment in if condition
#52: FILE: drivers/gpu/drm/xe/xe_device.c:239:
+ if ((ret = ACQUIRE_ERR(xe_pm_runtime_ioctl, &pm)) >= 0)
total: 2 errors, 0 warnings, 0 checks, 109 lines checked
cd4b190c8394 drm/xe/devcoredump: Use scope-based cleanup
ae9c45a3c213 drm/xe/display: Use scoped-cleanup
-:24: ERROR:ASSIGN_IN_IF: do not use assignment in if condition
#24: FILE: drivers/gpu/drm/xe/display/xe_fb_pin.c:215:
+ if ((ret = ACQUIRE_ERR(mutex_intr, &lock)))
total: 1 errors, 0 warnings, 0 checks, 122 lines checked
16e480d2e64f drm/xe: Create scoped cleanup class for force_wake_get_any_engine()
526913b58b1b drm/xe/drm_client: Use scope-based cleanup
bcf1d955d0af drm/xe/gt_debugfs: Use scope-based cleanup
b0a9566ce664 drm/xe/huc: Use scope-based forcewake
0db66e348ecc drm/xe/query: Use scope-based forcewake
dec7534cdc08 drm/xe/reg_sr: Use scope-based forcewake
959bf6515c4a drm/xe/vram: Use scope-based forcewake
ad718e04d8ab drm/xe/bo: Use scope-based runtime PM
5722d706ed42 drm/xe/ggtt: Use scope-based runtime pm
d6de7b0c6686 drm/xe/hwmon: Use scope-based runtime PM
5333dbb282ba drm/xe/sriov: Use scope-based runtime PM
808fe73dca75 drm/xe/tests: Use scope-based runtime PM
60f137f1bed1 drm/xe/sysfs: Use scope-based runtime power management
3e7b4b04ed1d drm/xe/debugfs: Use scope-based runtime PM
* ✓ CI.KUnit: success for Scope-based forcewake and runtime PM
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
` (34 preceding siblings ...)
2025-11-07 20:43 ` ✗ CI.checkpatch: warning for " Patchwork
@ 2025-11-07 20:45 ` Patchwork
2025-11-07 21:21 ` ✓ Xe.CI.BAT: " Patchwork
2025-11-09 3:59 ` ✗ Xe.CI.Full: failure " Patchwork
37 siblings, 0 replies; 44+ messages in thread
From: Patchwork @ 2025-11-07 20:45 UTC (permalink / raw)
To: Matt Roper; +Cc: intel-xe
== Series Details ==
Series: Scope-based forcewake and runtime PM
URL : https://patchwork.freedesktop.org/series/157253/
State : success
== Summary ==
+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
[20:43:46] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[20:43:50] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[20:44:20] Starting KUnit Kernel (1/1)...
[20:44:20] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[20:44:20] ================== guc_buf (11 subtests) ===================
[20:44:20] [PASSED] test_smallest
[20:44:20] [PASSED] test_largest
[20:44:20] [PASSED] test_granular
[20:44:20] [PASSED] test_unique
[20:44:20] [PASSED] test_overlap
[20:44:20] [PASSED] test_reusable
[20:44:20] [PASSED] test_too_big
[20:44:20] [PASSED] test_flush
[20:44:20] [PASSED] test_lookup
[20:44:20] [PASSED] test_data
[20:44:20] [PASSED] test_class
[20:44:20] ===================== [PASSED] guc_buf =====================
[20:44:20] =================== guc_dbm (7 subtests) ===================
[20:44:20] [PASSED] test_empty
[20:44:20] [PASSED] test_default
[20:44:20] ======================== test_size ========================
[20:44:20] [PASSED] 4
[20:44:20] [PASSED] 8
[20:44:20] [PASSED] 32
[20:44:20] [PASSED] 256
[20:44:20] ==================== [PASSED] test_size ====================
[20:44:20] ======================= test_reuse ========================
[20:44:20] [PASSED] 4
[20:44:20] [PASSED] 8
[20:44:20] [PASSED] 32
[20:44:20] [PASSED] 256
[20:44:20] =================== [PASSED] test_reuse ====================
[20:44:20] =================== test_range_overlap ====================
[20:44:20] [PASSED] 4
[20:44:20] [PASSED] 8
[20:44:20] [PASSED] 32
[20:44:20] [PASSED] 256
[20:44:20] =============== [PASSED] test_range_overlap ================
[20:44:20] =================== test_range_compact ====================
[20:44:20] [PASSED] 4
[20:44:20] [PASSED] 8
[20:44:20] [PASSED] 32
[20:44:20] [PASSED] 256
[20:44:20] =============== [PASSED] test_range_compact ================
[20:44:20] ==================== test_range_spare =====================
[20:44:20] [PASSED] 4
[20:44:20] [PASSED] 8
[20:44:20] [PASSED] 32
[20:44:20] [PASSED] 256
[20:44:20] ================ [PASSED] test_range_spare =================
[20:44:20] ===================== [PASSED] guc_dbm =====================
[20:44:20] =================== guc_idm (6 subtests) ===================
[20:44:20] [PASSED] bad_init
[20:44:20] [PASSED] no_init
[20:44:20] [PASSED] init_fini
[20:44:20] [PASSED] check_used
[20:44:20] [PASSED] check_quota
[20:44:20] [PASSED] check_all
[20:44:20] ===================== [PASSED] guc_idm =====================
[20:44:20] ================== no_relay (3 subtests) ===================
[20:44:20] [PASSED] xe_drops_guc2pf_if_not_ready
[20:44:20] [PASSED] xe_drops_guc2vf_if_not_ready
[20:44:20] [PASSED] xe_rejects_send_if_not_ready
[20:44:20] ==================== [PASSED] no_relay =====================
[20:44:20] ================== pf_relay (14 subtests) ==================
[20:44:20] [PASSED] pf_rejects_guc2pf_too_short
[20:44:20] [PASSED] pf_rejects_guc2pf_too_long
[20:44:20] [PASSED] pf_rejects_guc2pf_no_payload
[20:44:20] [PASSED] pf_fails_no_payload
[20:44:20] [PASSED] pf_fails_bad_origin
[20:44:20] [PASSED] pf_fails_bad_type
[20:44:20] [PASSED] pf_txn_reports_error
[20:44:20] [PASSED] pf_txn_sends_pf2guc
[20:44:20] [PASSED] pf_sends_pf2guc
[20:44:20] [SKIPPED] pf_loopback_nop
[20:44:20] [SKIPPED] pf_loopback_echo
[20:44:20] [SKIPPED] pf_loopback_fail
[20:44:20] [SKIPPED] pf_loopback_busy
[20:44:20] [SKIPPED] pf_loopback_retry
[20:44:20] ==================== [PASSED] pf_relay =====================
[20:44:20] ================== vf_relay (3 subtests) ===================
[20:44:20] [PASSED] vf_rejects_guc2vf_too_short
[20:44:20] [PASSED] vf_rejects_guc2vf_too_long
[20:44:20] [PASSED] vf_rejects_guc2vf_no_payload
[20:44:20] ==================== [PASSED] vf_relay =====================
[20:44:20] ================ pf_gt_config (4 subtests) =================
[20:44:20] [PASSED] fair_contexts_1vf
[20:44:20] [PASSED] fair_doorbells_1vf
[20:44:20] ====================== fair_contexts ======================
[20:44:20] [PASSED] 1 VF
[20:44:20] [PASSED] 2 VFs
[20:44:20] [PASSED] 3 VFs
[20:44:20] [PASSED] 4 VFs
[20:44:20] [PASSED] 5 VFs
[20:44:20] [PASSED] 6 VFs
[20:44:20] [PASSED] 7 VFs
[20:44:20] [PASSED] 8 VFs
[20:44:20] [PASSED] 9 VFs
[20:44:20] [PASSED] 10 VFs
[20:44:20] [PASSED] 11 VFs
[20:44:20] [PASSED] 12 VFs
[20:44:20] [PASSED] 13 VFs
[20:44:20] [PASSED] 14 VFs
[20:44:20] [PASSED] 15 VFs
[20:44:20] [PASSED] 16 VFs
[20:44:20] [PASSED] 17 VFs
[20:44:20] [PASSED] 18 VFs
[20:44:20] [PASSED] 19 VFs
[20:44:20] [PASSED] 20 VFs
[20:44:20] [PASSED] 21 VFs
[20:44:20] [PASSED] 22 VFs
[20:44:20] [PASSED] 23 VFs
[20:44:20] [PASSED] 24 VFs
[20:44:20] [PASSED] 25 VFs
[20:44:20] [PASSED] 26 VFs
[20:44:20] [PASSED] 27 VFs
[20:44:20] [PASSED] 28 VFs
[20:44:20] [PASSED] 29 VFs
[20:44:20] [PASSED] 30 VFs
[20:44:20] [PASSED] 31 VFs
[20:44:20] [PASSED] 32 VFs
[20:44:20] [PASSED] 33 VFs
[20:44:20] [PASSED] 34 VFs
[20:44:20] [PASSED] 35 VFs
[20:44:20] [PASSED] 36 VFs
[20:44:20] [PASSED] 37 VFs
[20:44:20] [PASSED] 38 VFs
[20:44:20] [PASSED] 39 VFs
[20:44:20] [PASSED] 40 VFs
[20:44:20] [PASSED] 41 VFs
[20:44:20] [PASSED] 42 VFs
[20:44:20] [PASSED] 43 VFs
[20:44:20] [PASSED] 44 VFs
[20:44:20] [PASSED] 45 VFs
[20:44:20] [PASSED] 46 VFs
[20:44:20] [PASSED] 47 VFs
[20:44:20] [PASSED] 48 VFs
[20:44:20] [PASSED] 49 VFs
[20:44:20] [PASSED] 50 VFs
[20:44:20] [PASSED] 51 VFs
[20:44:20] [PASSED] 52 VFs
[20:44:20] [PASSED] 53 VFs
[20:44:20] [PASSED] 54 VFs
[20:44:20] [PASSED] 55 VFs
[20:44:20] [PASSED] 56 VFs
[20:44:20] [PASSED] 57 VFs
[20:44:20] [PASSED] 58 VFs
[20:44:20] [PASSED] 59 VFs
[20:44:20] [PASSED] 60 VFs
[20:44:20] [PASSED] 61 VFs
[20:44:20] [PASSED] 62 VFs
[20:44:20] [PASSED] 63 VFs
[20:44:20] ================== [PASSED] fair_contexts ==================
[20:44:20] ===================== fair_doorbells ======================
[20:44:20] [PASSED] 1 VF
[20:44:20] [PASSED] 2 VFs
[20:44:20] [PASSED] 3 VFs
[20:44:20] [PASSED] 4 VFs
[20:44:20] [PASSED] 5 VFs
[20:44:20] [PASSED] 6 VFs
[20:44:20] [PASSED] 7 VFs
[20:44:20] [PASSED] 8 VFs
[20:44:20] [PASSED] 9 VFs
[20:44:20] [PASSED] 10 VFs
[20:44:20] [PASSED] 11 VFs
[20:44:20] [PASSED] 12 VFs
[20:44:20] [PASSED] 13 VFs
[20:44:20] [PASSED] 14 VFs
[20:44:20] [PASSED] 15 VFs
[20:44:20] [PASSED] 16 VFs
[20:44:20] [PASSED] 17 VFs
[20:44:20] [PASSED] 18 VFs
[20:44:20] [PASSED] 19 VFs
[20:44:20] [PASSED] 20 VFs
[20:44:20] [PASSED] 21 VFs
[20:44:20] [PASSED] 22 VFs
[20:44:20] [PASSED] 23 VFs
[20:44:20] [PASSED] 24 VFs
[20:44:20] [PASSED] 25 VFs
[20:44:20] [PASSED] 26 VFs
[20:44:20] [PASSED] 27 VFs
[20:44:20] [PASSED] 28 VFs
[20:44:20] [PASSED] 29 VFs
[20:44:20] [PASSED] 30 VFs
[20:44:20] [PASSED] 31 VFs
[20:44:20] [PASSED] 32 VFs
[20:44:20] [PASSED] 33 VFs
[20:44:20] [PASSED] 34 VFs
[20:44:20] [PASSED] 35 VFs
[20:44:20] [PASSED] 36 VFs
[20:44:20] [PASSED] 37 VFs
[20:44:20] [PASSED] 38 VFs
[20:44:20] [PASSED] 39 VFs
[20:44:20] [PASSED] 40 VFs
[20:44:20] [PASSED] 41 VFs
[20:44:20] [PASSED] 42 VFs
[20:44:20] [PASSED] 43 VFs
[20:44:20] [PASSED] 44 VFs
[20:44:20] [PASSED] 45 VFs
[20:44:20] [PASSED] 46 VFs
[20:44:20] [PASSED] 47 VFs
[20:44:20] [PASSED] 48 VFs
[20:44:20] [PASSED] 49 VFs
[20:44:20] [PASSED] 50 VFs
[20:44:20] [PASSED] 51 VFs
[20:44:20] [PASSED] 52 VFs
[20:44:20] [PASSED] 53 VFs
[20:44:20] [PASSED] 54 VFs
[20:44:20] [PASSED] 55 VFs
[20:44:20] [PASSED] 56 VFs
[20:44:20] [PASSED] 57 VFs
[20:44:20] [PASSED] 58 VFs
[20:44:20] [PASSED] 59 VFs
[20:44:20] [PASSED] 60 VFs
[20:44:20] [PASSED] 61 VFs
[20:44:20] [PASSED] 62 VFs
[20:44:20] [PASSED] 63 VFs
[20:44:20] ================= [PASSED] fair_doorbells ==================
[20:44:20] ================== [PASSED] pf_gt_config ===================
[20:44:20] ===================== lmtt (1 subtest) =====================
[20:44:20] ======================== test_ops =========================
[20:44:20] [PASSED] 2-level
[20:44:20] [PASSED] multi-level
[20:44:20] ==================== [PASSED] test_ops =====================
[20:44:20] ====================== [PASSED] lmtt =======================
[20:44:20] ================= pf_service (11 subtests) =================
[20:44:20] [PASSED] pf_negotiate_any
[20:44:20] [PASSED] pf_negotiate_base_match
[20:44:20] [PASSED] pf_negotiate_base_newer
[20:44:20] [PASSED] pf_negotiate_base_next
[20:44:20] [SKIPPED] pf_negotiate_base_older
[20:44:20] [PASSED] pf_negotiate_base_prev
[20:44:20] [PASSED] pf_negotiate_latest_match
[20:44:20] [PASSED] pf_negotiate_latest_newer
[20:44:20] [PASSED] pf_negotiate_latest_next
[20:44:20] [SKIPPED] pf_negotiate_latest_older
[20:44:20] [SKIPPED] pf_negotiate_latest_prev
[20:44:20] =================== [PASSED] pf_service ====================
[20:44:20] ================= xe_guc_g2g (2 subtests) ==================
[20:44:20] ============== xe_live_guc_g2g_kunit_default ==============
[20:44:20] ========= [SKIPPED] xe_live_guc_g2g_kunit_default ==========
[20:44:20] ============== xe_live_guc_g2g_kunit_allmem ===============
[20:44:20] ========== [SKIPPED] xe_live_guc_g2g_kunit_allmem ==========
[20:44:20] =================== [SKIPPED] xe_guc_g2g ===================
[20:44:20] =================== xe_mocs (2 subtests) ===================
[20:44:20] ================ xe_live_mocs_kernel_kunit ================
[20:44:20] =========== [SKIPPED] xe_live_mocs_kernel_kunit ============
[20:44:20] ================ xe_live_mocs_reset_kunit =================
[20:44:20] ============ [SKIPPED] xe_live_mocs_reset_kunit ============
[20:44:20] ==================== [SKIPPED] xe_mocs =====================
[20:44:20] ================= xe_migrate (2 subtests) ==================
[20:44:20] ================= xe_migrate_sanity_kunit =================
[20:44:20] ============ [SKIPPED] xe_migrate_sanity_kunit =============
[20:44:20] ================== xe_validate_ccs_kunit ==================
[20:44:20] ============= [SKIPPED] xe_validate_ccs_kunit ==============
[20:44:20] =================== [SKIPPED] xe_migrate ===================
[20:44:20] ================== xe_dma_buf (1 subtest) ==================
[20:44:20] ==================== xe_dma_buf_kunit =====================
[20:44:20] ================ [SKIPPED] xe_dma_buf_kunit ================
[20:44:20] =================== [SKIPPED] xe_dma_buf ===================
[20:44:20] ================= xe_bo_shrink (1 subtest) =================
[20:44:20] =================== xe_bo_shrink_kunit ====================
[20:44:20] =============== [SKIPPED] xe_bo_shrink_kunit ===============
[20:44:20] ================== [SKIPPED] xe_bo_shrink ==================
[20:44:20] ==================== xe_bo (2 subtests) ====================
[20:44:20] ================== xe_ccs_migrate_kunit ===================
[20:44:20] ============== [SKIPPED] xe_ccs_migrate_kunit ==============
[20:44:20] ==================== xe_bo_evict_kunit ====================
[20:44:20] =============== [SKIPPED] xe_bo_evict_kunit ================
[20:44:20] ===================== [SKIPPED] xe_bo ======================
[20:44:20] ==================== args (11 subtests) ====================
[20:44:20] [PASSED] count_args_test
[20:44:20] [PASSED] call_args_example
[20:44:20] [PASSED] call_args_test
[20:44:20] [PASSED] drop_first_arg_example
[20:44:20] [PASSED] drop_first_arg_test
[20:44:20] [PASSED] first_arg_example
[20:44:20] [PASSED] first_arg_test
[20:44:20] [PASSED] last_arg_example
[20:44:20] [PASSED] last_arg_test
[20:44:20] [PASSED] pick_arg_example
[20:44:20] [PASSED] sep_comma_example
[20:44:20] ====================== [PASSED] args =======================
[20:44:20] =================== xe_pci (3 subtests) ====================
[20:44:20] ==================== check_graphics_ip ====================
[20:44:20] [PASSED] 12.00 Xe_LP
[20:44:20] [PASSED] 12.10 Xe_LP+
[20:44:20] [PASSED] 12.55 Xe_HPG
[20:44:20] [PASSED] 12.60 Xe_HPC
[20:44:20] [PASSED] 12.70 Xe_LPG
[20:44:20] [PASSED] 12.71 Xe_LPG
[20:44:20] [PASSED] 12.74 Xe_LPG+
[20:44:20] [PASSED] 20.01 Xe2_HPG
[20:44:20] [PASSED] 20.02 Xe2_HPG
[20:44:20] [PASSED] 20.04 Xe2_LPG
[20:44:20] [PASSED] 30.00 Xe3_LPG
[20:44:20] [PASSED] 30.01 Xe3_LPG
[20:44:20] [PASSED] 30.03 Xe3_LPG
[20:44:20] [PASSED] 30.04 Xe3_LPG
[20:44:20] [PASSED] 30.05 Xe3_LPG
[20:44:20] [PASSED] 35.11 Xe3p_XPC
[20:44:20] ================ [PASSED] check_graphics_ip ================
[20:44:20] ===================== check_media_ip ======================
[20:44:20] [PASSED] 12.00 Xe_M
[20:44:20] [PASSED] 12.55 Xe_HPM
[20:44:20] [PASSED] 13.00 Xe_LPM+
[20:44:20] [PASSED] 13.01 Xe2_HPM
[20:44:20] [PASSED] 20.00 Xe2_LPM
[20:44:20] [PASSED] 30.00 Xe3_LPM
[20:44:20] [PASSED] 30.02 Xe3_LPM
[20:44:20] [PASSED] 35.00 Xe3p_LPM
[20:44:20] [PASSED] 35.03 Xe3p_HPM
[20:44:20] ================= [PASSED] check_media_ip ==================
[20:44:20] =================== check_platform_desc ===================
[20:44:20] [PASSED] 0x9A60 (TIGERLAKE)
[20:44:20] [PASSED] 0x9A68 (TIGERLAKE)
[20:44:20] [PASSED] 0x9A70 (TIGERLAKE)
[20:44:20] [PASSED] 0x9A40 (TIGERLAKE)
[20:44:20] [PASSED] 0x9A49 (TIGERLAKE)
[20:44:20] [PASSED] 0x9A59 (TIGERLAKE)
[20:44:20] [PASSED] 0x9A78 (TIGERLAKE)
[20:44:20] [PASSED] 0x9AC0 (TIGERLAKE)
[20:44:20] [PASSED] 0x9AC9 (TIGERLAKE)
[20:44:20] [PASSED] 0x9AD9 (TIGERLAKE)
[20:44:20] [PASSED] 0x9AF8 (TIGERLAKE)
[20:44:20] [PASSED] 0x4C80 (ROCKETLAKE)
[20:44:20] [PASSED] 0x4C8A (ROCKETLAKE)
[20:44:20] [PASSED] 0x4C8B (ROCKETLAKE)
[20:44:20] [PASSED] 0x4C8C (ROCKETLAKE)
[20:44:20] [PASSED] 0x4C90 (ROCKETLAKE)
[20:44:20] [PASSED] 0x4C9A (ROCKETLAKE)
[20:44:20] [PASSED] 0x4680 (ALDERLAKE_S)
[20:44:20] [PASSED] 0x4682 (ALDERLAKE_S)
[20:44:20] [PASSED] 0x4688 (ALDERLAKE_S)
[20:44:20] [PASSED] 0x468A (ALDERLAKE_S)
[20:44:20] [PASSED] 0x468B (ALDERLAKE_S)
[20:44:20] [PASSED] 0x4690 (ALDERLAKE_S)
[20:44:20] [PASSED] 0x4692 (ALDERLAKE_S)
[20:44:20] [PASSED] 0x4693 (ALDERLAKE_S)
[20:44:20] [PASSED] 0x46A0 (ALDERLAKE_P)
[20:44:20] [PASSED] 0x46A1 (ALDERLAKE_P)
[20:44:20] [PASSED] 0x46A2 (ALDERLAKE_P)
[20:44:20] [PASSED] 0x46A3 (ALDERLAKE_P)
[20:44:20] [PASSED] 0x46A6 (ALDERLAKE_P)
[20:44:20] [PASSED] 0x46A8 (ALDERLAKE_P)
[20:44:20] [PASSED] 0x46AA (ALDERLAKE_P)
[20:44:20] [PASSED] 0x462A (ALDERLAKE_P)
[20:44:20] [PASSED] 0x4626 (ALDERLAKE_P)
[20:44:20] [PASSED] 0x4628 (ALDERLAKE_P)
[20:44:20] [PASSED] 0x46B0 (ALDERLAKE_P)
[20:44:20] [PASSED] 0x46B1 (ALDERLAKE_P)
[20:44:20] [PASSED] 0x46B2 (ALDERLAKE_P)
[20:44:20] [PASSED] 0x46B3 (ALDERLAKE_P)
[20:44:20] [PASSED] 0x46C0 (ALDERLAKE_P)
[20:44:20] [PASSED] 0x46C1 (ALDERLAKE_P)
[20:44:20] [PASSED] 0x46C2 (ALDERLAKE_P)
[20:44:20] [PASSED] 0x46C3 (ALDERLAKE_P)
[20:44:20] [PASSED] 0x46D0 (ALDERLAKE_N)
[20:44:20] [PASSED] 0x46D1 (ALDERLAKE_N)
[20:44:20] [PASSED] 0x46D2 (ALDERLAKE_N)
[20:44:20] [PASSED] 0x46D3 (ALDERLAKE_N)
[20:44:20] [PASSED] 0x46D4 (ALDERLAKE_N)
[20:44:20] [PASSED] 0xA721 (ALDERLAKE_P)
[20:44:20] [PASSED] 0xA7A1 (ALDERLAKE_P)
[20:44:20] [PASSED] 0xA7A9 (ALDERLAKE_P)
[20:44:20] [PASSED] 0xA7AC (ALDERLAKE_P)
[20:44:20] [PASSED] 0xA7AD (ALDERLAKE_P)
[20:44:20] [PASSED] 0xA720 (ALDERLAKE_P)
[20:44:20] [PASSED] 0xA7A0 (ALDERLAKE_P)
[20:44:20] [PASSED] 0xA7A8 (ALDERLAKE_P)
[20:44:20] [PASSED] 0xA7AA (ALDERLAKE_P)
[20:44:20] [PASSED] 0xA7AB (ALDERLAKE_P)
[20:44:20] [PASSED] 0xA780 (ALDERLAKE_S)
[20:44:20] [PASSED] 0xA781 (ALDERLAKE_S)
[20:44:20] [PASSED] 0xA782 (ALDERLAKE_S)
[20:44:20] [PASSED] 0xA783 (ALDERLAKE_S)
[20:44:20] [PASSED] 0xA788 (ALDERLAKE_S)
[20:44:20] [PASSED] 0xA789 (ALDERLAKE_S)
[20:44:20] [PASSED] 0xA78A (ALDERLAKE_S)
[20:44:20] [PASSED] 0xA78B (ALDERLAKE_S)
[20:44:20] [PASSED] 0x4905 (DG1)
[20:44:20] [PASSED] 0x4906 (DG1)
[20:44:20] [PASSED] 0x4907 (DG1)
[20:44:20] [PASSED] 0x4908 (DG1)
[20:44:20] [PASSED] 0x4909 (DG1)
[20:44:20] [PASSED] 0x56C0 (DG2)
[20:44:20] [PASSED] 0x56C2 (DG2)
[20:44:20] [PASSED] 0x56C1 (DG2)
[20:44:20] [PASSED] 0x7D51 (METEORLAKE)
[20:44:20] [PASSED] 0x7DD1 (METEORLAKE)
[20:44:20] [PASSED] 0x7D41 (METEORLAKE)
[20:44:20] [PASSED] 0x7D67 (METEORLAKE)
[20:44:20] [PASSED] 0xB640 (METEORLAKE)
[20:44:20] [PASSED] 0x56A0 (DG2)
[20:44:20] [PASSED] 0x56A1 (DG2)
[20:44:20] [PASSED] 0x56A2 (DG2)
[20:44:20] [PASSED] 0x56BE (DG2)
[20:44:20] [PASSED] 0x56BF (DG2)
[20:44:20] [PASSED] 0x5690 (DG2)
[20:44:20] [PASSED] 0x5691 (DG2)
[20:44:20] [PASSED] 0x5692 (DG2)
[20:44:20] [PASSED] 0x56A5 (DG2)
[20:44:20] [PASSED] 0x56A6 (DG2)
[20:44:20] [PASSED] 0x56B0 (DG2)
[20:44:20] [PASSED] 0x56B1 (DG2)
[20:44:20] [PASSED] 0x56BA (DG2)
[20:44:20] [PASSED] 0x56BB (DG2)
[20:44:20] [PASSED] 0x56BC (DG2)
[20:44:20] [PASSED] 0x56BD (DG2)
[20:44:20] [PASSED] 0x5693 (DG2)
[20:44:20] [PASSED] 0x5694 (DG2)
[20:44:20] [PASSED] 0x5695 (DG2)
[20:44:20] [PASSED] 0x56A3 (DG2)
[20:44:20] [PASSED] 0x56A4 (DG2)
[20:44:20] [PASSED] 0x56B2 (DG2)
[20:44:20] [PASSED] 0x56B3 (DG2)
[20:44:20] [PASSED] 0x5696 (DG2)
[20:44:20] [PASSED] 0x5697 (DG2)
[20:44:20] [PASSED] 0xB69 (PVC)
[20:44:20] [PASSED] 0xB6E (PVC)
[20:44:20] [PASSED] 0xBD4 (PVC)
[20:44:20] [PASSED] 0xBD5 (PVC)
[20:44:20] [PASSED] 0xBD6 (PVC)
[20:44:20] [PASSED] 0xBD7 (PVC)
[20:44:20] [PASSED] 0xBD8 (PVC)
[20:44:20] [PASSED] 0xBD9 (PVC)
[20:44:20] [PASSED] 0xBDA (PVC)
[20:44:20] [PASSED] 0xBDB (PVC)
[20:44:20] [PASSED] 0xBE0 (PVC)
[20:44:20] [PASSED] 0xBE1 (PVC)
[20:44:20] [PASSED] 0xBE5 (PVC)
[20:44:20] [PASSED] 0x7D40 (METEORLAKE)
[20:44:20] [PASSED] 0x7D45 (METEORLAKE)
[20:44:20] [PASSED] 0x7D55 (METEORLAKE)
[20:44:20] [PASSED] 0x7D60 (METEORLAKE)
[20:44:20] [PASSED] 0x7DD5 (METEORLAKE)
[20:44:20] [PASSED] 0x6420 (LUNARLAKE)
[20:44:20] [PASSED] 0x64A0 (LUNARLAKE)
[20:44:20] [PASSED] 0x64B0 (LUNARLAKE)
[20:44:20] [PASSED] 0xE202 (BATTLEMAGE)
[20:44:20] [PASSED] 0xE209 (BATTLEMAGE)
[20:44:20] [PASSED] 0xE20B (BATTLEMAGE)
[20:44:20] [PASSED] 0xE20C (BATTLEMAGE)
[20:44:20] [PASSED] 0xE20D (BATTLEMAGE)
[20:44:20] [PASSED] 0xE210 (BATTLEMAGE)
[20:44:20] [PASSED] 0xE211 (BATTLEMAGE)
[20:44:20] [PASSED] 0xE212 (BATTLEMAGE)
[20:44:20] [PASSED] 0xE216 (BATTLEMAGE)
[20:44:20] [PASSED] 0xE220 (BATTLEMAGE)
[20:44:20] [PASSED] 0xE221 (BATTLEMAGE)
[20:44:20] [PASSED] 0xE222 (BATTLEMAGE)
[20:44:20] [PASSED] 0xE223 (BATTLEMAGE)
[20:44:20] [PASSED] 0xB080 (PANTHERLAKE)
[20:44:20] [PASSED] 0xB081 (PANTHERLAKE)
[20:44:20] [PASSED] 0xB082 (PANTHERLAKE)
[20:44:20] [PASSED] 0xB083 (PANTHERLAKE)
[20:44:20] [PASSED] 0xB084 (PANTHERLAKE)
[20:44:20] [PASSED] 0xB085 (PANTHERLAKE)
[20:44:20] [PASSED] 0xB086 (PANTHERLAKE)
[20:44:20] [PASSED] 0xB087 (PANTHERLAKE)
[20:44:20] [PASSED] 0xB08F (PANTHERLAKE)
[20:44:20] [PASSED] 0xB090 (PANTHERLAKE)
[20:44:20] [PASSED] 0xB0A0 (PANTHERLAKE)
[20:44:20] [PASSED] 0xB0B0 (PANTHERLAKE)
[20:44:20] [PASSED] 0xD740 (NOVALAKE_S)
[20:44:20] [PASSED] 0xD741 (NOVALAKE_S)
[20:44:20] [PASSED] 0xD742 (NOVALAKE_S)
[20:44:20] [PASSED] 0xD743 (NOVALAKE_S)
[20:44:20] [PASSED] 0xD744 (NOVALAKE_S)
[20:44:20] [PASSED] 0xD745 (NOVALAKE_S)
[20:44:20] [PASSED] 0x674C (CRESCENTISLAND)
[20:44:20] [PASSED] 0xFD80 (PANTHERLAKE)
[20:44:20] [PASSED] 0xFD81 (PANTHERLAKE)
[20:44:20] =============== [PASSED] check_platform_desc ===============
[20:44:20] ===================== [PASSED] xe_pci ======================
[20:44:20] =================== xe_rtp (2 subtests) ====================
[20:44:20] =============== xe_rtp_process_to_sr_tests ================
[20:44:20] [PASSED] coalesce-same-reg
[20:44:20] [PASSED] no-match-no-add
[20:44:20] [PASSED] match-or
[20:44:20] [PASSED] match-or-xfail
[20:44:20] [PASSED] no-match-no-add-multiple-rules
[20:44:20] [PASSED] two-regs-two-entries
[20:44:20] [PASSED] clr-one-set-other
[20:44:20] [PASSED] set-field
[20:44:20] [PASSED] conflict-duplicate
[20:44:20] [PASSED] conflict-not-disjoint
[20:44:20] [PASSED] conflict-reg-type
[20:44:20] =========== [PASSED] xe_rtp_process_to_sr_tests ============
[20:44:20] ================== xe_rtp_process_tests ===================
[20:44:20] [PASSED] active1
[20:44:20] [PASSED] active2
[20:44:20] [PASSED] active-inactive
[20:44:20] [PASSED] inactive-active
[20:44:20] [PASSED] inactive-1st_or_active-inactive
[20:44:20] [PASSED] inactive-2nd_or_active-inactive
[20:44:20] [PASSED] inactive-last_or_active-inactive
[20:44:20] [PASSED] inactive-no_or_active-inactive
[20:44:20] ============== [PASSED] xe_rtp_process_tests ===============
[20:44:20] ===================== [PASSED] xe_rtp ======================
[20:44:20] ==================== xe_wa (1 subtest) =====================
[20:44:20] ======================== xe_wa_gt =========================
[20:44:20] [PASSED] TIGERLAKE B0
[20:44:20] [PASSED] DG1 A0
[20:44:20] [PASSED] DG1 B0
[20:44:20] [PASSED] ALDERLAKE_S A0
[20:44:20] [PASSED] ALDERLAKE_S B0
[20:44:20] [PASSED] ALDERLAKE_S C0
[20:44:20] [PASSED] ALDERLAKE_S D0
[20:44:20] [PASSED] ALDERLAKE_P A0
[20:44:20] [PASSED] ALDERLAKE_P B0
[20:44:20] [PASSED] ALDERLAKE_P C0
[20:44:20] [PASSED] ALDERLAKE_S RPLS D0
[20:44:20] [PASSED] ALDERLAKE_P RPLU E0
[20:44:20] [PASSED] DG2 G10 C0
[20:44:20] [PASSED] DG2 G11 B1
[20:44:20] [PASSED] DG2 G12 A1
[20:44:20] [PASSED] METEORLAKE 12.70(Xe_LPG) A0 13.00(Xe_LPM+) A0
[20:44:20] [PASSED] METEORLAKE 12.71(Xe_LPG) A0 13.00(Xe_LPM+) A0
[20:44:20] [PASSED] METEORLAKE 12.74(Xe_LPG+) A0 13.00(Xe_LPM+) A0
[20:44:20] [PASSED] LUNARLAKE 20.04(Xe2_LPG) A0 20.00(Xe2_LPM) A0
[20:44:20] [PASSED] LUNARLAKE 20.04(Xe2_LPG) B0 20.00(Xe2_LPM) A0
[20:44:20] [PASSED] BATTLEMAGE 20.01(Xe2_HPG) A0 13.01(Xe2_HPM) A1
[20:44:20] [PASSED] PANTHERLAKE 30.00(Xe3_LPG) A0 30.00(Xe3_LPM) A0
[20:44:20] ==================== [PASSED] xe_wa_gt =====================
[20:44:20] ====================== [PASSED] xe_wa ======================
[20:44:20] ============================================================
[20:44:20] Testing complete. Ran 446 tests: passed: 428, skipped: 18
[20:44:21] Elapsed time: 34.935s total, 4.181s configuring, 30.287s building, 0.422s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/tests/.kunitconfig
[20:44:21] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[20:44:22] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[20:44:47] Starting KUnit Kernel (1/1)...
[20:44:47] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[20:44:47] ============ drm_test_pick_cmdline (2 subtests) ============
[20:44:47] [PASSED] drm_test_pick_cmdline_res_1920_1080_60
[20:44:47] =============== drm_test_pick_cmdline_named ===============
[20:44:47] [PASSED] NTSC
[20:44:47] [PASSED] NTSC-J
[20:44:47] [PASSED] PAL
[20:44:47] [PASSED] PAL-M
[20:44:47] =========== [PASSED] drm_test_pick_cmdline_named ===========
[20:44:47] ============== [PASSED] drm_test_pick_cmdline ==============
[20:44:47] == drm_test_atomic_get_connector_for_encoder (1 subtest) ===
[20:44:47] [PASSED] drm_test_drm_atomic_get_connector_for_encoder
[20:44:47] ==== [PASSED] drm_test_atomic_get_connector_for_encoder ====
[20:44:47] =========== drm_validate_clone_mode (2 subtests) ===========
[20:44:47] ============== drm_test_check_in_clone_mode ===============
[20:44:47] [PASSED] in_clone_mode
[20:44:47] [PASSED] not_in_clone_mode
[20:44:47] ========== [PASSED] drm_test_check_in_clone_mode ===========
[20:44:47] =============== drm_test_check_valid_clones ===============
[20:44:47] [PASSED] not_in_clone_mode
[20:44:47] [PASSED] valid_clone
[20:44:47] [PASSED] invalid_clone
[20:44:47] =========== [PASSED] drm_test_check_valid_clones ===========
[20:44:47] ============= [PASSED] drm_validate_clone_mode =============
[20:44:47] ============= drm_validate_modeset (1 subtest) =============
[20:44:47] [PASSED] drm_test_check_connector_changed_modeset
[20:44:47] ============== [PASSED] drm_validate_modeset ===============
[20:44:47] ====== drm_test_bridge_get_current_state (2 subtests) ======
[20:44:47] [PASSED] drm_test_drm_bridge_get_current_state_atomic
[20:44:47] [PASSED] drm_test_drm_bridge_get_current_state_legacy
[20:44:47] ======== [PASSED] drm_test_bridge_get_current_state ========
[20:44:47] ====== drm_test_bridge_helper_reset_crtc (3 subtests) ======
[20:44:47] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic
[20:44:47] [PASSED] drm_test_drm_bridge_helper_reset_crtc_atomic_disabled
[20:44:47] [PASSED] drm_test_drm_bridge_helper_reset_crtc_legacy
[20:44:47] ======== [PASSED] drm_test_bridge_helper_reset_crtc ========
[20:44:47] ============== drm_bridge_alloc (2 subtests) ===============
[20:44:47] [PASSED] drm_test_drm_bridge_alloc_basic
[20:44:47] [PASSED] drm_test_drm_bridge_alloc_get_put
[20:44:47] ================ [PASSED] drm_bridge_alloc =================
[20:44:47] ================== drm_buddy (8 subtests) ==================
[20:44:47] [PASSED] drm_test_buddy_alloc_limit
[20:44:47] [PASSED] drm_test_buddy_alloc_optimistic
[20:44:47] [PASSED] drm_test_buddy_alloc_pessimistic
[20:44:47] [PASSED] drm_test_buddy_alloc_pathological
[20:44:47] [PASSED] drm_test_buddy_alloc_contiguous
[20:44:47] [PASSED] drm_test_buddy_alloc_clear
[20:44:47] [PASSED] drm_test_buddy_alloc_range_bias
[20:44:47] [PASSED] drm_test_buddy_fragmentation_performance
[20:44:47] ==================== [PASSED] drm_buddy ====================
[20:44:47] ============= drm_cmdline_parser (40 subtests) =============
[20:44:47] [PASSED] drm_test_cmdline_force_d_only
[20:44:47] [PASSED] drm_test_cmdline_force_D_only_dvi
[20:44:47] [PASSED] drm_test_cmdline_force_D_only_hdmi
[20:44:47] [PASSED] drm_test_cmdline_force_D_only_not_digital
[20:44:47] [PASSED] drm_test_cmdline_force_e_only
[20:44:47] [PASSED] drm_test_cmdline_res
[20:44:47] [PASSED] drm_test_cmdline_res_vesa
[20:44:47] [PASSED] drm_test_cmdline_res_vesa_rblank
[20:44:47] [PASSED] drm_test_cmdline_res_rblank
[20:44:47] [PASSED] drm_test_cmdline_res_bpp
[20:44:47] [PASSED] drm_test_cmdline_res_refresh
[20:44:47] [PASSED] drm_test_cmdline_res_bpp_refresh
[20:44:47] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced
[20:44:47] [PASSED] drm_test_cmdline_res_bpp_refresh_margins
[20:44:47] [PASSED] drm_test_cmdline_res_bpp_refresh_force_off
[20:44:47] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on
[20:44:47] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_analog
[20:44:47] [PASSED] drm_test_cmdline_res_bpp_refresh_force_on_digital
[20:44:47] [PASSED] drm_test_cmdline_res_bpp_refresh_interlaced_margins_force_on
[20:44:47] [PASSED] drm_test_cmdline_res_margins_force_on
[20:44:47] [PASSED] drm_test_cmdline_res_vesa_margins
[20:44:47] [PASSED] drm_test_cmdline_name
[20:44:47] [PASSED] drm_test_cmdline_name_bpp
[20:44:47] [PASSED] drm_test_cmdline_name_option
[20:44:47] [PASSED] drm_test_cmdline_name_bpp_option
[20:44:47] [PASSED] drm_test_cmdline_rotate_0
[20:44:47] [PASSED] drm_test_cmdline_rotate_90
[20:44:47] [PASSED] drm_test_cmdline_rotate_180
[20:44:47] [PASSED] drm_test_cmdline_rotate_270
[20:44:47] [PASSED] drm_test_cmdline_hmirror
[20:44:47] [PASSED] drm_test_cmdline_vmirror
[20:44:47] [PASSED] drm_test_cmdline_margin_options
[20:44:47] [PASSED] drm_test_cmdline_multiple_options
[20:44:47] [PASSED] drm_test_cmdline_bpp_extra_and_option
[20:44:47] [PASSED] drm_test_cmdline_extra_and_option
[20:44:47] [PASSED] drm_test_cmdline_freestanding_options
[20:44:47] [PASSED] drm_test_cmdline_freestanding_force_e_and_options
[20:44:47] [PASSED] drm_test_cmdline_panel_orientation
[20:44:47] ================ drm_test_cmdline_invalid =================
[20:44:47] [PASSED] margin_only
[20:44:47] [PASSED] interlace_only
[20:44:47] [PASSED] res_missing_x
[20:44:47] [PASSED] res_missing_y
[20:44:47] [PASSED] res_bad_y
[20:44:47] [PASSED] res_missing_y_bpp
[20:44:47] [PASSED] res_bad_bpp
[20:44:47] [PASSED] res_bad_refresh
[20:44:47] [PASSED] res_bpp_refresh_force_on_off
[20:44:47] [PASSED] res_invalid_mode
[20:44:47] [PASSED] res_bpp_wrong_place_mode
[20:44:47] [PASSED] name_bpp_refresh
[20:44:47] [PASSED] name_refresh
[20:44:47] [PASSED] name_refresh_wrong_mode
[20:44:47] [PASSED] name_refresh_invalid_mode
[20:44:47] [PASSED] rotate_multiple
[20:44:47] [PASSED] rotate_invalid_val
[20:44:47] [PASSED] rotate_truncated
[20:44:47] [PASSED] invalid_option
[20:44:47] [PASSED] invalid_tv_option
[20:44:47] [PASSED] truncated_tv_option
[20:44:47] ============ [PASSED] drm_test_cmdline_invalid =============
[20:44:47] =============== drm_test_cmdline_tv_options ===============
[20:44:47] [PASSED] NTSC
[20:44:47] [PASSED] NTSC_443
[20:44:47] [PASSED] NTSC_J
[20:44:47] [PASSED] PAL
[20:44:47] [PASSED] PAL_M
[20:44:47] [PASSED] PAL_N
[20:44:47] [PASSED] SECAM
[20:44:47] [PASSED] MONO_525
[20:44:47] [PASSED] MONO_625
[20:44:47] =========== [PASSED] drm_test_cmdline_tv_options ===========
[20:44:47] =============== [PASSED] drm_cmdline_parser ================
[20:44:47] ========== drmm_connector_hdmi_init (20 subtests) ==========
[20:44:47] [PASSED] drm_test_connector_hdmi_init_valid
[20:44:47] [PASSED] drm_test_connector_hdmi_init_bpc_8
[20:44:47] [PASSED] drm_test_connector_hdmi_init_bpc_10
[20:44:47] [PASSED] drm_test_connector_hdmi_init_bpc_12
[20:44:47] [PASSED] drm_test_connector_hdmi_init_bpc_invalid
[20:44:47] [PASSED] drm_test_connector_hdmi_init_bpc_null
[20:44:47] [PASSED] drm_test_connector_hdmi_init_formats_empty
[20:44:47] [PASSED] drm_test_connector_hdmi_init_formats_no_rgb
[20:44:47] === drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[20:44:47] [PASSED] supported_formats=0x9 yuv420_allowed=1
[20:44:47] [PASSED] supported_formats=0x9 yuv420_allowed=0
[20:44:47] [PASSED] supported_formats=0x3 yuv420_allowed=1
[20:44:47] [PASSED] supported_formats=0x3 yuv420_allowed=0
[20:44:47] === [PASSED] drm_test_connector_hdmi_init_formats_yuv420_allowed ===
[20:44:47] [PASSED] drm_test_connector_hdmi_init_null_ddc
[20:44:47] [PASSED] drm_test_connector_hdmi_init_null_product
[20:44:47] [PASSED] drm_test_connector_hdmi_init_null_vendor
[20:44:47] [PASSED] drm_test_connector_hdmi_init_product_length_exact
[20:44:47] [PASSED] drm_test_connector_hdmi_init_product_length_too_long
[20:44:47] [PASSED] drm_test_connector_hdmi_init_product_valid
[20:44:47] [PASSED] drm_test_connector_hdmi_init_vendor_length_exact
[20:44:47] [PASSED] drm_test_connector_hdmi_init_vendor_length_too_long
[20:44:47] [PASSED] drm_test_connector_hdmi_init_vendor_valid
[20:44:47] ========= drm_test_connector_hdmi_init_type_valid =========
[20:44:47] [PASSED] HDMI-A
[20:44:47] [PASSED] HDMI-B
[20:44:47] ===== [PASSED] drm_test_connector_hdmi_init_type_valid =====
[20:44:47] ======== drm_test_connector_hdmi_init_type_invalid ========
[20:44:47] [PASSED] Unknown
[20:44:47] [PASSED] VGA
[20:44:47] [PASSED] DVI-I
[20:44:47] [PASSED] DVI-D
[20:44:47] [PASSED] DVI-A
[20:44:47] [PASSED] Composite
[20:44:47] [PASSED] SVIDEO
[20:44:47] [PASSED] LVDS
[20:44:47] [PASSED] Component
[20:44:47] [PASSED] DIN
[20:44:47] [PASSED] DP
[20:44:47] [PASSED] TV
[20:44:47] [PASSED] eDP
[20:44:47] [PASSED] Virtual
[20:44:47] [PASSED] DSI
[20:44:47] [PASSED] DPI
[20:44:47] [PASSED] Writeback
[20:44:47] [PASSED] SPI
[20:44:47] [PASSED] USB
[20:44:47] ==== [PASSED] drm_test_connector_hdmi_init_type_invalid ====
[20:44:47] ============ [PASSED] drmm_connector_hdmi_init =============
[20:44:47] ============= drmm_connector_init (3 subtests) =============
[20:44:47] [PASSED] drm_test_drmm_connector_init
[20:44:47] [PASSED] drm_test_drmm_connector_init_null_ddc
[20:44:47] ========= drm_test_drmm_connector_init_type_valid =========
[20:44:47] [PASSED] Unknown
[20:44:47] [PASSED] VGA
[20:44:47] [PASSED] DVI-I
[20:44:47] [PASSED] DVI-D
[20:44:47] [PASSED] DVI-A
[20:44:47] [PASSED] Composite
[20:44:47] [PASSED] SVIDEO
[20:44:47] [PASSED] LVDS
[20:44:47] [PASSED] Component
[20:44:47] [PASSED] DIN
[20:44:47] [PASSED] DP
[20:44:47] [PASSED] HDMI-A
[20:44:47] [PASSED] HDMI-B
[20:44:47] [PASSED] TV
[20:44:47] [PASSED] eDP
[20:44:47] [PASSED] Virtual
[20:44:47] [PASSED] DSI
[20:44:47] [PASSED] DPI
[20:44:47] [PASSED] Writeback
[20:44:47] [PASSED] SPI
[20:44:47] [PASSED] USB
[20:44:47] ===== [PASSED] drm_test_drmm_connector_init_type_valid =====
[20:44:47] =============== [PASSED] drmm_connector_init ===============
[20:44:47] ========= drm_connector_dynamic_init (6 subtests) ==========
[20:44:47] [PASSED] drm_test_drm_connector_dynamic_init
[20:44:47] [PASSED] drm_test_drm_connector_dynamic_init_null_ddc
[20:44:47] [PASSED] drm_test_drm_connector_dynamic_init_not_added
[20:44:47] [PASSED] drm_test_drm_connector_dynamic_init_properties
[20:44:47] ===== drm_test_drm_connector_dynamic_init_type_valid ======
[20:44:47] [PASSED] Unknown
[20:44:47] [PASSED] VGA
[20:44:47] [PASSED] DVI-I
[20:44:47] [PASSED] DVI-D
[20:44:47] [PASSED] DVI-A
[20:44:47] [PASSED] Composite
[20:44:47] [PASSED] SVIDEO
[20:44:47] [PASSED] LVDS
[20:44:47] [PASSED] Component
[20:44:47] [PASSED] DIN
[20:44:47] [PASSED] DP
[20:44:47] [PASSED] HDMI-A
[20:44:47] [PASSED] HDMI-B
[20:44:47] [PASSED] TV
[20:44:47] [PASSED] eDP
[20:44:47] [PASSED] Virtual
[20:44:47] [PASSED] DSI
[20:44:47] [PASSED] DPI
[20:44:47] [PASSED] Writeback
[20:44:47] [PASSED] SPI
[20:44:47] [PASSED] USB
[20:44:47] = [PASSED] drm_test_drm_connector_dynamic_init_type_valid ==
[20:44:47] ======== drm_test_drm_connector_dynamic_init_name =========
[20:44:47] [PASSED] Unknown
[20:44:47] [PASSED] VGA
[20:44:47] [PASSED] DVI-I
[20:44:47] [PASSED] DVI-D
[20:44:47] [PASSED] DVI-A
[20:44:47] [PASSED] Composite
[20:44:47] [PASSED] SVIDEO
[20:44:47] [PASSED] LVDS
[20:44:47] [PASSED] Component
[20:44:47] [PASSED] DIN
[20:44:47] [PASSED] DP
[20:44:47] [PASSED] HDMI-A
[20:44:47] [PASSED] HDMI-B
[20:44:47] [PASSED] TV
[20:44:47] [PASSED] eDP
[20:44:47] [PASSED] Virtual
[20:44:47] [PASSED] DSI
[20:44:47] [PASSED] DPI
[20:44:47] [PASSED] Writeback
[20:44:47] [PASSED] SPI
[20:44:47] [PASSED] USB
[20:44:47] ==== [PASSED] drm_test_drm_connector_dynamic_init_name =====
[20:44:47] =========== [PASSED] drm_connector_dynamic_init ============
[20:44:47] ==== drm_connector_dynamic_register_early (4 subtests) =====
[20:44:47] [PASSED] drm_test_drm_connector_dynamic_register_early_on_list
[20:44:47] [PASSED] drm_test_drm_connector_dynamic_register_early_defer
[20:44:47] [PASSED] drm_test_drm_connector_dynamic_register_early_no_init
[20:44:47] [PASSED] drm_test_drm_connector_dynamic_register_early_no_mode_object
[20:44:47] ====== [PASSED] drm_connector_dynamic_register_early =======
[20:44:47] ======= drm_connector_dynamic_register (7 subtests) ========
[20:44:47] [PASSED] drm_test_drm_connector_dynamic_register_on_list
[20:44:47] [PASSED] drm_test_drm_connector_dynamic_register_no_defer
[20:44:47] [PASSED] drm_test_drm_connector_dynamic_register_no_init
[20:44:47] [PASSED] drm_test_drm_connector_dynamic_register_mode_object
[20:44:47] [PASSED] drm_test_drm_connector_dynamic_register_sysfs
[20:44:47] [PASSED] drm_test_drm_connector_dynamic_register_sysfs_name
[20:44:47] [PASSED] drm_test_drm_connector_dynamic_register_debugfs
[20:44:47] ========= [PASSED] drm_connector_dynamic_register ==========
[20:44:47] = drm_connector_attach_broadcast_rgb_property (2 subtests) =
[20:44:47] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property
[20:44:47] [PASSED] drm_test_drm_connector_attach_broadcast_rgb_property_hdmi_connector
[20:44:47] === [PASSED] drm_connector_attach_broadcast_rgb_property ===
[20:44:47] ========== drm_get_tv_mode_from_name (2 subtests) ==========
[20:44:47] ========== drm_test_get_tv_mode_from_name_valid ===========
[20:44:47] [PASSED] NTSC
[20:44:47] [PASSED] NTSC-443
[20:44:47] [PASSED] NTSC-J
[20:44:47] [PASSED] PAL
[20:44:47] [PASSED] PAL-M
[20:44:47] [PASSED] PAL-N
[20:44:47] [PASSED] SECAM
[20:44:47] [PASSED] Mono
[20:44:47] ====== [PASSED] drm_test_get_tv_mode_from_name_valid =======
[20:44:47] [PASSED] drm_test_get_tv_mode_from_name_truncated
[20:44:47] ============ [PASSED] drm_get_tv_mode_from_name ============
[20:44:47] = drm_test_connector_hdmi_compute_mode_clock (12 subtests) =
[20:44:47] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb
[20:44:47] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc
[20:44:47] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1
[20:44:47] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc
[20:44:47] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1
[20:44:47] [PASSED] drm_test_drm_hdmi_compute_mode_clock_rgb_double
[20:44:47] = drm_test_connector_hdmi_compute_mode_clock_yuv420_valid =
[20:44:47] [PASSED] VIC 96
[20:44:47] [PASSED] VIC 97
[20:44:47] [PASSED] VIC 101
[20:44:47] [PASSED] VIC 102
[20:44:47] [PASSED] VIC 106
[20:44:47] [PASSED] VIC 107
[20:44:47] === [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_valid ===
[20:44:47] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc
[20:44:47] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc
[20:44:47] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc
[20:44:47] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc
[20:44:47] [PASSED] drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc
[20:44:47] === [PASSED] drm_test_connector_hdmi_compute_mode_clock ====
[20:44:47] == drm_hdmi_connector_get_broadcast_rgb_name (2 subtests) ==
[20:44:47] === drm_test_drm_hdmi_connector_get_broadcast_rgb_name ====
[20:44:47] [PASSED] Automatic
[20:44:47] [PASSED] Full
[20:44:47] [PASSED] Limited 16:235
[20:44:47] === [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name ===
[20:44:47] [PASSED] drm_test_drm_hdmi_connector_get_broadcast_rgb_name_invalid
[20:44:47] ==== [PASSED] drm_hdmi_connector_get_broadcast_rgb_name ====
[20:44:47] == drm_hdmi_connector_get_output_format_name (2 subtests) ==
[20:44:47] === drm_test_drm_hdmi_connector_get_output_format_name ====
[20:44:47] [PASSED] RGB
[20:44:47] [PASSED] YUV 4:2:0
[20:44:47] [PASSED] YUV 4:2:2
[20:44:47] [PASSED] YUV 4:4:4
[20:44:47] === [PASSED] drm_test_drm_hdmi_connector_get_output_format_name ===
[20:44:47] [PASSED] drm_test_drm_hdmi_connector_get_output_format_name_invalid
[20:44:47] ==== [PASSED] drm_hdmi_connector_get_output_format_name ====
[20:44:47] ============= drm_damage_helper (21 subtests) ==============
[20:44:47] [PASSED] drm_test_damage_iter_no_damage
[20:44:47] [PASSED] drm_test_damage_iter_no_damage_fractional_src
[20:44:47] [PASSED] drm_test_damage_iter_no_damage_src_moved
[20:44:47] [PASSED] drm_test_damage_iter_no_damage_fractional_src_moved
[20:44:47] [PASSED] drm_test_damage_iter_no_damage_not_visible
[20:44:47] [PASSED] drm_test_damage_iter_no_damage_no_crtc
[20:44:47] [PASSED] drm_test_damage_iter_no_damage_no_fb
[20:44:47] [PASSED] drm_test_damage_iter_simple_damage
[20:44:47] [PASSED] drm_test_damage_iter_single_damage
[20:44:47] [PASSED] drm_test_damage_iter_single_damage_intersect_src
[20:44:47] [PASSED] drm_test_damage_iter_single_damage_outside_src
[20:44:47] [PASSED] drm_test_damage_iter_single_damage_fractional_src
[20:44:47] [PASSED] drm_test_damage_iter_single_damage_intersect_fractional_src
[20:44:47] [PASSED] drm_test_damage_iter_single_damage_outside_fractional_src
[20:44:47] [PASSED] drm_test_damage_iter_single_damage_src_moved
[20:44:47] [PASSED] drm_test_damage_iter_single_damage_fractional_src_moved
[20:44:47] [PASSED] drm_test_damage_iter_damage
[20:44:47] [PASSED] drm_test_damage_iter_damage_one_intersect
[20:44:47] [PASSED] drm_test_damage_iter_damage_one_outside
[20:44:47] [PASSED] drm_test_damage_iter_damage_src_moved
[20:44:47] [PASSED] drm_test_damage_iter_damage_not_visible
[20:44:47] ================ [PASSED] drm_damage_helper ================
[20:44:47] ============== drm_dp_mst_helper (3 subtests) ==============
[20:44:47] ============== drm_test_dp_mst_calc_pbn_mode ==============
[20:44:47] [PASSED] Clock 154000 BPP 30 DSC disabled
[20:44:47] [PASSED] Clock 234000 BPP 30 DSC disabled
[20:44:47] [PASSED] Clock 297000 BPP 24 DSC disabled
[20:44:47] [PASSED] Clock 332880 BPP 24 DSC enabled
[20:44:47] [PASSED] Clock 324540 BPP 24 DSC enabled
[20:44:47] ========== [PASSED] drm_test_dp_mst_calc_pbn_mode ==========
[20:44:47] ============== drm_test_dp_mst_calc_pbn_div ===============
[20:44:47] [PASSED] Link rate 2000000 lane count 4
[20:44:47] [PASSED] Link rate 2000000 lane count 2
[20:44:47] [PASSED] Link rate 2000000 lane count 1
[20:44:47] [PASSED] Link rate 1350000 lane count 4
[20:44:47] [PASSED] Link rate 1350000 lane count 2
[20:44:47] [PASSED] Link rate 1350000 lane count 1
[20:44:47] [PASSED] Link rate 1000000 lane count 4
[20:44:47] [PASSED] Link rate 1000000 lane count 2
[20:44:47] [PASSED] Link rate 1000000 lane count 1
[20:44:47] [PASSED] Link rate 810000 lane count 4
[20:44:47] [PASSED] Link rate 810000 lane count 2
[20:44:47] [PASSED] Link rate 810000 lane count 1
[20:44:47] [PASSED] Link rate 540000 lane count 4
[20:44:47] [PASSED] Link rate 540000 lane count 2
[20:44:47] [PASSED] Link rate 540000 lane count 1
[20:44:47] [PASSED] Link rate 270000 lane count 4
[20:44:47] [PASSED] Link rate 270000 lane count 2
[20:44:47] [PASSED] Link rate 270000 lane count 1
[20:44:47] [PASSED] Link rate 162000 lane count 4
[20:44:47] [PASSED] Link rate 162000 lane count 2
[20:44:47] [PASSED] Link rate 162000 lane count 1
[20:44:47] ========== [PASSED] drm_test_dp_mst_calc_pbn_div ===========
[20:44:47] ========= drm_test_dp_mst_sideband_msg_req_decode =========
[20:44:47] [PASSED] DP_ENUM_PATH_RESOURCES with port number
[20:44:47] [PASSED] DP_POWER_UP_PHY with port number
[20:44:47] [PASSED] DP_POWER_DOWN_PHY with port number
[20:44:47] [PASSED] DP_ALLOCATE_PAYLOAD with SDP stream sinks
[20:44:47] [PASSED] DP_ALLOCATE_PAYLOAD with port number
[20:44:47] [PASSED] DP_ALLOCATE_PAYLOAD with VCPI
[20:44:47] [PASSED] DP_ALLOCATE_PAYLOAD with PBN
[20:44:47] [PASSED] DP_QUERY_PAYLOAD with port number
[20:44:47] [PASSED] DP_QUERY_PAYLOAD with VCPI
[20:44:47] [PASSED] DP_REMOTE_DPCD_READ with port number
[20:44:47] [PASSED] DP_REMOTE_DPCD_READ with DPCD address
[20:44:47] [PASSED] DP_REMOTE_DPCD_READ with max number of bytes
[20:44:47] [PASSED] DP_REMOTE_DPCD_WRITE with port number
[20:44:47] [PASSED] DP_REMOTE_DPCD_WRITE with DPCD address
[20:44:47] [PASSED] DP_REMOTE_DPCD_WRITE with data array
[20:44:47] [PASSED] DP_REMOTE_I2C_READ with port number
[20:44:47] [PASSED] DP_REMOTE_I2C_READ with I2C device ID
[20:44:47] [PASSED] DP_REMOTE_I2C_READ with transactions array
[20:44:47] [PASSED] DP_REMOTE_I2C_WRITE with port number
[20:44:47] [PASSED] DP_REMOTE_I2C_WRITE with I2C device ID
[20:44:47] [PASSED] DP_REMOTE_I2C_WRITE with data array
[20:44:47] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream ID
[20:44:47] [PASSED] DP_QUERY_STREAM_ENC_STATUS with client ID
[20:44:47] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream event
[20:44:47] [PASSED] DP_QUERY_STREAM_ENC_STATUS with valid stream event
[20:44:47] [PASSED] DP_QUERY_STREAM_ENC_STATUS with stream behavior
[20:44:47] [PASSED] DP_QUERY_STREAM_ENC_STATUS with a valid stream behavior
[20:44:47] ===== [PASSED] drm_test_dp_mst_sideband_msg_req_decode =====
[20:44:47] ================ [PASSED] drm_dp_mst_helper ================
[20:44:47] ================== drm_exec (7 subtests) ===================
[20:44:47] [PASSED] sanitycheck
[20:44:47] [PASSED] test_lock
[20:44:47] [PASSED] test_lock_unlock
[20:44:47] [PASSED] test_duplicates
[20:44:47] [PASSED] test_prepare
[20:44:47] [PASSED] test_prepare_array
[20:44:47] [PASSED] test_multiple_loops
[20:44:47] ==================== [PASSED] drm_exec =====================
[20:44:47] =========== drm_format_helper_test (17 subtests) ===========
[20:44:47] ============== drm_test_fb_xrgb8888_to_gray8 ==============
[20:44:47] [PASSED] single_pixel_source_buffer
[20:44:47] [PASSED] single_pixel_clip_rectangle
[20:44:47] [PASSED] well_known_colors
[20:44:47] [PASSED] destination_pitch
[20:44:47] ========== [PASSED] drm_test_fb_xrgb8888_to_gray8 ==========
[20:44:47] ============= drm_test_fb_xrgb8888_to_rgb332 ==============
[20:44:47] [PASSED] single_pixel_source_buffer
[20:44:47] [PASSED] single_pixel_clip_rectangle
[20:44:47] [PASSED] well_known_colors
[20:44:47] [PASSED] destination_pitch
[20:44:47] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb332 ==========
[20:44:47] ============= drm_test_fb_xrgb8888_to_rgb565 ==============
[20:44:47] [PASSED] single_pixel_source_buffer
[20:44:47] [PASSED] single_pixel_clip_rectangle
[20:44:47] [PASSED] well_known_colors
[20:44:47] [PASSED] destination_pitch
[20:44:47] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb565 ==========
[20:44:47] ============ drm_test_fb_xrgb8888_to_xrgb1555 =============
[20:44:47] [PASSED] single_pixel_source_buffer
[20:44:47] [PASSED] single_pixel_clip_rectangle
[20:44:47] [PASSED] well_known_colors
[20:44:47] [PASSED] destination_pitch
[20:44:47] ======== [PASSED] drm_test_fb_xrgb8888_to_xrgb1555 =========
[20:44:47] ============ drm_test_fb_xrgb8888_to_argb1555 =============
[20:44:47] [PASSED] single_pixel_source_buffer
[20:44:47] [PASSED] single_pixel_clip_rectangle
[20:44:47] [PASSED] well_known_colors
[20:44:47] [PASSED] destination_pitch
[20:44:47] ======== [PASSED] drm_test_fb_xrgb8888_to_argb1555 =========
[20:44:47] ============ drm_test_fb_xrgb8888_to_rgba5551 =============
[20:44:47] [PASSED] single_pixel_source_buffer
[20:44:47] [PASSED] single_pixel_clip_rectangle
[20:44:47] [PASSED] well_known_colors
[20:44:47] [PASSED] destination_pitch
[20:44:47] ======== [PASSED] drm_test_fb_xrgb8888_to_rgba5551 =========
[20:44:47] ============= drm_test_fb_xrgb8888_to_rgb888 ==============
[20:44:47] [PASSED] single_pixel_source_buffer
[20:44:47] [PASSED] single_pixel_clip_rectangle
[20:44:47] [PASSED] well_known_colors
[20:44:47] [PASSED] destination_pitch
[20:44:47] ========= [PASSED] drm_test_fb_xrgb8888_to_rgb888 ==========
[20:44:47] ============= drm_test_fb_xrgb8888_to_bgr888 ==============
[20:44:47] [PASSED] single_pixel_source_buffer
[20:44:47] [PASSED] single_pixel_clip_rectangle
[20:44:47] [PASSED] well_known_colors
[20:44:47] [PASSED] destination_pitch
[20:44:47] ========= [PASSED] drm_test_fb_xrgb8888_to_bgr888 ==========
[20:44:47] ============ drm_test_fb_xrgb8888_to_argb8888 =============
[20:44:47] [PASSED] single_pixel_source_buffer
[20:44:47] [PASSED] single_pixel_clip_rectangle
[20:44:47] [PASSED] well_known_colors
[20:44:47] [PASSED] destination_pitch
[20:44:47] ======== [PASSED] drm_test_fb_xrgb8888_to_argb8888 =========
[20:44:47] =========== drm_test_fb_xrgb8888_to_xrgb2101010 ===========
[20:44:47] [PASSED] single_pixel_source_buffer
[20:44:47] [PASSED] single_pixel_clip_rectangle
[20:44:47] [PASSED] well_known_colors
[20:44:47] [PASSED] destination_pitch
[20:44:47] ======= [PASSED] drm_test_fb_xrgb8888_to_xrgb2101010 =======
[20:44:47] =========== drm_test_fb_xrgb8888_to_argb2101010 ===========
[20:44:47] [PASSED] single_pixel_source_buffer
[20:44:47] [PASSED] single_pixel_clip_rectangle
[20:44:47] [PASSED] well_known_colors
[20:44:47] [PASSED] destination_pitch
[20:44:47] ======= [PASSED] drm_test_fb_xrgb8888_to_argb2101010 =======
[20:44:47] ============== drm_test_fb_xrgb8888_to_mono ===============
[20:44:47] [PASSED] single_pixel_source_buffer
[20:44:47] [PASSED] single_pixel_clip_rectangle
[20:44:47] [PASSED] well_known_colors
[20:44:47] [PASSED] destination_pitch
[20:44:47] ========== [PASSED] drm_test_fb_xrgb8888_to_mono ===========
[20:44:47] ==================== drm_test_fb_swab =====================
[20:44:47] [PASSED] single_pixel_source_buffer
[20:44:47] [PASSED] single_pixel_clip_rectangle
[20:44:47] [PASSED] well_known_colors
[20:44:47] [PASSED] destination_pitch
[20:44:47] ================ [PASSED] drm_test_fb_swab =================
[20:44:47] ============ drm_test_fb_xrgb8888_to_xbgr8888 =============
[20:44:47] [PASSED] single_pixel_source_buffer
[20:44:47] [PASSED] single_pixel_clip_rectangle
[20:44:47] [PASSED] well_known_colors
[20:44:47] [PASSED] destination_pitch
[20:44:47] ======== [PASSED] drm_test_fb_xrgb8888_to_xbgr8888 =========
[20:44:47] ============ drm_test_fb_xrgb8888_to_abgr8888 =============
[20:44:47] [PASSED] single_pixel_source_buffer
[20:44:47] [PASSED] single_pixel_clip_rectangle
[20:44:47] [PASSED] well_known_colors
[20:44:47] [PASSED] destination_pitch
[20:44:47] ======== [PASSED] drm_test_fb_xrgb8888_to_abgr8888 =========
[20:44:47] ================= drm_test_fb_clip_offset =================
[20:44:47] [PASSED] pass through
[20:44:47] [PASSED] horizontal offset
[20:44:47] [PASSED] vertical offset
[20:44:47] [PASSED] horizontal and vertical offset
[20:44:47] [PASSED] horizontal offset (custom pitch)
[20:44:47] [PASSED] vertical offset (custom pitch)
[20:44:47] [PASSED] horizontal and vertical offset (custom pitch)
[20:44:47] ============= [PASSED] drm_test_fb_clip_offset =============
[20:44:47] =================== drm_test_fb_memcpy ====================
[20:44:47] [PASSED] single_pixel_source_buffer: XR24 little-endian (0x34325258)
[20:44:47] [PASSED] single_pixel_source_buffer: XRA8 little-endian (0x38415258)
[20:44:47] [PASSED] single_pixel_source_buffer: YU24 little-endian (0x34325559)
[20:44:47] [PASSED] single_pixel_clip_rectangle: XB24 little-endian (0x34324258)
[20:44:47] [PASSED] single_pixel_clip_rectangle: XRA8 little-endian (0x38415258)
[20:44:47] [PASSED] single_pixel_clip_rectangle: YU24 little-endian (0x34325559)
[20:44:47] [PASSED] well_known_colors: XB24 little-endian (0x34324258)
[20:44:47] [PASSED] well_known_colors: XRA8 little-endian (0x38415258)
[20:44:47] [PASSED] well_known_colors: YU24 little-endian (0x34325559)
[20:44:47] [PASSED] destination_pitch: XB24 little-endian (0x34324258)
[20:44:47] [PASSED] destination_pitch: XRA8 little-endian (0x38415258)
[20:44:47] [PASSED] destination_pitch: YU24 little-endian (0x34325559)
[20:44:47] =============== [PASSED] drm_test_fb_memcpy ================
[20:44:47] ============= [PASSED] drm_format_helper_test ==============
[20:44:47] ================= drm_format (18 subtests) =================
[20:44:47] [PASSED] drm_test_format_block_width_invalid
[20:44:47] [PASSED] drm_test_format_block_width_one_plane
[20:44:47] [PASSED] drm_test_format_block_width_two_plane
[20:44:47] [PASSED] drm_test_format_block_width_three_plane
[20:44:47] [PASSED] drm_test_format_block_width_tiled
[20:44:47] [PASSED] drm_test_format_block_height_invalid
[20:44:47] [PASSED] drm_test_format_block_height_one_plane
[20:44:47] [PASSED] drm_test_format_block_height_two_plane
[20:44:47] [PASSED] drm_test_format_block_height_three_plane
[20:44:47] [PASSED] drm_test_format_block_height_tiled
[20:44:47] [PASSED] drm_test_format_min_pitch_invalid
[20:44:47] [PASSED] drm_test_format_min_pitch_one_plane_8bpp
[20:44:47] [PASSED] drm_test_format_min_pitch_one_plane_16bpp
[20:44:47] [PASSED] drm_test_format_min_pitch_one_plane_24bpp
[20:44:47] [PASSED] drm_test_format_min_pitch_one_plane_32bpp
[20:44:47] [PASSED] drm_test_format_min_pitch_two_plane
[20:44:47] [PASSED] drm_test_format_min_pitch_three_plane_8bpp
[20:44:47] [PASSED] drm_test_format_min_pitch_tiled
[20:44:47] =================== [PASSED] drm_format ====================
[20:44:47] ============== drm_framebuffer (10 subtests) ===============
[20:44:47] ========== drm_test_framebuffer_check_src_coords ==========
[20:44:47] [PASSED] Success: source fits into fb
[20:44:47] [PASSED] Fail: overflowing fb with x-axis coordinate
[20:44:47] [PASSED] Fail: overflowing fb with y-axis coordinate
[20:44:47] [PASSED] Fail: overflowing fb with source width
[20:44:47] [PASSED] Fail: overflowing fb with source height
[20:44:47] ====== [PASSED] drm_test_framebuffer_check_src_coords ======
[20:44:47] [PASSED] drm_test_framebuffer_cleanup
[20:44:47] =============== drm_test_framebuffer_create ===============
[20:44:47] [PASSED] ABGR8888 normal sizes
[20:44:47] [PASSED] ABGR8888 max sizes
[20:44:47] [PASSED] ABGR8888 pitch greater than min required
[20:44:47] [PASSED] ABGR8888 pitch less than min required
[20:44:47] [PASSED] ABGR8888 Invalid width
[20:44:47] [PASSED] ABGR8888 Invalid buffer handle
[20:44:47] [PASSED] No pixel format
[20:44:47] [PASSED] ABGR8888 Width 0
[20:44:47] [PASSED] ABGR8888 Height 0
[20:44:47] [PASSED] ABGR8888 Out of bound height * pitch combination
[20:44:47] [PASSED] ABGR8888 Large buffer offset
[20:44:47] [PASSED] ABGR8888 Buffer offset for inexistent plane
[20:44:47] [PASSED] ABGR8888 Invalid flag
[20:44:47] [PASSED] ABGR8888 Set DRM_MODE_FB_MODIFIERS without modifiers
[20:44:47] [PASSED] ABGR8888 Valid buffer modifier
[20:44:47] [PASSED] ABGR8888 Invalid buffer modifier(DRM_FORMAT_MOD_SAMSUNG_64_32_TILE)
[20:44:47] [PASSED] ABGR8888 Extra pitches without DRM_MODE_FB_MODIFIERS
[20:44:47] [PASSED] ABGR8888 Extra pitches with DRM_MODE_FB_MODIFIERS
[20:44:47] [PASSED] NV12 Normal sizes
[20:44:47] [PASSED] NV12 Max sizes
[20:44:47] [PASSED] NV12 Invalid pitch
[20:44:47] [PASSED] NV12 Invalid modifier/missing DRM_MODE_FB_MODIFIERS flag
[20:44:47] [PASSED] NV12 different modifier per-plane
[20:44:47] [PASSED] NV12 with DRM_FORMAT_MOD_SAMSUNG_64_32_TILE
[20:44:47] [PASSED] NV12 Valid modifiers without DRM_MODE_FB_MODIFIERS
[20:44:47] [PASSED] NV12 Modifier for inexistent plane
[20:44:47] [PASSED] NV12 Handle for inexistent plane
[20:44:47] [PASSED] NV12 Handle for inexistent plane without DRM_MODE_FB_MODIFIERS
[20:44:47] [PASSED] YVU420 DRM_MODE_FB_MODIFIERS set without modifier
[20:44:47] [PASSED] YVU420 Normal sizes
[20:44:47] [PASSED] YVU420 Max sizes
[20:44:47] [PASSED] YVU420 Invalid pitch
[20:44:47] [PASSED] YVU420 Different pitches
[20:44:47] [PASSED] YVU420 Different buffer offsets/pitches
[20:44:47] [PASSED] YVU420 Modifier set just for plane 0, without DRM_MODE_FB_MODIFIERS
[20:44:47] [PASSED] YVU420 Modifier set just for planes 0, 1, without DRM_MODE_FB_MODIFIERS
[20:44:47] [PASSED] YVU420 Modifier set just for plane 0, 1, with DRM_MODE_FB_MODIFIERS
[20:44:47] [PASSED] YVU420 Valid modifier
[20:44:47] [PASSED] YVU420 Different modifiers per plane
[20:44:47] [PASSED] YVU420 Modifier for inexistent plane
[20:44:47] [PASSED] YUV420_10BIT Invalid modifier(DRM_FORMAT_MOD_LINEAR)
[20:44:47] [PASSED] X0L2 Normal sizes
[20:44:47] [PASSED] X0L2 Max sizes
[20:44:47] [PASSED] X0L2 Invalid pitch
[20:44:47] [PASSED] X0L2 Pitch greater than minimum required
[20:44:47] [PASSED] X0L2 Handle for inexistent plane
[20:44:47] [PASSED] X0L2 Offset for inexistent plane, without DRM_MODE_FB_MODIFIERS set
[20:44:47] [PASSED] X0L2 Modifier without DRM_MODE_FB_MODIFIERS set
[20:44:47] [PASSED] X0L2 Valid modifier
[20:44:47] [PASSED] X0L2 Modifier for inexistent plane
[20:44:47] =========== [PASSED] drm_test_framebuffer_create ===========
[20:44:47] [PASSED] drm_test_framebuffer_free
[20:44:47] [PASSED] drm_test_framebuffer_init
[20:44:47] [PASSED] drm_test_framebuffer_init_bad_format
[20:44:47] [PASSED] drm_test_framebuffer_init_dev_mismatch
[20:44:47] [PASSED] drm_test_framebuffer_lookup
[20:44:47] [PASSED] drm_test_framebuffer_lookup_inexistent
[20:44:47] [PASSED] drm_test_framebuffer_modifiers_not_supported
[20:44:47] ================= [PASSED] drm_framebuffer =================
[20:44:47] ================ drm_gem_shmem (8 subtests) ================
[20:44:47] [PASSED] drm_gem_shmem_test_obj_create
[20:44:47] [PASSED] drm_gem_shmem_test_obj_create_private
[20:44:47] [PASSED] drm_gem_shmem_test_pin_pages
[20:44:47] [PASSED] drm_gem_shmem_test_vmap
[20:44:47] [PASSED] drm_gem_shmem_test_get_pages_sgt
[20:44:47] [PASSED] drm_gem_shmem_test_get_sg_table
[20:44:47] [PASSED] drm_gem_shmem_test_madvise
[20:44:47] [PASSED] drm_gem_shmem_test_purge
[20:44:47] ================== [PASSED] drm_gem_shmem ==================
[20:44:47] === drm_atomic_helper_connector_hdmi_check (27 subtests) ===
[20:44:47] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode
[20:44:47] [PASSED] drm_test_check_broadcast_rgb_auto_cea_mode_vic_1
[20:44:47] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode
[20:44:47] [PASSED] drm_test_check_broadcast_rgb_full_cea_mode_vic_1
[20:44:47] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode
[20:44:47] [PASSED] drm_test_check_broadcast_rgb_limited_cea_mode_vic_1
[20:44:47] ====== drm_test_check_broadcast_rgb_cea_mode_yuv420 =======
[20:44:47] [PASSED] Automatic
[20:44:47] [PASSED] Full
[20:44:47] [PASSED] Limited 16:235
[20:44:47] == [PASSED] drm_test_check_broadcast_rgb_cea_mode_yuv420 ===
[20:44:47] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_changed
[20:44:47] [PASSED] drm_test_check_broadcast_rgb_crtc_mode_not_changed
[20:44:47] [PASSED] drm_test_check_disable_connector
[20:44:47] [PASSED] drm_test_check_hdmi_funcs_reject_rate
[20:44:47] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_rgb
[20:44:47] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_yuv420
[20:44:47] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv422
[20:44:47] [PASSED] drm_test_check_max_tmds_rate_bpc_fallback_ignore_yuv420
[20:44:47] [PASSED] drm_test_check_driver_unsupported_fallback_yuv420
[20:44:47] [PASSED] drm_test_check_output_bpc_crtc_mode_changed
[20:44:47] [PASSED] drm_test_check_output_bpc_crtc_mode_not_changed
[20:44:47] [PASSED] drm_test_check_output_bpc_dvi
[20:44:47] [PASSED] drm_test_check_output_bpc_format_vic_1
[20:44:47] [PASSED] drm_test_check_output_bpc_format_display_8bpc_only
[20:44:47] [PASSED] drm_test_check_output_bpc_format_display_rgb_only
[20:44:47] [PASSED] drm_test_check_output_bpc_format_driver_8bpc_only
[20:44:47] [PASSED] drm_test_check_output_bpc_format_driver_rgb_only
[20:44:47] [PASSED] drm_test_check_tmds_char_rate_rgb_8bpc
[20:44:47] [PASSED] drm_test_check_tmds_char_rate_rgb_10bpc
[20:44:47] [PASSED] drm_test_check_tmds_char_rate_rgb_12bpc
[20:44:47] ===== [PASSED] drm_atomic_helper_connector_hdmi_check ======
[20:44:47] === drm_atomic_helper_connector_hdmi_reset (6 subtests) ====
[20:44:47] [PASSED] drm_test_check_broadcast_rgb_value
[20:44:47] [PASSED] drm_test_check_bpc_8_value
[20:44:47] [PASSED] drm_test_check_bpc_10_value
[20:44:47] [PASSED] drm_test_check_bpc_12_value
[20:44:47] [PASSED] drm_test_check_format_value
[20:44:47] [PASSED] drm_test_check_tmds_char_value
[20:44:47] ===== [PASSED] drm_atomic_helper_connector_hdmi_reset ======
[20:44:47] = drm_atomic_helper_connector_hdmi_mode_valid (4 subtests) =
[20:44:47] [PASSED] drm_test_check_mode_valid
[20:44:47] [PASSED] drm_test_check_mode_valid_reject
[20:44:47] [PASSED] drm_test_check_mode_valid_reject_rate
[20:44:47] [PASSED] drm_test_check_mode_valid_reject_max_clock
[20:44:47] === [PASSED] drm_atomic_helper_connector_hdmi_mode_valid ===
[20:44:47] ================= drm_managed (2 subtests) =================
[20:44:47] [PASSED] drm_test_managed_release_action
[20:44:47] [PASSED] drm_test_managed_run_action
[20:44:47] =================== [PASSED] drm_managed ===================
[20:44:47] =================== drm_mm (6 subtests) ====================
[20:44:47] [PASSED] drm_test_mm_init
[20:44:47] [PASSED] drm_test_mm_debug
[20:44:47] [PASSED] drm_test_mm_align32
[20:44:47] [PASSED] drm_test_mm_align64
[20:44:47] [PASSED] drm_test_mm_lowest
[20:44:47] [PASSED] drm_test_mm_highest
[20:44:47] ===================== [PASSED] drm_mm ======================
[20:44:47] ============= drm_modes_analog_tv (5 subtests) =============
[20:44:47] [PASSED] drm_test_modes_analog_tv_mono_576i
[20:44:47] [PASSED] drm_test_modes_analog_tv_ntsc_480i
[20:44:47] [PASSED] drm_test_modes_analog_tv_ntsc_480i_inlined
[20:44:47] [PASSED] drm_test_modes_analog_tv_pal_576i
[20:44:47] [PASSED] drm_test_modes_analog_tv_pal_576i_inlined
[20:44:47] =============== [PASSED] drm_modes_analog_tv ===============
[20:44:47] ============== drm_plane_helper (2 subtests) ===============
[20:44:47] =============== drm_test_check_plane_state ================
[20:44:47] [PASSED] clipping_simple
[20:44:47] [PASSED] clipping_rotate_reflect
[20:44:47] [PASSED] positioning_simple
[20:44:47] [PASSED] upscaling
[20:44:47] [PASSED] downscaling
[20:44:47] [PASSED] rounding1
[20:44:47] [PASSED] rounding2
[20:44:47] [PASSED] rounding3
[20:44:47] [PASSED] rounding4
[20:44:47] =========== [PASSED] drm_test_check_plane_state ============
[20:44:47] =========== drm_test_check_invalid_plane_state ============
[20:44:47] [PASSED] positioning_invalid
[20:44:47] [PASSED] upscaling_invalid
[20:44:47] [PASSED] downscaling_invalid
[20:44:47] ======= [PASSED] drm_test_check_invalid_plane_state ========
[20:44:47] ================ [PASSED] drm_plane_helper =================
[20:44:47] ====== drm_connector_helper_tv_get_modes (1 subtest) =======
[20:44:47] ====== drm_test_connector_helper_tv_get_modes_check =======
[20:44:47] [PASSED] None
[20:44:47] [PASSED] PAL
[20:44:47] [PASSED] NTSC
[20:44:47] [PASSED] Both, NTSC Default
[20:44:47] [PASSED] Both, PAL Default
[20:44:47] [PASSED] Both, NTSC Default, with PAL on command-line
[20:44:47] [PASSED] Both, PAL Default, with NTSC on command-line
[20:44:47] == [PASSED] drm_test_connector_helper_tv_get_modes_check ===
[20:44:47] ======== [PASSED] drm_connector_helper_tv_get_modes ========
[20:44:47] ================== drm_rect (9 subtests) ===================
[20:44:47] [PASSED] drm_test_rect_clip_scaled_div_by_zero
[20:44:47] [PASSED] drm_test_rect_clip_scaled_not_clipped
[20:44:47] [PASSED] drm_test_rect_clip_scaled_clipped
[20:44:47] [PASSED] drm_test_rect_clip_scaled_signed_vs_unsigned
[20:44:47] ================= drm_test_rect_intersect =================
[20:44:47] [PASSED] top-left x bottom-right: 2x2+1+1 x 2x2+0+0
[20:44:47] [PASSED] top-right x bottom-left: 2x2+0+0 x 2x2+1-1
[20:44:47] [PASSED] bottom-left x top-right: 2x2+1-1 x 2x2+0+0
[20:44:47] [PASSED] bottom-right x top-left: 2x2+0+0 x 2x2+1+1
[20:44:47] [PASSED] right x left: 2x1+0+0 x 3x1+1+0
[20:44:47] [PASSED] left x right: 3x1+1+0 x 2x1+0+0
[20:44:47] [PASSED] up x bottom: 1x2+0+0 x 1x3+0-1
[20:44:47] [PASSED] bottom x up: 1x3+0-1 x 1x2+0+0
[20:44:47] [PASSED] touching corner: 1x1+0+0 x 2x2+1+1
[20:44:47] [PASSED] touching side: 1x1+0+0 x 1x1+1+0
[20:44:47] [PASSED] equal rects: 2x2+0+0 x 2x2+0+0
[20:44:47] [PASSED] inside another: 2x2+0+0 x 1x1+1+1
[20:44:47] [PASSED] far away: 1x1+0+0 x 1x1+3+6
[20:44:47] [PASSED] points intersecting: 0x0+5+10 x 0x0+5+10
[20:44:47] [PASSED] points not intersecting: 0x0+0+0 x 0x0+5+10
[20:44:47] ============= [PASSED] drm_test_rect_intersect =============
[20:44:47] ================ drm_test_rect_calc_hscale ================
[20:44:47] [PASSED] normal use
[20:44:47] [PASSED] out of max range
[20:44:47] [PASSED] out of min range
[20:44:47] [PASSED] zero dst
[20:44:47] [PASSED] negative src
[20:44:47] [PASSED] negative dst
[20:44:47] ============ [PASSED] drm_test_rect_calc_hscale ============
[20:44:47] ================ drm_test_rect_calc_vscale ================
[20:44:47] [PASSED] normal use
[20:44:47] [PASSED] out of max range
[20:44:47] [PASSED] out of min range
[20:44:47] [PASSED] zero dst
[20:44:47] [PASSED] negative src
[20:44:47] [PASSED] negative dst
[20:44:47] ============ [PASSED] drm_test_rect_calc_vscale ============
[20:44:47] ================== drm_test_rect_rotate ===================
[20:44:47] [PASSED] reflect-x
[20:44:47] [PASSED] reflect-y
[20:44:47] [PASSED] rotate-0
[20:44:47] [PASSED] rotate-90
[20:44:47] [PASSED] rotate-180
[20:44:47] [PASSED] rotate-270
[20:44:47] ============== [PASSED] drm_test_rect_rotate ===============
[20:44:47] ================ drm_test_rect_rotate_inv =================
[20:44:47] [PASSED] reflect-x
[20:44:47] [PASSED] reflect-y
[20:44:47] [PASSED] rotate-0
[20:44:47] [PASSED] rotate-90
[20:44:47] [PASSED] rotate-180
[20:44:47] [PASSED] rotate-270
[20:44:47] ============ [PASSED] drm_test_rect_rotate_inv =============
[20:44:47] ==================== [PASSED] drm_rect =====================
[20:44:47] ============ drm_sysfb_modeset_test (1 subtest) ============
[20:44:47] ============ drm_test_sysfb_build_fourcc_list =============
[20:44:47] [PASSED] no native formats
[20:44:47] [PASSED] XRGB8888 as native format
[20:44:47] [PASSED] remove duplicates
[20:44:47] [PASSED] convert alpha formats
[20:44:47] [PASSED] random formats
[20:44:47] ======== [PASSED] drm_test_sysfb_build_fourcc_list =========
[20:44:47] ============= [PASSED] drm_sysfb_modeset_test ==============
[20:44:47] ============================================================
[20:44:47] Testing complete. Ran 622 tests: passed: 622
[20:44:47] Elapsed time: 26.590s total, 1.635s configuring, 24.535s building, 0.389s running
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/ttm/tests/.kunitconfig
[20:44:47] Configuring KUnit Kernel ...
Regenerating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[20:44:49] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=48
[20:44:58] Starting KUnit Kernel (1/1)...
[20:44:58] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[20:44:58] ================= ttm_device (5 subtests) ==================
[20:44:58] [PASSED] ttm_device_init_basic
[20:44:58] [PASSED] ttm_device_init_multiple
[20:44:58] [PASSED] ttm_device_fini_basic
[20:44:58] [PASSED] ttm_device_init_no_vma_man
[20:44:58] ================== ttm_device_init_pools ==================
[20:44:58] [PASSED] No DMA allocations, no DMA32 required
[20:44:58] [PASSED] DMA allocations, DMA32 required
[20:44:58] [PASSED] No DMA allocations, DMA32 required
[20:44:58] [PASSED] DMA allocations, no DMA32 required
[20:44:58] ============== [PASSED] ttm_device_init_pools ==============
[20:44:58] =================== [PASSED] ttm_device ====================
[20:44:58] ================== ttm_pool (8 subtests) ===================
[20:44:58] ================== ttm_pool_alloc_basic ===================
[20:44:58] [PASSED] One page
[20:44:58] [PASSED] More than one page
[20:44:58] [PASSED] Above the allocation limit
[20:44:58] [PASSED] One page, with coherent DMA mappings enabled
[20:44:58] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[20:44:58] ============== [PASSED] ttm_pool_alloc_basic ===============
[20:44:58] ============== ttm_pool_alloc_basic_dma_addr ==============
[20:44:58] [PASSED] One page
[20:44:58] [PASSED] More than one page
[20:44:58] [PASSED] Above the allocation limit
[20:44:58] [PASSED] One page, with coherent DMA mappings enabled
[20:44:58] [PASSED] Above the allocation limit, with coherent DMA mappings enabled
[20:44:58] ========== [PASSED] ttm_pool_alloc_basic_dma_addr ==========
[20:44:58] [PASSED] ttm_pool_alloc_order_caching_match
[20:44:58] [PASSED] ttm_pool_alloc_caching_mismatch
[20:44:58] [PASSED] ttm_pool_alloc_order_mismatch
[20:44:58] [PASSED] ttm_pool_free_dma_alloc
[20:44:58] [PASSED] ttm_pool_free_no_dma_alloc
[20:44:58] [PASSED] ttm_pool_fini_basic
[20:44:58] ==================== [PASSED] ttm_pool =====================
[20:44:58] ================ ttm_resource (8 subtests) =================
[20:44:58] ================= ttm_resource_init_basic =================
[20:44:58] [PASSED] Init resource in TTM_PL_SYSTEM
[20:44:58] [PASSED] Init resource in TTM_PL_VRAM
[20:44:58] [PASSED] Init resource in a private placement
[20:44:58] [PASSED] Init resource in TTM_PL_SYSTEM, set placement flags
[20:44:58] ============= [PASSED] ttm_resource_init_basic =============
[20:44:58] [PASSED] ttm_resource_init_pinned
[20:44:58] [PASSED] ttm_resource_fini_basic
[20:44:58] [PASSED] ttm_resource_manager_init_basic
[20:44:58] [PASSED] ttm_resource_manager_usage_basic
[20:44:58] [PASSED] ttm_resource_manager_set_used_basic
[20:44:58] [PASSED] ttm_sys_man_alloc_basic
[20:44:58] [PASSED] ttm_sys_man_free_basic
[20:44:58] ================== [PASSED] ttm_resource ===================
[20:44:58] =================== ttm_tt (15 subtests) ===================
[20:44:58] ==================== ttm_tt_init_basic ====================
[20:44:58] [PASSED] Page-aligned size
[20:44:58] [PASSED] Extra pages requested
[20:44:58] ================ [PASSED] ttm_tt_init_basic ================
[20:44:58] [PASSED] ttm_tt_init_misaligned
[20:44:58] [PASSED] ttm_tt_fini_basic
[20:44:58] [PASSED] ttm_tt_fini_sg
[20:44:58] [PASSED] ttm_tt_fini_shmem
[20:44:58] [PASSED] ttm_tt_create_basic
[20:44:58] [PASSED] ttm_tt_create_invalid_bo_type
[20:44:58] [PASSED] ttm_tt_create_ttm_exists
[20:44:58] [PASSED] ttm_tt_create_failed
[20:44:58] [PASSED] ttm_tt_destroy_basic
[20:44:58] [PASSED] ttm_tt_populate_null_ttm
[20:44:58] [PASSED] ttm_tt_populate_populated_ttm
[20:44:58] [PASSED] ttm_tt_unpopulate_basic
[20:44:58] [PASSED] ttm_tt_unpopulate_empty_ttm
[20:44:58] [PASSED] ttm_tt_swapin_basic
[20:44:58] ===================== [PASSED] ttm_tt ======================
[20:44:58] =================== ttm_bo (14 subtests) ===================
[20:44:58] =========== ttm_bo_reserve_optimistic_no_ticket ===========
[20:44:58] [PASSED] Cannot be interrupted and sleeps
[20:44:58] [PASSED] Cannot be interrupted, locks straight away
[20:44:58] [PASSED] Can be interrupted, sleeps
[20:44:58] ======= [PASSED] ttm_bo_reserve_optimistic_no_ticket =======
[20:44:58] [PASSED] ttm_bo_reserve_locked_no_sleep
[20:44:58] [PASSED] ttm_bo_reserve_no_wait_ticket
[20:44:58] [PASSED] ttm_bo_reserve_double_resv
[20:44:58] [PASSED] ttm_bo_reserve_interrupted
[20:44:58] [PASSED] ttm_bo_reserve_deadlock
[20:44:58] [PASSED] ttm_bo_unreserve_basic
[20:44:58] [PASSED] ttm_bo_unreserve_pinned
[20:44:58] [PASSED] ttm_bo_unreserve_bulk
[20:44:58] [PASSED] ttm_bo_fini_basic
[20:44:58] [PASSED] ttm_bo_fini_shared_resv
[20:44:58] [PASSED] ttm_bo_pin_basic
[20:44:58] [PASSED] ttm_bo_pin_unpin_resource
[20:44:58] [PASSED] ttm_bo_multiple_pin_one_unpin
[20:44:58] ===================== [PASSED] ttm_bo ======================
[20:44:58] ============== ttm_bo_validate (21 subtests) ===============
[20:44:58] ============== ttm_bo_init_reserved_sys_man ===============
[20:44:58] [PASSED] Buffer object for userspace
[20:44:58] [PASSED] Kernel buffer object
[20:44:58] [PASSED] Shared buffer object
[20:44:58] ========== [PASSED] ttm_bo_init_reserved_sys_man ===========
[20:44:58] ============== ttm_bo_init_reserved_mock_man ==============
[20:44:58] [PASSED] Buffer object for userspace
[20:44:58] [PASSED] Kernel buffer object
[20:44:58] [PASSED] Shared buffer object
[20:44:58] ========== [PASSED] ttm_bo_init_reserved_mock_man ==========
[20:44:58] [PASSED] ttm_bo_init_reserved_resv
[20:44:58] ================== ttm_bo_validate_basic ==================
[20:44:58] [PASSED] Buffer object for userspace
[20:44:58] [PASSED] Kernel buffer object
[20:44:58] [PASSED] Shared buffer object
[20:44:58] ============== [PASSED] ttm_bo_validate_basic ==============
[20:44:58] [PASSED] ttm_bo_validate_invalid_placement
[20:44:58] ============= ttm_bo_validate_same_placement ==============
[20:44:58] [PASSED] System manager
[20:44:58] [PASSED] VRAM manager
[20:44:58] ========= [PASSED] ttm_bo_validate_same_placement ==========
[20:44:58] [PASSED] ttm_bo_validate_failed_alloc
[20:44:58] [PASSED] ttm_bo_validate_pinned
[20:44:58] [PASSED] ttm_bo_validate_busy_placement
[20:44:58] ================ ttm_bo_validate_multihop =================
[20:44:58] [PASSED] Buffer object for userspace
[20:44:58] [PASSED] Kernel buffer object
[20:44:58] [PASSED] Shared buffer object
[20:44:58] ============ [PASSED] ttm_bo_validate_multihop =============
[20:44:58] ========== ttm_bo_validate_no_placement_signaled ==========
[20:44:58] [PASSED] Buffer object in system domain, no page vector
[20:44:58] [PASSED] Buffer object in system domain with an existing page vector
[20:44:58] ====== [PASSED] ttm_bo_validate_no_placement_signaled ======
[20:44:58] ======== ttm_bo_validate_no_placement_not_signaled ========
[20:44:58] [PASSED] Buffer object for userspace
[20:44:58] [PASSED] Kernel buffer object
[20:44:58] [PASSED] Shared buffer object
[20:44:58] ==== [PASSED] ttm_bo_validate_no_placement_not_signaled ====
[20:44:58] [PASSED] ttm_bo_validate_move_fence_signaled
[20:44:58] ========= ttm_bo_validate_move_fence_not_signaled =========
[20:44:58] [PASSED] Waits for GPU
[20:44:58] [PASSED] Tries to lock straight away
[20:44:58] ===== [PASSED] ttm_bo_validate_move_fence_not_signaled =====
[20:44:58] [PASSED] ttm_bo_validate_happy_evict
[20:44:58] [PASSED] ttm_bo_validate_all_pinned_evict
[20:44:58] [PASSED] ttm_bo_validate_allowed_only_evict
[20:44:58] [PASSED] ttm_bo_validate_deleted_evict
[20:44:58] [PASSED] ttm_bo_validate_busy_domain_evict
[20:44:58] [PASSED] ttm_bo_validate_evict_gutting
[20:44:58] [PASSED] ttm_bo_validate_recrusive_evict
[20:44:58] ================= [PASSED] ttm_bo_validate =================
[20:44:58] ============================================================
[20:44:58] Testing complete. Ran 101 tests: passed: 101
[20:44:58] Elapsed time: 11.184s total, 1.696s configuring, 9.272s building, 0.185s running
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel
^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: [PATCH 04/33] drm/xe/forcewake: Create dedicated type for forcewake references
2025-11-07 19:27 ` Michal Wajdeczko
@ 2025-11-07 21:17 ` Matt Roper
0 siblings, 0 replies; 44+ messages in thread
From: Matt Roper @ 2025-11-07 21:17 UTC (permalink / raw)
To: Michal Wajdeczko; +Cc: intel-xe
On Fri, Nov 07, 2025 at 08:27:27PM +0100, Michal Wajdeczko wrote:
>
>
> On 11/7/2025 7:13 PM, Matt Roper wrote:
> > xe_force_wake_get() currently returns an integer mask of power domains
> > that were successfully awoken; both this mask and a pointer to the force
> > wake collection must be passed to xe_force_wake_put() to release the
> > wake reference.
> >
> > Create a dedicated structure type to hold both the mask and the
> > collection pointer. While this change does little on its own, it will
> > make it easier for us to add scope-based cleanup of forcewake in the
> > future.
> >
> > FIXME:
> > For ease of review, this patch contains only the manual changes to
> > add the structure and change the get/put function definitions; it
> > does not build on its own since the rest of the driver is still
> > trying to call the get/put functions with the old signature. The
> > next patch contains the coccinelle-generated changes necessary
> > elsewhere in the driver to adapt to the new interface. The two
> > patches will be squashed together when applied, remain separate for
> > now to help reviewers.
> >
> > Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
> > ---
>
> ...
>
> > diff --git a/drivers/gpu/drm/xe/xe_force_wake.h b/drivers/gpu/drm/xe/xe_force_wake.h
> > index 0e3e84bfa51c..86e9bca7cac9 100644
> > --- a/drivers/gpu/drm/xe/xe_force_wake.h
> > +++ b/drivers/gpu/drm/xe/xe_force_wake.h
> > @@ -15,9 +15,9 @@ void xe_force_wake_init_gt(struct xe_gt *gt,
> > struct xe_force_wake *fw);
> > void xe_force_wake_init_engines(struct xe_gt *gt,
> > struct xe_force_wake *fw);
> > -unsigned int __must_check xe_force_wake_get(struct xe_force_wake *fw,
> > - enum xe_force_wake_domains domains);
> > -void xe_force_wake_put(struct xe_force_wake *fw, unsigned int fw_ref);
> > +struct xe_force_wake_ref __must_check xe_force_wake_get(struct xe_force_wake *fw,
> > + enum xe_force_wake_domains domains);
> > +void xe_force_wake_put(struct xe_force_wake_ref fw_ref);
>
> but is it really necessary to change signature of all existing xe_force_wake functions?
>
> in my previous attempt [1] this new helper struct was just initialized inside the CLASS
> so we can start use new approach and still use existing xe_fw API without any massive changes
>
> +DEFINE_CLASS(xe_fw, struct xe_force_wake_guard,
> + xe_force_wake_put(_T.fw, _T.ref),
> + ({ (struct xe_force_wake_guard){ fw, xe_force_wake_get(fw, domains) }; }),
> + struct xe_force_wake *fw, enum xe_force_wake_domains domains);
Yeah, that's a good point. I had considered creating a different
constructor that called xe_force_wake_get internally (which is
essentially what you've inlined right into the class definition here),
but I ultimately went with a preparatory refactor to keep the
introduction of the class itself as simple and easy to understand as
possible. But you're right that it's probably unnecessarily invasive
(especially with all the coccinelle transformation), so maybe it would
be better to go back to the alternate constructor approach.
Since this kind of thing is still new to a lot of our driver developers,
I think I'd still opt to explicitly define the extra constructor as a
separate function rather than inlining it like you did here though.
I'll probably do that in v2.
Matt
>
> [1] https://patchwork.freedesktop.org/patch/625116/?series=141516&rev=1
>
>
> >
> > static inline int
> > xe_force_wake_ref(struct xe_force_wake *fw,
> > @@ -56,9 +56,10 @@ xe_force_wake_assert_held(struct xe_force_wake *fw,
> > * Return: true if domain is refcounted.
> > */
> > static inline bool
> > -xe_force_wake_ref_has_domain(unsigned int fw_ref, enum xe_force_wake_domains domain)
> > +xe_force_wake_ref_has_domain(struct xe_force_wake_ref fw_ref,
> > + enum xe_force_wake_domains domain)
> > {
> > - return fw_ref & domain;
> > + return fw_ref.domains & domain;
> > }
> >
> > #endif
> > diff --git a/drivers/gpu/drm/xe/xe_force_wake_types.h b/drivers/gpu/drm/xe/xe_force_wake_types.h
> > index 9cfa28faf7bc..26df4adba4c5 100644
> > --- a/drivers/gpu/drm/xe/xe_force_wake_types.h
> > +++ b/drivers/gpu/drm/xe/xe_force_wake_types.h
> > @@ -107,4 +107,19 @@ struct xe_force_wake {
> > struct xe_force_wake_domain domains[XE_FW_DOMAIN_ID_COUNT];
> > };
> >
> > +/**
> > + * struct xe_force_wake_ref - Xe force wake reference
> > + *
> > + * Represents a wakeref for a subset of the power domains belonging to an
> > + * xe_force_wake collection. Returned by xe_force_wake_get() and passed
> > + * to xe_force_wake_put().
> > + */
> > +struct xe_force_wake_ref {
> > + /** @fw: back pointer to force wake collection */
> > + struct xe_force_wake *fw;
> > +
> > + /** @domains: mask of individual domains held by this reference */
> > + unsigned int domains;
> > +};
> > +
--
Matt Roper
Graphics Software Engineer
Linux GPU Platform Enablement
Intel Corporation
^ permalink raw reply [flat|nested] 44+ messages in thread
* ✓ Xe.CI.BAT: success for Scope-based forcewake and runtime PM
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
` (35 preceding siblings ...)
2025-11-07 20:45 ` ✓ CI.KUnit: success " Patchwork
@ 2025-11-07 21:21 ` Patchwork
2025-11-09 3:59 ` ✗ Xe.CI.Full: failure " Patchwork
37 siblings, 0 replies; 44+ messages in thread
From: Patchwork @ 2025-11-07 21:21 UTC (permalink / raw)
To: Matt Roper; +Cc: intel-xe
[-- Attachment #1: Type: text/plain, Size: 1545 bytes --]
== Series Details ==
Series: Scope-based forcewake and runtime PM
URL : https://patchwork.freedesktop.org/series/157253/
State : success
== Summary ==
CI Bug Log - changes from xe-4073-4dff427fe8bbfd0bdbf7935d23a2aba0c350ab2d_BAT -> xe-pw-157253v1_BAT
====================================================
Summary
-------
**SUCCESS**
No regressions found.
Participating hosts (13 -> 13)
------------------------------
No changes in participating hosts
Known issues
------------
Here are the changes found in xe-pw-157253v1_BAT that come from known issues:
### IGT changes ###
#### Possible fixes ####
* igt@xe_waitfence@abstime:
- bat-dg2-oem2: [TIMEOUT][1] ([Intel XE#6506]) -> [PASS][2]
[1]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-4073-4dff427fe8bbfd0bdbf7935d23a2aba0c350ab2d/bat-dg2-oem2/igt@xe_waitfence@abstime.html
[2]: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v1/bat-dg2-oem2/igt@xe_waitfence@abstime.html
[Intel XE#6506]: https://gitlab.freedesktop.org/drm/xe/kernel/issues/6506
Build changes
-------------
* IGT: IGT_8613 -> IGT_8614
* Linux: xe-4073-4dff427fe8bbfd0bdbf7935d23a2aba0c350ab2d -> xe-pw-157253v1
IGT_8613: b542242f5b116e3b554b4068ef5dfa4451075b2b @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
IGT_8614: 8614
xe-4073-4dff427fe8bbfd0bdbf7935d23a2aba0c350ab2d: 4dff427fe8bbfd0bdbf7935d23a2aba0c350ab2d
xe-pw-157253v1: 157253v1
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v1/index.html
[-- Attachment #2: Type: text/html, Size: 2124 bytes --]
^ permalink raw reply [flat|nested] 44+ messages in thread
* ✗ Xe.CI.Full: failure for Scope-based forcewake and runtime PM
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
` (36 preceding siblings ...)
2025-11-07 21:21 ` ✓ Xe.CI.BAT: " Patchwork
@ 2025-11-09 3:59 ` Patchwork
37 siblings, 0 replies; 44+ messages in thread
From: Patchwork @ 2025-11-09 3:59 UTC (permalink / raw)
To: Matt Roper; +Cc: intel-xe
[-- Attachment #1: Type: text/plain, Size: 367 bytes --]
== Series Details ==
Series: Scope-based forcewake and runtime PM
URL : https://patchwork.freedesktop.org/series/157253/
State : failure
== Summary ==
ERROR: The runconfig 'xe-4073-4dff427fe8bbfd0bdbf7935d23a2aba0c350ab2d_FULL' does not exist in the database
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-157253v1/index.html
[-- Attachment #2: Type: text/html, Size: 932 bytes --]
^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: [PATCH 08/33] drm/xe/pm: Add scope-based cleanup helper for runtime PM
2025-11-07 18:13 ` [PATCH 08/33] drm/xe/pm: Add scope-based cleanup helper for runtime PM Matt Roper
@ 2025-11-10 21:59 ` Matt Roper
0 siblings, 0 replies; 44+ messages in thread
From: Matt Roper @ 2025-11-10 21:59 UTC (permalink / raw)
To: intel-xe
On Fri, Nov 07, 2025 at 10:13:24AM -0800, Matt Roper wrote:
> Add scope-based helpers for runtime PM that may be used to simplify
> cleanup logic and potentially avoid goto-based cleanup.
>
> For example, using
>
> guard(xe_pm_runtime)(xe);
>
> will get runtime PM and cause a corresponding put to occur automatically
> when the current scope is exited. 'xe_pm_runtime_noresume' can be used
> as a guard replacement for the corresponding 'noresume' variant.
> There's also an xe_pm_runtime_ioctl conditional guard that can be used
> as a replacement for xe_pm_runtime_get_ioctl():
>
> ACQUIRE(xe_pm_runtime_ioctl, pm)(xe);
> if ((ret = ACQUIRE_ERR(xe_pm_runtime_ioctl, &pm)) < 0)
> /* failed */
>
> In a few rare cases (such as gt_reset_worker()) we need to ensure that
> runtime PM is dropped when the function is exited by any means
> (including error paths), but the function does not need to acquire
> runtime PM because that has already been done earlier by a different
> function. For these special cases, an 'xe_pm_runtime_release_only'
> guard can be used to handle the release without doing an acquisition.
>
> These guards will be used in future patches to eliminate some of our
> goto-based cleanup.
>
> Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
> ---
> drivers/gpu/drm/xe/xe_pm.h | 17 +++++++++++++++++
> 1 file changed, 17 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_pm.h b/drivers/gpu/drm/xe/xe_pm.h
> index f7f89a18b6fc..c2cde906aeaf 100644
> --- a/drivers/gpu/drm/xe/xe_pm.h
> +++ b/drivers/gpu/drm/xe/xe_pm.h
> @@ -6,6 +6,7 @@
> #ifndef _XE_PM_H_
> #define _XE_PM_H_
>
> +#include <linux/cleanup.h>
> #include <linux/pm_runtime.h>
>
> #define DEFAULT_VRAM_THRESHOLD 300 /* in MB */
> @@ -37,4 +38,20 @@ int xe_pm_block_on_suspend(struct xe_device *xe);
> void xe_pm_might_block_on_suspend(void);
> int xe_pm_module_init(void);
>
> +static inline void __xe_pm_runtime_noop(struct xe_device *xe) {}
> +
> +DEFINE_GUARD(xe_pm_runtime, struct xe_device *,
> + xe_pm_runtime_get(_T), xe_pm_runtime_put(_T))
> +DEFINE_GUARD(xe_pm_runtime_noresume, struct xe_device *,
> + xe_pm_runtime_get_noresume(_T), xe_pm_runtime_put(_T))
> +DEFINE_GUARD_COND(xe_pm_runtime, _ioctl, xe_pm_runtime_get_ioctl(_T))
The conditional guard needs to specify a ", _RET >= 0" success condition
so that the guard mechanisms will properly recognize which return values
are "success" and need destructor cleanup vs errors that do not. I'll
include that fix in v2, which should solve a bunch of the failures
reported by CI.
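For the archives, the corrected definition would presumably look like the line below (a sketch based on the description in this mail, not the actual v2 patch):

```c
/*
 * Sketch: the added fourth argument is the success condition, so the
 * guard's destructor (xe_pm_runtime_put()) only runs when
 * xe_pm_runtime_get_ioctl() actually took a reference.
 */
DEFINE_GUARD_COND(xe_pm_runtime, _ioctl, xe_pm_runtime_get_ioctl(_T), _RET >= 0)
```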
Matt
> +
> +/*
> + * Used when a function needs to release runtime PM in all possible cases
> + * and error paths, but the wakeref was already acquired by a different
> + * function (i.e., get() has already happened so only a put() is needed).
> + */
> +DEFINE_GUARD(xe_pm_runtime_release_only, struct xe_device *,
> + __xe_pm_runtime_noop(_T), xe_pm_runtime_put(_T));
> +
> #endif
> --
> 2.51.1
>
--
Matt Roper
Graphics Software Engineer
Linux GPU Platform Enablement
Intel Corporation
^ permalink raw reply [flat|nested] 44+ messages in thread
* Re: [PATCH 01/33] drm/xe/forcewake: Improve kerneldoc
2025-11-07 18:13 ` [PATCH 01/33] drm/xe/forcewake: Improve kerneldoc Matt Roper
@ 2025-11-10 23:33 ` Summers, Stuart
0 siblings, 0 replies; 44+ messages in thread
From: Summers, Stuart @ 2025-11-10 23:33 UTC (permalink / raw)
To: intel-xe@lists.freedesktop.org, Roper, Matthew D
On Fri, 2025-11-07 at 10:13 -0800, Matt Roper wrote:
> Improve the kerneldoc for forcewake a bit to give more detail about
> what
> the structures represent.
>
> Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Nice, thanks for the detail!
Reviewed-by: Stuart Summers <stuart.summers@intel.com>
> ---
> drivers/gpu/drm/xe/xe_force_wake_types.h | 26 ++++++++++++++++++++++--
> 1 file changed, 24 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_force_wake_types.h b/drivers/gpu/drm/xe/xe_force_wake_types.h
> index 12d6e2367455..9cfa28faf7bc 100644
> --- a/drivers/gpu/drm/xe/xe_force_wake_types.h
> +++ b/drivers/gpu/drm/xe/xe_force_wake_types.h
> @@ -52,7 +52,22 @@ enum xe_force_wake_domains {
> };
>
> /**
> - * struct xe_force_wake_domain - Xe force wake domains
> + * struct xe_force_wake_domain - Xe force wake power domain
> + *
> + * Represents an individual device-internal power domain. The driver must
> + * ensure the power domain is awake before accessing registers or other
> + * hardware functionality that is part of the power domain. Since different
> + * driver threads may access hardware units simultaneously, a reference count
> + * is used to ensure that the domain remains awake as long as any software
> + * is using the part of the hardware covered by the power domain.
> + *
> + * Hardware provides a register interface to allow the driver to request
> + * wake/sleep of power domains, although in most cases the actual action of
> + * powering the hardware up/down is handled by firmware (and may be subject to
> + * requirements and constraints outside of the driver's visibility) so the
> + * driver needs to wait for an acknowledgment that a wake request has been
> + * acted upon before accessing the parts of the hardware that reside within the
> + * power domain.
> */
> struct xe_force_wake_domain {
> /** @id: domain force wake id */
> @@ -70,7 +85,14 @@ struct xe_force_wake_domain {
> };
>
> /**
> - * struct xe_force_wake - Xe force wake
> + * struct xe_force_wake - Xe force wake collection
> + *
> + * Represents a collection of related power domains (struct
> + * xe_force_wake_domain) associated with a subunit of the device.
> + *
> + * Currently only used for GT power domains (where the term "forcewake" is used
> + * in the hardware documentation), although the interface could be extended to
> + * power wells in other parts of the hardware in the future.
> */
> struct xe_force_wake {
> /** @gt: back pointers to GT */
^ permalink raw reply [flat|nested] 44+ messages in thread
end of thread, other threads:[~2025-11-10 23:33 UTC | newest]
Thread overview: 44+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-11-07 18:13 [PATCH 00/33] Scope-based forcewake and runtime PM Matt Roper
2025-11-07 18:13 ` [PATCH 01/33] drm/xe/forcewake: Improve kerneldoc Matt Roper
2025-11-10 23:33 ` Summers, Stuart
2025-11-07 18:13 ` [PATCH 02/33] drm/xe/eustall: Store forcewake reference in stream structure Matt Roper
2025-11-07 19:52 ` Harish Chegondi
2025-11-07 18:13 ` [PATCH 03/33] drm/xe/oa: " Matt Roper
2025-11-07 18:13 ` [PATCH 04/33] drm/xe/forcewake: Create dedicated type for forcewake references Matt Roper
2025-11-07 19:27 ` Michal Wajdeczko
2025-11-07 21:17 ` Matt Roper
2025-11-07 18:13 ` [PATCH 05/33] squash! " Matt Roper
2025-11-07 18:13 ` [PATCH 06/33] squash! " Matt Roper
2025-11-07 18:13 ` [PATCH 07/33] drm/xe/forcewake: Add scope-based cleanup for forcewake Matt Roper
2025-11-07 18:13 ` [PATCH 08/33] drm/xe/pm: Add scope-based cleanup helper for runtime PM Matt Roper
2025-11-10 21:59 ` Matt Roper
2025-11-07 18:13 ` [PATCH 09/33] drm/xe/gt: Use scope-based cleanup Matt Roper
2025-11-07 18:13 ` [PATCH 10/33] drm/xe/gt_idle: " Matt Roper
2025-11-07 18:13 ` [PATCH 11/33] drm/xe/guc: " Matt Roper
2025-11-07 18:13 ` [PATCH 12/33] drm/xe/guc_pc: " Matt Roper
2025-11-07 18:13 ` [PATCH 13/33] drm/xe/mocs: " Matt Roper
2025-11-07 18:13 ` [PATCH 14/33] drm/xe/pat: Use scope-based forcewake Matt Roper
2025-11-07 18:13 ` [PATCH 15/33] drm/xe/pxp: Use scope-based cleanup Matt Roper
2025-11-07 18:13 ` [PATCH 16/33] drm/xe/gsc: " Matt Roper
2025-11-07 18:13 ` [PATCH 17/33] drm/xe/device: " Matt Roper
2025-11-07 18:13 ` [PATCH 18/33] drm/xe/devcoredump: " Matt Roper
2025-11-07 18:13 ` [PATCH 19/33] drm/xe/display: Use scoped-cleanup Matt Roper
2025-11-07 18:13 ` [PATCH 20/33] drm/xe: Create scoped cleanup class for force_wake_get_any_engine() Matt Roper
2025-11-07 18:13 ` [PATCH 21/33] drm/xe/drm_client: Use scope-based cleanup Matt Roper
2025-11-07 18:13 ` [PATCH 22/33] drm/xe/gt_debugfs: " Matt Roper
2025-11-07 18:13 ` [PATCH 23/33] drm/xe/huc: Use scope-based forcewake Matt Roper
2025-11-07 18:13 ` [PATCH 24/33] drm/xe/query: " Matt Roper
2025-11-07 18:13 ` [PATCH 25/33] drm/xe/reg_sr: " Matt Roper
2025-11-07 18:13 ` [PATCH 26/33] drm/xe/vram: " Matt Roper
2025-11-07 18:13 ` [PATCH 27/33] drm/xe/bo: Use scope-based runtime PM Matt Roper
2025-11-07 18:13 ` [PATCH 28/33] drm/xe/ggtt: Use scope-based runtime pm Matt Roper
2025-11-07 18:13 ` [PATCH 29/33] drm/xe/hwmon: Use scope-based runtime PM Matt Roper
2025-11-07 18:13 ` [PATCH 30/33] drm/xe/sriov: " Matt Roper
2025-11-07 18:13 ` [PATCH 31/33] drm/xe/tests: " Matt Roper
2025-11-07 18:13 ` [PATCH 32/33] drm/xe/sysfs: Use scope-based runtime power management Matt Roper
2025-11-07 18:13 ` [PATCH 33/33] drm/xe/debugfs: Use scope-based runtime PM Matt Roper
2025-11-07 18:18 ` [PATCH 00/33] Scope-based forcewake and " Matt Roper
2025-11-07 20:43 ` ✗ CI.checkpatch: warning for " Patchwork
2025-11-07 20:45 ` ✓ CI.KUnit: success " Patchwork
2025-11-07 21:21 ` ✓ Xe.CI.BAT: " Patchwork
2025-11-09 3:59 ` ✗ Xe.CI.Full: failure " Patchwork
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox