From: "Nilawar, Badal" <badal.nilawar@intel.com>
To: "Belgaumkar, Vinay" <vinay.belgaumkar@intel.com>,
<intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH 2/2] drm/xe: Add forcewake status to powergate_info
Date: Mon, 2 Feb 2026 22:50:17 +0530 [thread overview]
Message-ID: <421a0f41-38f5-4887-8d89-d38442d8fef4@intel.com> (raw)
In-Reply-To: <e40bb59b-d40e-46a1-9792-93019db06221@intel.com>
On 02-02-2026 22:42, Belgaumkar, Vinay wrote:
>
> On 2/1/2026 10:38 PM, Nilawar, Badal wrote:
>>
>> On 30-01-2026 23:04, Belgaumkar, Vinay wrote:
>>>
>>> On 1/30/2026 7:20 AM, Nilawar, Badal wrote:
>>>>
>>>> On 16-01-2026 04:10, Vinay Belgaumkar wrote:
>>>>> Dump forcewake status and ref counts for all domains as part
>>>>> of this debugfs. This is the sample output from gt1-
>>>>>
>>>>> $ cat /sys/kernel/debug/dri/0/gt1/powergate_info
>>>>> Media Power Gating Enabled: yes
>>>>> Media Slice0 Power Gate Status: down
>>>>> GSC Power Gate Status: down
>>>>> GT.ref_count=0, GT.forcewake=0x10000
>>>>> VDBox0.ref_count=0, VDBox0.forcewake=0x10000
>>>>> VEBox0.ref_count=0, VEBox0.forcewake=0x10000
>>>>> GSC.ref_count=0, GSC.forcewake=0x10000
>>>>>
>>>>> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
>>>>> ---
>>>>> drivers/gpu/drm/xe/xe_force_wake.c | 46 ++++++++++++++++++++++++++----
>>>>> drivers/gpu/drm/xe/xe_force_wake.h | 11 +++++++
>>>>> drivers/gpu/drm/xe/xe_gt_idle.c | 20 +++++++++++++
>>>>> 3 files changed, 71 insertions(+), 6 deletions(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/xe/xe_force_wake.c b/drivers/gpu/drm/xe/xe_force_wake.c
>>>>> index 76e054f314ee..197e2197bd0a 100644
>>>>> --- a/drivers/gpu/drm/xe/xe_force_wake.c
>>>>> +++ b/drivers/gpu/drm/xe/xe_force_wake.c
>>>>> @@ -148,12 +148,6 @@ static int domain_sleep_wait(struct xe_gt *gt,
>>>>> return __domain_wait(gt, domain, false);
>>>>> }
>>>>> -#define for_each_fw_domain_masked(domain__, mask__, fw__, tmp__) \
>>>>> - for (tmp__ = (mask__); tmp__; tmp__ &= ~BIT(ffs(tmp__) - 1)) \
>>>>> - for_each_if((domain__ = ((fw__)->domains + \
>>>>> - (ffs(tmp__) - 1))) && \
>>>>> - domain__->reg_ctl.addr)
>>>>> -
>>>>> /**
>>>>> * xe_force_wake_get() : Increase the domain refcount
>>>>> * @fw: struct xe_force_wake
>>>>> @@ -266,3 +260,43 @@ void xe_force_wake_put(struct xe_force_wake *fw, unsigned int fw_ref)
>>>>> xe_gt_WARN(gt, ack_fail, "Forcewake domain%s %#x failed to acknowledge sleep request\n",
>>>>> str_plural(hweight_long(ack_fail)), ack_fail);
>>>>> }
>>>>> +
>>>>> +const char *xe_force_wake_domain_to_str(enum xe_force_wake_domain_id id)
>>>>> +{
>>>>> + switch (id) {
>>>>> + case XE_FW_DOMAIN_ID_GT:
>>>>> + return "GT";
>>>>> + case XE_FW_DOMAIN_ID_RENDER:
>>>>> + return "Render";
>>>>> + case XE_FW_DOMAIN_ID_MEDIA:
>>>>> + return "Media";
>>>>> + case XE_FW_DOMAIN_ID_MEDIA_VDBOX0:
>>>>> + return "VDBox0";
>>>>> + case XE_FW_DOMAIN_ID_MEDIA_VDBOX1:
>>>>> + return "VDBox1";
>>>>> + case XE_FW_DOMAIN_ID_MEDIA_VDBOX2:
>>>>> + return "VDBox2";
>>>>> + case XE_FW_DOMAIN_ID_MEDIA_VDBOX3:
>>>>> + return "VDBox3";
>>>>> + case XE_FW_DOMAIN_ID_MEDIA_VDBOX4:
>>>>> + return "VDBox4";
>>>>> + case XE_FW_DOMAIN_ID_MEDIA_VDBOX5:
>>>>> + return "VDBox5";
>>>>> + case XE_FW_DOMAIN_ID_MEDIA_VDBOX6:
>>>>> + return "VDBox6";
>>>>> + case XE_FW_DOMAIN_ID_MEDIA_VDBOX7:
>>>>> + return "VDBox7";
>>>>> + case XE_FW_DOMAIN_ID_MEDIA_VEBOX0:
>>>>> + return "VEBox0";
>>>>> + case XE_FW_DOMAIN_ID_MEDIA_VEBOX1:
>>>>> + return "VEBox1";
>>>>> + case XE_FW_DOMAIN_ID_MEDIA_VEBOX2:
>>>>> + return "VEBox2";
>>>>> + case XE_FW_DOMAIN_ID_MEDIA_VEBOX3:
>>>>> + return "VEBox3";
>>>>> + case XE_FW_DOMAIN_ID_GSC:
>>>>> + return "GSC";
>>>>
>>>> How about creating a static lookup table?
>>>>
>>>> static const char * const domain_names[] = {
>>>> [XE_FW_DOMAIN_ID_GT] = "GT",
>>>> [XE_FW_DOMAIN_ID_RENDER] = "Render",
>>>> [XE_FW_DOMAIN_ID_MEDIA] = "Media",
>>>> [XE_FW_DOMAIN_ID_MEDIA_VDBOX0] = "VDBox0",
>>>> [XE_FW_DOMAIN_ID_MEDIA_VDBOX1] = "VDBox1",
>>>> [XE_FW_DOMAIN_ID_MEDIA_VDBOX2] = "VDBox2",
>>>> [XE_FW_DOMAIN_ID_MEDIA_VDBOX3] = "VDBox3",
>>>> [XE_FW_DOMAIN_ID_MEDIA_VDBOX4] = "VDBox4",
>>>> [XE_FW_DOMAIN_ID_MEDIA_VDBOX5] = "VDBox5",
>>>> [XE_FW_DOMAIN_ID_MEDIA_VDBOX6] = "VDBox6",
>>>> [XE_FW_DOMAIN_ID_MEDIA_VDBOX7] = "VDBox7",
>>>> [XE_FW_DOMAIN_ID_MEDIA_VEBOX0] = "VEBox0",
>>>> [XE_FW_DOMAIN_ID_MEDIA_VEBOX1] = "VEBox1",
>>>> [XE_FW_DOMAIN_ID_MEDIA_VEBOX2] = "VEBox2",
>>>> [XE_FW_DOMAIN_ID_MEDIA_VEBOX3] = "VEBox3",
>>>> [XE_FW_DOMAIN_ID_GSC] = "GSC",
>>>> };
>>>>
>>>> if (id < ARRAY_SIZE(domain_names) && domain_names[id])
>>>> return domain_names[id];
>>>
>>> I was trying to make it a little more dynamic where, if something
>>> changes in the FW table, we don't need to update 2 locations.
>>
>> Ok, but even with a switch-case statement, you’d still need to update
>> it whenever a new enum value is added.
>> So, updates in two places can’t be completely avoided.
>
> True. A similar thing was needed in the SR-IOV and GuC code, and
> switch/case was used there. So, just following the same method to keep
> it uniform might be better? I believe i915 used the array-definition
> method.
In xe it's also used in boot survivability, late binding, SR-IOV, etc.
But fine, let's keep the switch-case method.
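For what it's worth, if the array approach is ever revisited, a build-time
size check can keep the table from silently drifting out of sync with the
enum. A standalone sketch, using hypothetical fw_domain_id names rather
than the actual xe enum:

```c
#include <assert.h>	/* for the self-checks below */
#include <string.h>

/* Hypothetical domain ids standing in for enum xe_force_wake_domain_id;
 * the real xe enum has more entries. */
enum fw_domain_id {
	FW_DOMAIN_ID_GT,
	FW_DOMAIN_ID_RENDER,
	FW_DOMAIN_ID_MEDIA,
	FW_DOMAIN_ID_COUNT,	/* sentinel: number of valid ids */
};

static const char * const domain_names[] = {
	[FW_DOMAIN_ID_GT]     = "GT",
	[FW_DOMAIN_ID_RENDER] = "Render",
	[FW_DOMAIN_ID_MEDIA]  = "Media",
};

/* Fails the build if the table and the enum drift apart. */
_Static_assert(sizeof(domain_names) / sizeof(domain_names[0]) == FW_DOMAIN_ID_COUNT,
	       "domain_names out of sync with enum fw_domain_id");

static const char *domain_to_str(enum fw_domain_id id)
{
	/* Bounds check plus NULL check guards against gaps in the table. */
	if ((unsigned int)id < FW_DOMAIN_ID_COUNT && domain_names[id])
		return domain_names[id];
	return "Unknown";
}
```

That still leaves one place to update per new domain (the table), with the
compiler catching a forgotten entry instead of a reviewer.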
Thanks,
Badal
>
> Thanks,
>
> Vinay.
>
>>
>>>
>>> Thanks,
>>>
>>> Vinay.
>>>
>>>>
>>>> Thanks,
>>>> Badal
>>>>
>>>>> + default:
>>>>> + return "Unknown";
>>>>> + }
>>>>> +}
>>>>> diff --git a/drivers/gpu/drm/xe/xe_force_wake.h b/drivers/gpu/drm/xe/xe_force_wake.h
>>>>> index 1e2198f6a007..f7690cb34ef7 100644
>>>>> --- a/drivers/gpu/drm/xe/xe_force_wake.h
>>>>> +++ b/drivers/gpu/drm/xe/xe_force_wake.h
>>>>> @@ -19,6 +19,17 @@ unsigned int __must_check xe_force_wake_get(struct xe_force_wake *fw,
>>>>> enum xe_force_wake_domains domains);
>>>>> void xe_force_wake_put(struct xe_force_wake *fw, unsigned int fw_ref);
>>>>> +const char *xe_force_wake_domain_to_str(enum xe_force_wake_domain_id id);
>>>>> +
>>>>> +#define for_each_fw_domain_masked(domain__, mask__, fw__, tmp__) \
>>>>> + for (tmp__ = (mask__); tmp__; tmp__ &= ~BIT(ffs(tmp__) - 1)) \
>>>>> + for_each_if((domain__ = ((fw__)->domains + \
>>>>> + (ffs(tmp__) - 1))) && \
>>>>> + domain__->reg_ctl.addr)
>>>>> +
>>>>> +#define for_each_fw_domain(domain__, fw__, tmp__) \
>>>>> + for_each_fw_domain_masked(domain__, fw__->initialized_domains, fw__, tmp__)
>>>>> +
>>>>> static inline int
>>>>> xe_force_wake_ref(struct xe_force_wake *fw,
>>>>> enum xe_force_wake_domains domain)
>>>>> diff --git a/drivers/gpu/drm/xe/xe_gt_idle.c b/drivers/gpu/drm/xe/xe_gt_idle.c
>>>>> index 52436dcb6381..8e36202f1a4f 100644
>>>>> --- a/drivers/gpu/drm/xe/xe_gt_idle.c
>>>>> +++ b/drivers/gpu/drm/xe/xe_gt_idle.c
>>>>> @@ -169,6 +169,24 @@ void xe_gt_idle_disable_pg(struct xe_gt *gt)
>>>>> xe_mmio_write32(&gt->mmio, POWERGATE_ENABLE, gtidle->powergate_enable);
>>>>> }
>>>>> +static void force_wake_domains_show(struct xe_gt *gt, struct drm_printer *p)
>>>>> +{
>>>>> + struct xe_force_wake_domain *domain;
>>>>> + struct xe_force_wake *fw = gt_to_fw(gt);
>>>>> + unsigned int tmp;
>>>>> + unsigned long flags;
>>>>> +
>>>>> + spin_lock_irqsave(&fw->lock, flags);
>>>>> + for_each_fw_domain(domain, fw, tmp) {
>>>>> + drm_printf(p, "%s.ref_count=%u, %s.forcewake=0x%x\n",
>>>>> + xe_force_wake_domain_to_str(domain->id),
>>>>> + READ_ONCE(domain->ref),
>>>>> + xe_force_wake_domain_to_str(domain->id),
>>>>> + xe_mmio_read32(&gt->mmio, domain->reg_ctl));
>>>>> + }
>>>>> + spin_unlock_irqrestore(&fw->lock, flags);
>>>>> +}
>>>>> +
>>>>> /**
>>>>> * xe_gt_idle_pg_print - Xe powergating info
>>>>> * @gt: GT object
>>>>> @@ -260,6 +278,8 @@ int xe_gt_idle_pg_print(struct xe_gt *gt, struct drm_printer *p)
>>>>> str_up_down(pg_status & GSC_AWAKE_STATUS));
>>>>> }
>>>>> + force_wake_domains_show(gt, p);
>>>>> +
>>>>> return 0;
>>>>> }
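[Editorial aside: the bit walk that for_each_fw_domain_masked performs over
the domain mask can be sketched standalone in plain C. This is an
illustrative helper, not the kernel macro itself; set_bit_indices is a
hypothetical name for this sketch.]

```c
#include <assert.h>	/* for the self-check below */
#include <strings.h>	/* ffs() (POSIX) */

/* Collect the 0-based indices of set bits in mask, lowest first.
 * This is the same walk for_each_fw_domain_masked does: ffs() gives
 * the 1-based position of the lowest set bit, the body uses it, and
 * the loop update clears that bit before the next pass. */
static int set_bit_indices(unsigned int mask, int *out, int max)
{
	int n = 0;
	unsigned int tmp;

	for (tmp = mask; tmp && n < max; tmp &= ~(1u << (ffs(tmp) - 1)))
		out[n++] = ffs(tmp) - 1;

	return n;
}
```

In the kernel macro, each recovered bit index selects an entry in
fw->domains, and the inner for_each_if skips domains whose control
register was never initialized.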
Thread overview: 14+ messages
2026-01-15 22:40 [PATCH 0/2] drm/xe: Add more info to powergate_info debugfs Vinay Belgaumkar
2026-01-15 22:40 ` [PATCH 1/2] drm/xe: Add GSC to powergate_info Vinay Belgaumkar
2026-01-30 15:23 ` Nilawar, Badal
2026-01-15 22:40 ` [PATCH 2/2] drm/xe: Add forcewake status " Vinay Belgaumkar
2026-01-30 15:20 ` Nilawar, Badal
2026-01-30 17:34 ` Belgaumkar, Vinay
2026-02-02 6:38 ` Nilawar, Badal
2026-02-02 17:12 ` Belgaumkar, Vinay
2026-02-02 17:20 ` Nilawar, Badal [this message]
2026-02-03 6:07 ` Nilawar, Badal
2026-01-15 22:48 ` ✗ CI.checkpatch: warning for drm/xe: Add more info to powergate_info debugfs Patchwork
2026-01-15 22:50 ` ✓ CI.KUnit: success " Patchwork
2026-01-15 23:36 ` ✗ Xe.CI.BAT: failure " Patchwork
2026-01-16 4:34 ` ✗ Xe.CI.Full: " Patchwork