From: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
To: Michal Wajdeczko <michal.wajdeczko@intel.com>,
<intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH v3 08/12] drm/xe/sriov: Add debugfs with scheduler groups information
Date: Thu, 11 Dec 2025 14:44:43 -0800 [thread overview]
Message-ID: <07bbf5db-e171-40a3-83c4-12ab031b8854@intel.com> (raw)
In-Reply-To: <fb7cd4b8-f5a6-4fc2-b58b-5460cd23e8ae@intel.com>
On 12/11/2025 2:40 PM, Michal Wajdeczko wrote:
> this is PF only patch, so
>
> drm/xe/pf:
>
> On 12/11/2025 2:57 AM, Daniele Ceraolo Spurio wrote:
>> Under a new subfolder, an entry is created for each group to list the
>> engines assigned to them. We create enough entries for each possible
>> group, with the disabled groups just returning an empty list.
>>
>> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
>> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
>> ---
>> v2: drop subfolders, always register debugfs for all groups (Michal)
>> v3: store the group id as uintptr_t (Michal)
>> ---
>> drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c | 66 +++++++++++++++++++++
>> 1 file changed, 66 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c
>> index 8ac5e0e01e36..c09a89c69fad 100644
>> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c
>> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c
>> @@ -163,6 +163,10 @@ static void pf_add_policy_attrs(struct xe_gt *gt, struct dentry *parent)
>> * : ├── tile0
>> * : ├── gt0
>> * : ├── sched_groups_mode
>> + * ├── sched_groups
>> + * : ├── group0
>> + * :
>> + * └── groupN
>> */
>>
>> static const char *sched_group_mode_to_string(enum xe_sriov_sched_group_modes mode)
>> @@ -254,8 +258,56 @@ static const struct file_operations sched_groups_fops = {
>> .release = single_release,
>> };
>>
>> +static ssize_t sched_group_engines_read(struct file *file, char __user *buf,
>> + size_t count, loff_t *ppos)
>> +{
>> + struct dentry *dent = file_dentry(file);
>> + struct xe_gt *gt = extract_gt(dent->d_parent->d_parent);
>> + struct xe_gt_sriov_scheduler_groups *info = &gt->sriov.pf.policy.guc.sched_groups;
>> + struct guc_sched_group *groups = info->modes[info->current_mode].groups;
>> + u32 num_groups = info->modes[info->current_mode].num_groups;
>> + unsigned int group = (uintptr_t)extract_priv(dent);
>> + struct xe_hw_engine *hwe;
>> + enum xe_hw_engine_id id;
>> + char engines[128];
>> +
>> + engines[0] = '\0';
>> +
>> + /* If there are no groups it means that all the engines are in group 0 */
> nit: this comment is more related to the 'else' below
>
>> + if (group < num_groups) {
>> + for_each_hw_engine(hwe, gt, id) {
>> + u8 guc_class = xe_engine_class_to_guc_class(hwe->class);
>> + u32 mask = groups[group].engines[guc_class];
>> +
>> + if (mask & BIT(hwe->logical_instance)) {
>> + strlcat(engines, hwe->name, sizeof(engines));
>> + strlcat(engines, " ", sizeof(engines));
>> + }
>> + }
>> + strlcat(engines, "\n", sizeof(engines));
>> + } else if (group == 0) {
> I'm still not convinced that we should list all engines in group0,
> even if this is what GuC uses internally, as from the ABI POV,
> and from the data you have in your groups structs,
> this group is still unpopulated, and mode says it is 'disabled'
>
> IMO on debugfs we should rather focus on exposing data we maintain in the driver,
> without going into detail how firmware implementation might use it
I'd prefer to keep this, because the sched_group ET/PT settings still
work even in disabled mode, and in that case the behavior is that all
engines are considered part of group 0. Unless you'd prefer me to block
access to those debugfs files when we're in disabled mode?

Daniele
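To make the intended semantics concrete, here's a minimal userspace sketch of the behavior I'm describing (hypothetical names, plain strncat instead of the kernel's strlcat, not the driver code):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Userspace mock of the group-0 fallback: when no groups are
 * configured (disabled mode), group 0 implicitly contains every
 * engine; any other group index yields an empty list.
 */
static void list_group_engines(unsigned int group, unsigned int num_groups,
			       const char *const *engines,
			       const unsigned int *engine_group,
			       unsigned int num_engines,
			       char *buf, size_t len)
{
	unsigned int i;

	buf[0] = '\0';
	if (group < num_groups) {
		/* enabled mode: list only the engines assigned to this group */
		for (i = 0; i < num_engines; i++) {
			if (engine_group[i] == group) {
				strncat(buf, engines[i], len - strlen(buf) - 1);
				strncat(buf, " ", len - strlen(buf) - 1);
			}
		}
	} else if (group == 0) {
		/* disabled mode: all engines are considered part of group 0 */
		for (i = 0; i < num_engines; i++) {
			strncat(buf, engines[i], len - strlen(buf) - 1);
			strncat(buf, " ", len - strlen(buf) - 1);
		}
	}
}
```

The point being that group 0 is never "empty" from the consumer's point of view, which is why the ET/PT files remain meaningful in disabled mode.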
>
> so with this "default" group0 removed and nits fixed,
>
> Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
>
>> + for_each_hw_engine(hwe, gt, id) {
>> + strlcat(engines, hwe->name, sizeof(engines));
>> + strlcat(engines, " ", sizeof(engines));
>> + }
>> + strlcat(engines, "\n", sizeof(engines));
>> + }
>> +
>> + return simple_read_from_buffer(buf, count, ppos, engines, strlen(engines));
>> +}
>> +
>> +static const struct file_operations sched_group_engines_fops = {
>> + .owner = THIS_MODULE,
>> + .open = simple_open,
>> + .read = sched_group_engines_read,
>> + .llseek = default_llseek,
>> +};
>> +
>> static void pf_add_sched_groups(struct xe_gt *gt, struct dentry *parent)
>> {
>> + struct dentry *groups;
>> + u8 group;
>> +
>> xe_gt_assert(gt, gt == extract_gt(parent));
>> xe_gt_assert(gt, PFID == extract_vfid(parent));
>>
>> @@ -268,11 +320,25 @@ static void pf_add_sched_groups(struct xe_gt *gt, struct dentry *parent)
>> * We should rework the flow so that debugfs is registered after the
>> * policy init, so that we check if there are valid groups before
>> * adding the debugfs files.
>> + * Similarly, instead of using GUC_MAX_SCHED_GROUPS we could use
>> + * gt->sriov.pf.policy.guc.sched_groups.max_number_of_groups.
>> */
>> if (!xe_sriov_gt_pf_policy_has_sched_groups_support(gt))
>> return;
>>
>> debugfs_create_file("sched_groups_mode", 0644, parent, parent, &sched_groups_fops);
>> +
>> + groups = debugfs_create_dir("sched_groups", parent);
>> + if (IS_ERR(groups))
>> + return;
>> +
>> + for (group = 0; group < GUC_MAX_SCHED_GROUPS; group++) {
>> + char name[10];
>> +
>> + snprintf(name, sizeof(name), "group%u", group);
>> + debugfs_create_file(name, 0644, groups, (void *)(uintptr_t)group,
>> + &sched_group_engines_fops);
>> + }
>> }
>>
>> /*
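As an aside on the v3 changelog entry ("store the group id as uintptr_t"): the idea is the common debugfs idiom of packing a small integer into the void * private-data pointer handed to debugfs_create_file() and recovering it on the read side. A userspace sketch of the round-trip (hypothetical names, not the driver code):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Pack a small integer into a void * (as done when passing the group
 * id to debugfs_create_file()), then recover it on the other side.
 * Casting through uintptr_t keeps the integer-to-pointer conversion
 * well defined in both directions.
 */
static void *pack_group_id(unsigned int group)
{
	return (void *)(uintptr_t)group;
}

static unsigned int unpack_group_id(void *priv)
{
	return (unsigned int)(uintptr_t)priv;
}
```

This avoids allocating per-file storage just to hold an index, at the cost of only working for values that fit in a pointer.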