From: Michal Wajdeczko <michal.wajdeczko@intel.com>
To: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>,
<intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH v2 07/11] drm/xe/sriov: Add debugfs with scheduler groups information
Date: Tue, 9 Dec 2025 01:08:28 +0100
Message-ID: <fa7c593a-5147-4a4a-8d12-566b2ebc40d3@intel.com>
In-Reply-To: <20251206230356.3600292-20-daniele.ceraolospurio@intel.com>
On 12/7/2025 12:04 AM, Daniele Ceraolo Spurio wrote:
> Under a new subfolder, an entry is created for each group to list the
> engines assigned to them. We create enough entries for each possible
> group, with the disabled groups just returning an empty list.
>
> v2: drop subfolders, always register debugfs for all groups (Michal)
>
> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
> ---
> drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c | 70 +++++++++++++++++++++
> 1 file changed, 70 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c
> index 1be23809e624..15f5f3a40471 100644
> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c
> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_debugfs.c
> @@ -163,6 +163,10 @@ static void pf_add_policy_attrs(struct xe_gt *gt, struct dentry *parent)
> * : ├── tile0
> * : ├── gt0
> * : ├── sched_groups_mode
> + * ├── sched_groups
> + * : ├── group0
> + * :
> + * └── groupN
> */
>
> static const char *sched_group_mode_to_string(enum xe_sriov_sched_group_modes mode)
> @@ -260,8 +264,60 @@ static const struct file_operations sched_groups_fops = {
> .release = single_release,
> };
>
> +static ssize_t sched_group_engines_read(struct file *file, char __user *buf,
> + size_t count, loff_t *ppos)
> +{
> + struct dentry *dent = file_dentry(file);
> + struct xe_gt *gt = extract_gt(dent->d_parent);
> + struct xe_gt_sriov_scheduler_groups *groups = &gt->sriov.pf.policy.guc.sched_groups;
> + u32 num_masks = groups->modes[groups->current_mode].num_masks;
> + u32 *masks = groups->modes[groups->current_mode].masks;
> + unsigned int group = GUC_MAX_SCHED_GROUPS;
> + struct xe_hw_engine *hwe;
> + enum xe_hw_engine_id id;
> + char engines[128];
> + int ret;
> +
> + ret = sscanf(dent->d_name.name, "group%u", &group);
see my comment below about the private data - with the group index stored there, this sscanf goes away
> + xe_gt_assert(gt, ret == 1 && group < GUC_MAX_SCHED_GROUPS);
> +
> + engines[0] = '\0';
> +
> + /* If there are no masks it means that all the engines are in group 0 */
> + if (num_masks >= (group + 1) * GUC_MAX_ENGINE_CLASSES) {
> + for_each_hw_engine(hwe, gt, id) {
> + u8 guc_class = xe_engine_class_to_guc_class(hwe->class);
> + u32 mask = masks[group * GUC_MAX_ENGINE_CLASSES + guc_class];
> +
> + if (mask & BIT(hwe->logical_instance)) {
> + strlcat(engines, hwe->name, sizeof(engines));
> + strlcat(engines, " ", sizeof(engines));
> + }
> + }
> + strlcat(engines, "\n", sizeof(engines));
> + } else if (group == 0) {
> + for_each_hw_engine(hwe, gt, id) {
> + strlcat(engines, hwe->name, sizeof(engines));
> + strlcat(engines, " ", sizeof(engines));
> + }
> + strlcat(engines, "\n", sizeof(engines));
> + }
> +
> + return simple_read_from_buffer(buf, count, ppos, engines, strlen(engines));
> +}
> +
> +static const struct file_operations sched_group_engines_fops = {
> + .owner = THIS_MODULE,
> + .open = simple_open,
> + .read = sched_group_engines_read,
> + .llseek = default_llseek,
> +};
> +
> static void pf_add_sched_groups(struct xe_gt *gt, struct dentry *parent)
> {
> + struct dentry *groups;
> + u8 g;
no cryptic names, please ;)
> +
> xe_gt_assert(gt, gt == extract_gt(parent));
> xe_gt_assert(gt, PFID == extract_vfid(parent));
>
> @@ -274,11 +330,25 @@ static void pf_add_sched_groups(struct xe_gt *gt, struct dentry *parent)
> * We should rework the flow so that debugfs is registered after the
> * policy init, so that we check if there are valid groups before
> * adding the debugfs files.
> + * Similarly, instead of using GUC_MAX_SCHED_GROUPS we could use
> + * gt->sriov.pf.policy.guc.sched_groups.max_number_of_groups.
> */
> if (!xe_sriov_gt_pf_policy_has_sched_groups_support(gt))
> return;
>
> debugfs_create_file("sched_groups_mode", 0644, parent, parent, &sched_groups_fops);
> +
> + groups = debugfs_create_dir("sched_groups", parent);
> + if (IS_ERR(groups))
> + return;
> + groups->d_inode->i_private = gt;
no need to store gt here, you can grab it from
parent->d_parent
or
parent->d_parent->d_parent
> +
> + for (g = 0; g < GUC_MAX_SCHED_GROUPS; g++) {
> + char name[10];
> +
> + snprintf(name, sizeof(name), "group%u", g);
> + debugfs_create_file(name, 0644, groups, parent, &sched_group_engines_fops);
why store 'parent' as private data?
better store the group index there, so you won't need to sscanf it back,
and the parent can always be retrieved from the dentry
> + }
> }
>
> /*