Intel-XE Archive on lore.kernel.org
From: Rodrigo Vivi <rodrigo.vivi@intel.com>
To: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: <intel-xe@lists.freedesktop.org>,
	Lucas De Marchi <lucas.demarchi@intel.com>
Subject: Re: [PATCH 09/14] drm/xe/pf: Allow bulk change all VFs priority using sysfs
Date: Fri, 24 Oct 2025 15:47:07 -0400	[thread overview]
Message-ID: <aPvXu8JW10pOSKOu@intel.com> (raw)
In-Reply-To: <20251020182414.576-10-michal.wajdeczko@intel.com>

On Mon, Oct 20, 2025 at 08:24:09PM +0200, Michal Wajdeczko wrote:
> It is expected to be common practice to configure the same level
> of scheduling priority across all VFs and the PF (at least as a
> starting point). Due to current GuC FW limitations, it is also the
> only way to change VF priority.
> 
> Add a write-only sysfs attribute that applies the requested
> priority level to all VFs and the PF at once.
> 
>   /sys/bus/pci/drivers/xe/BDF/
>   ├── sriov_admin/
>       ├── .bulk_profile
>       │   └── sched_priority		[WO] low, normal

Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>

> 
> Writing "low" to this write-only attribute will change the PF and
> VF scheduling priority on all tiles/GTs to LOW (a function will
> be scheduled only if it has work submitted). Similarly, writing
> "normal" will change the functions' priority to NORMAL (functions
> will be scheduled irrespective of whether they have work or not).
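
For illustration, using the attribute from userspace might look like the
following (BDF is the same placeholder for the device's PCI address as in
the tree above; the path is taken from the patch, the address is made up):

```shell
# Set all functions (PF + VFs) on all tiles/GTs to LOW priority;
# writing "normal" instead restores NORMAL priority for all of them.
# Replace BDF with the actual device address, e.g. 0000:03:00.0.
echo low > /sys/bus/pci/drivers/xe/BDF/sriov_admin/.bulk_profile/sched_priority
```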
> 
> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
> Cc: Lucas De Marchi <lucas.demarchi@intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_sriov_pf_provision.c |  1 +
>  drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c     | 42 +++++++++++++++++++++-
>  2 files changed, 42 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_provision.c b/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
> index 3a3806055616..89b2feb43b77 100644
> --- a/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
> +++ b/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
> @@ -6,6 +6,7 @@
>  #include "xe_assert.h"
>  #include "xe_device.h"
>  #include "xe_gt_sriov_pf_config.h"
> +#include "xe_gt_sriov_pf_policy.h"
>  #include "xe_sriov.h"
>  #include "xe_sriov_pf_helpers.h"
>  #include "xe_sriov_pf_provision.h"
> diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
> index 5c445094e223..26ba9a2efec9 100644
> --- a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
> +++ b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
> @@ -24,7 +24,8 @@
>   *     ├── ...
>   *     ├── .bulk_profile
>   *     │   ├── exec_quantum_ms
> - *     │   └── preempt_timeout_us
> + *     │   ├── preempt_timeout_us
> + *     │   └── sched_priority
>   *     ├── pf/
>   *     │   ├── ...
>   *     │   └── profile
> @@ -108,9 +109,48 @@ static XE_SRIOV_DEV_ATTR_WO(NAME)
>  DEFINE_SIMPLE_BULK_PROVISIONING_SRIOV_DEV_ATTR_WO(exec_quantum_ms, eq, u32);
>  DEFINE_SIMPLE_BULK_PROVISIONING_SRIOV_DEV_ATTR_WO(preempt_timeout_us, pt, u32);
>  
> +static const char * const sched_priority_names[] = {
> +	[GUC_SCHED_PRIORITY_LOW] = "low",
> +	[GUC_SCHED_PRIORITY_NORMAL] = "normal",
> +	[GUC_SCHED_PRIORITY_HIGH] = "high",
> +};
> +
> +static bool sched_priority_high_allowed(unsigned int vfid)
> +{
> +	/* As of today GuC FW allows to select 'high' priority only for the PF. */
> +	return vfid == PFID;
> +}
> +
> +static bool sched_priority_bulk_high_allowed(struct xe_device *xe)
> +{
> +	/* all VFs are equal - it's sufficient to check VF1 only */
> +	return sched_priority_high_allowed(VFID(1));
> +}
> +
> +static ssize_t xe_sriov_dev_attr_sched_priority_store(struct xe_device *xe,
> +						      const char *buf, size_t count)
> +{
> +	size_t num_priorities = ARRAY_SIZE(sched_priority_names);
> +	int match;
> +	int err;
> +
> +	if (!sched_priority_bulk_high_allowed(xe))
> +		num_priorities--;
> +
> +	match = __sysfs_match_string(sched_priority_names, num_priorities, buf);
> +	if (match < 0)
> +		return -EINVAL;
> +
> +	err = xe_sriov_pf_provision_bulk_apply_priority(xe, match);
> +	return err ?: count;
> +}
> +
> +static XE_SRIOV_DEV_ATTR_WO(sched_priority);
> +
>  static struct attribute *bulk_profile_dev_attrs[] = {
>  	&xe_sriov_dev_attr_exec_quantum_ms.attr,
>  	&xe_sriov_dev_attr_preempt_timeout_us.attr,
> +	&xe_sriov_dev_attr_sched_priority.attr,
>  	NULL
>  };
>  
> -- 
> 2.47.1
> 
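For reference, the lookup that the store handler above delegates to
__sysfs_match_string() can be sketched in plain userspace C. This is a
simplified stand-in, not the kernel implementation: `prio_match` and
`prio_names` are hypothetical names, and the newline tolerance mimics
sysfs_streq(), which accepts the trailing newline that `echo` appends.
Passing a reduced count, as the patch does when "high" is not allowed,
simply hides the tail of the table from the search.

```c
#include <stddef.h>
#include <string.h>

/* Userspace sketch of a __sysfs_match_string()-style lookup. */
static const char *const prio_names[] = { "low", "normal", "high" };

static int prio_match(const char *const *names, size_t n, const char *buf)
{
	size_t i;

	for (i = 0; i < n; i++) {
		size_t len = strlen(names[i]);

		/* Match the whole token, tolerating one trailing newline. */
		if (strncmp(buf, names[i], len) == 0 &&
		    (buf[len] == '\0' ||
		     (buf[len] == '\n' && buf[len + 1] == '\0')))
			return (int)i;
	}
	return -1; /* the kernel helper's caller maps this to -EINVAL */
}
```

With n = 2 the "high" entry is never compared, which is how the patch
rejects "high" when the GuC FW only permits it for the PF.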

Thread overview: 44+ messages
2025-10-20 18:24 [PATCH 00/14] PF: Add sriov_admin sysfs tree Michal Wajdeczko
2025-10-20 18:24 ` [PATCH 01/14] drm/xe/pf: Prepare sysfs for SR-IOV admin attributes Michal Wajdeczko
2025-10-24 19:43   ` Rodrigo Vivi
2025-10-27 17:11   ` Lucas De Marchi
2025-10-27 17:59     ` Michal Wajdeczko
2025-10-27 18:30       ` Lucas De Marchi
2025-10-20 18:24 ` [PATCH 02/14] drm/xe/pf: Take RPM during calls to SR-IOV attr.store() Michal Wajdeczko
2025-10-27 17:14   ` Lucas De Marchi
2025-10-20 18:24 ` [PATCH 03/14] drm/xe/pf: Allow change PF and VFs EQ/PT using sysfs Michal Wajdeczko
2025-10-24 19:45   ` Rodrigo Vivi
2025-10-27 17:27   ` Lucas De Marchi
2025-10-27 18:09     ` Michal Wajdeczko
2025-10-27 18:32       ` Lucas De Marchi
2025-10-20 18:24 ` [PATCH 04/14] drm/xe/pf: Relax report helper to accept PF in bulk configs Michal Wajdeczko
2025-10-27 18:50   ` Lucas De Marchi
2025-10-20 18:24 ` [PATCH 05/14] drm/xe/pf: Add functions to bulk configure EQ/PT on GT Michal Wajdeczko
2025-10-27 19:03   ` Lucas De Marchi
2025-10-27 20:12     ` Michal Wajdeczko
2025-10-20 18:24 ` [PATCH 06/14] drm/xe/pf: Add functions to bulk provision EQ/PT Michal Wajdeczko
2025-10-27 19:18   ` Lucas De Marchi
2025-10-20 18:24 ` [PATCH 07/14] drm/xe/pf: Allow bulk change all VFs EQ/PT using sysfs Michal Wajdeczko
2025-10-24 19:46   ` Rodrigo Vivi
2025-10-27 19:28   ` Lucas De Marchi
2025-10-27 20:15     ` Michal Wajdeczko
2025-10-20 18:24 ` [PATCH 08/14] drm/xe/pf: Add functions to provision scheduling priority Michal Wajdeczko
2025-10-28 11:17   ` Piotr Piórkowski
2025-10-20 18:24 ` [PATCH 09/14] drm/xe/pf: Allow bulk change all VFs priority using sysfs Michal Wajdeczko
2025-10-24 19:47   ` Rodrigo Vivi [this message]
2025-10-20 18:24 ` [PATCH 10/14] drm/xe/pf: Allow change PF scheduling " Michal Wajdeczko
2025-10-24 19:47   ` Rodrigo Vivi
2025-10-20 18:24 ` [PATCH 11/14] drm/xe/pf: Promote xe_pci_sriov_get_vf_pdev Michal Wajdeczko
2025-10-28  9:57   ` Piotr Piórkowski
2025-10-28 12:22     ` Michal Wajdeczko
2025-10-28 16:03       ` Piotr Piórkowski
2025-10-20 18:24 ` [PATCH 12/14] drm/xe/pf: Add sysfs device symlinks to enabled VFs Michal Wajdeczko
2025-10-24 19:47   ` Rodrigo Vivi
2025-10-20 18:24 ` [PATCH 13/14] drm/xe/pf: Allow to stop and reset VF using sysfs Michal Wajdeczko
2025-10-24 19:51   ` Rodrigo Vivi
2025-10-27 20:58     ` Lucas De Marchi
2025-10-20 18:24 ` [PATCH 14/14] drm/xe/pf: Add documentation for sriov_admin attributes Michal Wajdeczko
2025-10-27 16:44   ` Rodrigo Vivi
2025-10-21  4:35 ` ✗ CI.checkpatch: warning for PF: Add sriov_admin sysfs tree Patchwork
2025-10-21  4:36 ` ✓ CI.KUnit: success " Patchwork
2025-10-21 10:19 ` ✗ Xe.CI.Full: failure " Patchwork

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=aPvXu8JW10pOSKOu@intel.com \
    --to=rodrigo.vivi@intel.com \
    --cc=intel-xe@lists.freedesktop.org \
    --cc=lucas.demarchi@intel.com \
    --cc=michal.wajdeczko@intel.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.