public inbox for intel-xe@lists.freedesktop.org
From: "Michał Winiarski" <michal.winiarski@intel.com>
To: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: intel-xe@lists.freedesktop.org,
	"Piotr Piórkowski" <piotr.piorkowski@intel.com>
Subject: Re: [PATCH v2 13/13] drm/xe/pf: Perform fair scheduling auto-provisioning
Date: Fri, 3 Apr 2026 13:37:52 +0200	[thread overview]
Message-ID: <ac-iEOsjky05Pw4F@nostramo> (raw)
In-Reply-To: <20260402191726.4932-14-michal.wajdeczko@intel.com>

On Thu, Apr 02, 2026 at 09:17:26PM +0200, Michal Wajdeczko wrote:
> Default VF scheduling configuration is the same as for the PF and
> includes an unlimited execution quantum (EQ) and an unlimited preemption
> timeout (PT). While this setup gives the most flexibility, it does
> not protect the PF or VFs from another VF that could constantly submit
> workloads without any gaps that would let the GuC do a VF switch.
> 
> To avoid that, do some trivial auto-provisioning and configure PF
> and all VFs with 16ms EQ and PT. This setup should allow GuC to
> perform a full round-robin with up to 63 VFs within 2s, which in
> turn should match expectations from most of the VMs using VFs.
> 
> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
> Reviewed-by: Piotr Piórkowski <piotr.piorkowski@intel.com> #v1

Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>

Thanks,
-Michał
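
[Editorial aside: the "up to 63 VFs within 2s" figure in the commit message
can be sanity-checked with a quick back-of-the-envelope sketch. Variable
and function names below are illustrative, not from the patch; the sketch
assumes the worst case where each context runs its full quantum and then
takes the full preemption timeout before the GuC can switch away.]

```python
# Worst-case round-robin budget implied by the commit message.
EXEC_QUANTUM_MS = 16      # XE_FAIR_EXEC_QUANTUM_MS
PREEMPT_TIMEOUT_MS = 16   # XE_FAIR_PREEMPT_TIMEOUT_US / 1000
NUM_CONTEXTS = 1 + 63     # PF plus up to 63 VFs

def worst_case_round_robin_ms(contexts: int) -> int:
    # Each context may consume its execution quantum plus, in the worst
    # case, the full preemption timeout before the GuC performs a switch.
    return contexts * (EXEC_QUANTUM_MS + PREEMPT_TIMEOUT_MS)

print(worst_case_round_robin_ms(NUM_CONTEXTS))  # 2048, i.e. roughly the 2 s target
```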

> ---
> v2: don't skip checking the last VF's sched-provisioning (Michal)
>     and explicitly check sched_if_idle (Michal)
> ---
>  drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c | 91 ++++++++++++++++++++++
>  drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h |  1 +
>  drivers/gpu/drm/xe/xe_sriov_pf_provision.c |  2 +
>  3 files changed, 94 insertions(+)
> 
> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> index d02c96d3041a..e112aa148dab 100644
> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> @@ -2662,6 +2662,97 @@ static bool pf_non_default_sched(struct xe_gt *gt, unsigned int vfid)
>  	       custom_sched_priority(gt, pf_get_sched_priority(gt, vfid));
>  }
>  
> +static void __pf_show_provisioned_sched(struct xe_gt *gt, unsigned int first_vf,
> +					unsigned int num_vfs, bool provisioned)
> +{
> +	__pf_show_provisioned(gt, first_vf, num_vfs, provisioned,
> +			      pf_get_exec_quantum, NULL, "EQ");
> +	__pf_show_provisioned(gt, first_vf, num_vfs, provisioned,
> +			      pf_get_preempt_timeout, NULL, "PT");
> +
> +	/* we only care about non-default priorities */
> +	if (provisioned)
> +		__pf_show_provisioned(gt, first_vf, num_vfs, true,
> +				      pf_get_sched_priority, NULL, "PRIORITY");
> +}
> +
> +static void pf_show_all_provisioned_sched(struct xe_gt *gt)
> +{
> +	__pf_show_provisioned_sched(gt, PFID, 1 + xe_gt_sriov_pf_get_totalvfs(gt), true);
> +}
> +
> +static void pf_show_unprovisioned_sched(struct xe_gt *gt, unsigned int num_vfs)
> +{
> +	__pf_show_provisioned_sched(gt, PFID, 1 + num_vfs, false);
> +}
> +
> +static bool pf_needs_provision_sched(struct xe_gt *gt, unsigned int num_vfs)
> +{
> +	unsigned int vfid;
> +
> +	for (vfid = PFID; vfid <= PFID + num_vfs; vfid++) {
> +		if (pf_non_default_sched(gt, vfid)) {
> +			pf_show_all_provisioned_sched(gt);
> +			pf_show_unprovisioned_sched(gt, num_vfs);
> +			return false;
> +		}
> +	}
> +
> +	if (xe_gt_sriov_pf_policy_get_sched_if_idle_locked(gt)) {
> +		pf_show_all_provisioned_sched(gt);
> +		pf_show_unprovisioned_sched(gt, num_vfs);
> +		return false;
> +	}
> +
> +	pf_show_all_provisioned_sched(gt);
> +	return true;
> +}
> +
> +/* With 16ms EQ/PT GuC should be able to handle up to 63 VFs within 2s */
> +#define XE_FAIR_EXEC_QUANTUM_MS		16
> +#define XE_FAIR_PREEMPT_TIMEOUT_US	16000
> +#define XE_FAIR_SCHED_PRIORITY		GUC_SCHED_PRIORITY_LOW
> +#define XE_ADMIN_PF_SCHED_PRIORITY	GUC_SCHED_PRIORITY_HIGH
> +
> +/**
> + * xe_gt_sriov_pf_config_set_fair_sched() - Provision PF and VFs with fair scheduling.
> + * @gt: the &xe_gt
> + * @num_vfs: number of VFs to provision (can't be 0)
> + *
> + * This function can only be called on PF.
> + *
> + * Return: 0 on success or a negative error code on failure.
> + */
> +int xe_gt_sriov_pf_config_set_fair_sched(struct xe_gt *gt, unsigned int num_vfs)
> +{
> +	int result = 0;
> +	int err;
> +
> +	xe_gt_assert(gt, num_vfs);
> +	xe_gt_assert(gt, XE_FAIR_EXEC_QUANTUM_MS);
> +	xe_gt_assert(gt, XE_FAIR_PREEMPT_TIMEOUT_US);
> +
> +	guard(mutex)(xe_gt_sriov_pf_master_mutex(gt));
> +
> +	if (!pf_needs_provision_sched(gt, num_vfs))
> +		return 0;
> +
> +	err = pf_bulk_set_exec_quantum(gt, XE_FAIR_EXEC_QUANTUM_MS, PFID, 1 + num_vfs);
> +	result = result ?: err;
> +	err = pf_bulk_set_preempt_timeout(gt, XE_FAIR_PREEMPT_TIMEOUT_US, PFID, 1 + num_vfs);
> +	result = result ?: err;
> +
> +	xe_gt_assert(gt, XE_FAIR_SCHED_PRIORITY == GUC_SCHED_PRIORITY_LOW);
> +	xe_gt_assert(gt, !xe_gt_sriov_pf_policy_get_sched_if_idle_locked(gt));
> +
> +	if (xe_sriov_pf_admin_only(gt_to_xe(gt))) {
> +		err = pf_provision_sched_priority(gt, PFID, XE_ADMIN_PF_SCHED_PRIORITY);
> +		result = result ?: err;
> +	}
> +
> +	return result;
> +}
> +
>  static int pf_provision_threshold(struct xe_gt *gt, unsigned int vfid,
>  				  enum xe_guc_klv_threshold_index index, u32 value)
>  {
> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
> index e9314f0a9b4e..2ec62c12ad5c 100644
> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
> @@ -78,6 +78,7 @@ u32 xe_gt_sriov_pf_config_get_threshold(struct xe_gt *gt, unsigned int vfid,
>  int xe_gt_sriov_pf_config_set_threshold(struct xe_gt *gt, unsigned int vfid,
>  					enum xe_guc_klv_threshold_index index, u32 value);
>  
> +int xe_gt_sriov_pf_config_set_fair_sched(struct xe_gt *gt, unsigned int num_vfs);
>  int xe_gt_sriov_pf_config_set_fair(struct xe_gt *gt, unsigned int vfid, unsigned int num_vfs);
>  int xe_gt_sriov_pf_config_sanitize(struct xe_gt *gt, unsigned int vfid, long timeout);
>  int xe_gt_sriov_pf_config_release(struct xe_gt *gt, unsigned int vfid, bool force);
> diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_provision.c b/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
> index e11874d689fa..0ec7ea83f12a 100644
> --- a/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
> +++ b/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
> @@ -41,6 +41,8 @@ static int pf_provision_vfs(struct xe_device *xe, unsigned int num_vfs)
>  	int err;
>  
>  	for_each_gt(gt, xe, id) {
> +		err = xe_gt_sriov_pf_config_set_fair_sched(gt, num_vfs);
> +		result = result ?: err;
>  		err = xe_gt_sriov_pf_config_set_fair(gt, VFID(1), num_vfs);
>  		result = result ?: err;
>  	}
> -- 
> 2.47.1
> 


Thread overview: 22+ messages
2026-04-02 19:17 [PATCH v2 00/13] drm/xe/pf: Improve EQ/PT provisioning Michal Wajdeczko
2026-04-02 19:17 ` [PATCH v2 01/13] drm/xe/guc: Update POLICY_SCHED_IF_IDLE documentation Michal Wajdeczko
2026-04-02 19:17 ` [PATCH v2 02/13] drm/xe/pf: Fix pf_get_sched_priority() function signature Michal Wajdeczko
2026-04-02 19:17 ` [PATCH v2 03/13] drm/xe/pf: Force new VFs prorities only once Michal Wajdeczko
2026-04-02 19:17 ` [PATCH v2 04/13] drm/xe/pf: Print applied policy KLVs Michal Wajdeczko
2026-04-02 19:17 ` [PATCH v2 05/13] drm/xe/pf: Reprovision policy settings after GT reset Michal Wajdeczko
2026-04-02 19:17 ` [PATCH v2 06/13] drm/xe/pf: Don't reprovision policies if already default Michal Wajdeczko
2026-04-02 19:17 ` [PATCH v2 07/13] drm/xe/pf: Encode scheduling priority KLV if needed Michal Wajdeczko
2026-04-02 19:17 ` [PATCH v2 08/13] drm/xe/pf: Check EQ/PT/PRIO when testing VF config Michal Wajdeczko
2026-04-02 19:17 ` [PATCH v2 09/13] drm/xe/pf: Allow to change sched_if_idle policy under lock Michal Wajdeczko
2026-04-02 19:17 ` [PATCH v2 10/13] drm/xe/pf: Reprovision scheduling to default when no VFs Michal Wajdeczko
2026-04-02 19:17 ` [PATCH v2 11/13] drm/xe/pf: Extract helper to show which VFs are provisioned Michal Wajdeczko
2026-04-02 19:17 ` [PATCH v2 12/13] drm/xe/pf: Extract helpers for bulk EQ/PT provisioning Michal Wajdeczko
2026-04-02 19:17 ` [PATCH v2 13/13] drm/xe/pf: Perform fair scheduling auto-provisioning Michal Wajdeczko
2026-04-03 11:37   ` Michał Winiarski [this message]
2026-04-02 19:32 ` ✓ CI.KUnit: success for drm/xe/pf: Improve EQ/PT provisioning (rev2) Patchwork
2026-04-02 20:12 ` ✓ Xe.CI.BAT: " Patchwork
2026-04-03  9:31 ` ✗ Xe.CI.FULL: failure " Patchwork
2026-04-03 20:48   ` Michal Wajdeczko
2026-04-11 15:55 ` ✓ CI.KUnit: success for drm/xe/pf: Improve EQ/PT provisioning (rev3) Patchwork
2026-04-11 16:38 ` ✓ Xe.CI.BAT: " Patchwork
2026-04-11 18:09 ` ✓ Xe.CI.FULL: " Patchwork
