From: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
To: Michal Wajdeczko <michal.wajdeczko@intel.com>,
	<intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH v3 04/12] drm/xe/sriov: Add support for enabling scheduler groups
Date: Thu, 11 Dec 2025 15:00:33 -0800	[thread overview]
Message-ID: <7dacdde0-805c-4c57-82d9-5817d4050ea8@intel.com> (raw)
In-Reply-To: <d53cee53-e649-4997-bede-3cdae61d9ebb@intel.com>



On 12/11/2025 10:59 AM, Michal Wajdeczko wrote:
>
> On 12/11/2025 2:57 AM, Daniele Ceraolo Spurio wrote:
>> Scheduler groups are enabled by sending a specific policy configuration
>> KLV to the GuC. We don't allow changing this policy while there are
>> active VFs, since the expectation is that VFs only check whether the
>> feature is enabled during driver initialization.
>>
>> The functions added by this patch will be used by sysfs/debugfs, coming
>> in follow up patches.
>>
>> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
>> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
>> ---
>> v2: code improvements, add GUC_MAX_SCHED_GROUPS define, don't add
>>      XE_SRIOV_SCHED_GROUPS_NONE to supported_modes (Michal)
>> v3: fix enum/integer mismatch, use GUC_MAX_SCHED_GROUPS to define the
>>      max KLV length and not the other way around
>> ---
>>   drivers/gpu/drm/xe/abi/guc_klvs_abi.h         |  19 +++
>>   drivers/gpu/drm/xe/xe_gt_sriov_pf_policy.c    | 151 ++++++++++++++++++
>>   drivers/gpu/drm/xe/xe_gt_sriov_pf_policy.h    |   3 +
>>   .../gpu/drm/xe/xe_gt_sriov_pf_policy_types.h  |   6 +
>>   drivers/gpu/drm/xe/xe_guc_klv_helpers.c       |   2 +
>>   5 files changed, 181 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/xe/abi/guc_klvs_abi.h b/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
>> index 265a135e7061..f0a87a1cb12f 100644
>> --- a/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
>> +++ b/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
>> @@ -8,6 +8,8 @@
>>   
>>   #include <linux/types.h>
>>   
>> +#include "abi/guc_scheduler_abi.h"
>> +
>>   /**
>>    * DOC: GuC KLV
>>    *
>> @@ -200,6 +202,20 @@ enum  {
>>    *      :0: adverse events are not counted (default)
>>    *      :n: sample period in milliseconds
>>    *
>> + * _`GUC_KLV_VGT_POLICY_ENGINE_GROUP_CONFIG` : 0x8004
>> + *      This config allows the PF to split the engines across scheduling groups.
>> + *      Each group is independently timesliced across VFs, allowing different
>> + *      VFs to be active on the HW at the same time. When enabling this feature,
>> + *      all engines must be assigned to a group (and only one group), or they
>> + *      will be excluded from scheduling after this KLV is sent. To enable
>> + *      the groups, the driver must provide a masks array with
>> + *      GUC_MAX_ENGINE_CLASSES entries for each group, with each mask indicating
>> + *      which logical instances of that class belong to the group. Therefore,
>> + *      the length of this KLV when enabling groups is
>> + *      num_groups * GUC_MAX_ENGINE_CLASSES. To disable the groups, the driver
>> + *      must send the KLV without any payload (i.e. len = 0). The maximum
>> + *      number of groups is 8.
>> + *
>>    * _`GUC_KLV_VGT_POLICY_RESET_AFTER_VF_SWITCH` : 0x8D00
>>    *      This enum is to reset utilized HW engine after VF Switch (i.e to clean
>>    *      up Stale HW register left behind by previous VF)
>> @@ -214,6 +230,9 @@ enum  {
>>   #define GUC_KLV_VGT_POLICY_ADVERSE_SAMPLE_PERIOD_KEY	0x8002
>>   #define GUC_KLV_VGT_POLICY_ADVERSE_SAMPLE_PERIOD_LEN	1u
>>   
>> +#define GUC_KLV_VGT_POLICY_ENGINE_GROUP_CONFIG_KEY	0x8004
>> +#define GUC_KLV_VGT_POLICY_ENGINE_GROUP_MAX_COUNT	GUC_MAX_SCHED_GROUPS
> nit: maybe we should still add LEN defines?
>
> #define GUC_KLV_VGT_POLICY_ENGINE_GROUP_CONFIG_MIN_LEN 0
> #define GUC_KLV_VGT_POLICY_ENGINE_GROUP_CONFIG_MAX_LEN \
> 	(GUC_MAX_ENGINE_CLASSES * GUC_MAX_SCHED_GROUPS)
>
>> +
>>   #define GUC_KLV_VGT_POLICY_RESET_AFTER_VF_SWITCH_KEY	0x8D00
>>   #define GUC_KLV_VGT_POLICY_RESET_AFTER_VF_SWITCH_LEN	1u
>>   
>> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_policy.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_policy.c
>> index 003860661687..7738d515ea9e 100644
>> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_policy.c
>> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_policy.c
>> @@ -97,6 +97,23 @@ static int pf_push_policy_u32(struct xe_gt *gt, u16 key, u32 value)
>>   	return pf_push_policy_klvs(gt, 1, klv, ARRAY_SIZE(klv));
>>   }
>>   
>> +static int pf_push_policy_payload(struct xe_gt *gt, u16 key, void *payload, u32 num_dwords)
>> +{
>> +	CLASS(xe_guc_buf, buf)(&gt->uc.guc.buf, GUC_KLV_LEN_MIN + num_dwords);
>> +	u32 *klv;
>> +
>> +	if (!xe_guc_buf_is_valid(buf))
>> +		return -ENOBUFS;
>> +
>> +	klv = xe_guc_buf_cpu_ptr(buf);
>> +
>> +	klv[0] = PREP_GUC_KLV(key, num_dwords);
>> +	if (num_dwords)
>> +		memcpy(&klv[1], payload, num_dwords * sizeof(u32));
>> +
>> +	return pf_push_policy_buf_klvs(gt, 1, buf, GUC_KLV_LEN_MIN + num_dwords);
>> +}
>> +
>>   static int pf_update_policy_bool(struct xe_gt *gt, u16 key, bool *policy, bool value)
>>   {
>>   	int err;
>> @@ -397,6 +414,17 @@ static void pf_sched_group_media_slices(struct xe_gt *gt, struct guc_sched_group
>>   	if (group < 2)
>>   		return;
>>   
>> +	/*
>> +	 * If we have more groups than the GuC can support then we don't want to
>> +	 * expose this specific mode, because the GuC will return an error if we
>> +	 * try to enable it.
>> +	 */
>> +	if (group > gt->sriov.pf.policy.guc.sched_groups.max_groups) {
>> +		xe_gt_sriov_notice(gt, "media_slice mode has too many groups: %u vs %u\n",
>> +				   group, gt->sriov.pf.policy.guc.sched_groups.max_groups);
> nit: is this something that could happen in production build on production platform?
> maybe assert or dbg will be sufficient

My worry here is with derivative platforms, because those tend to be
GuC-compatible with the base platform. Let's say we get a derivative
with 3 media slices; a new GuC would be released to support 3 groups,
but the old GuC would likely still run on the new platform while only
supporting 2 groups, which would trip this check. It's unlikely, but
not impossible.
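
To make it concrete, here is the check from the patch again with
hypothetical numbers (not tied to any released GuC):

	/* hypothetical derivative platform: 3 media slices -> group == 3 */
	/* older, still-compatible GuC blob: max_groups == 2 */
	if (group > gt->sriov.pf.policy.guc.sched_groups.max_groups)
		return;	/* 3 > 2: media_slice mode is simply not exposed,
			 * instead of the GuC rejecting the KLV when someone
			 * tries to enable it later */

That's also why I'd prefer a notice over an assert here: it's a
legitimate (if unlikely) runtime condition rather than a driver bug.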

>
>> +		return;
>> +	}
>> +
>>   	/* The GuC expects an array with a guc_sched_group entry for each group */
>>   	values = drmm_kcalloc(&gt_to_xe(gt)->drm, group, sizeof(struct guc_sched_group),
>>   			      GFP_KERNEL);
>> @@ -459,6 +487,15 @@ static void pf_init_sched_groups(struct xe_gt *gt)
>>   	if (!xe_sriov_gt_pf_policy_has_sched_groups_support(gt))
>>   		return;
>>   
>> +	/*
>> +	 * The GuC interface supports up to 8 groups. However, the GuC only
>> +	 * fully allocates resources for a subset of groups, based on the number
>> +	 * of engines and expected usage. The plan is for this to become
>> +	 * queryable via H2G, but for now GuC FW for all devices supports a
>> +	 * maximum of 2 groups so we can just hardcode that.
>> +	 */
>> +	gt->sriov.pf.policy.guc.sched_groups.max_groups = 2;
>> +
>>   	for (m = 0; m < XE_SRIOV_SCHED_GROUPS_MODES_COUNT; m++) {
>>   		u32 *num_groups = &gt->sriov.pf.policy.guc.sched_groups.modes[m].num_groups;
>>   		struct guc_sched_group **groups =
>> @@ -478,14 +515,127 @@ static void pf_init_sched_groups(struct xe_gt *gt)
>>   		}
>>   
>>   		xe_gt_assert(gt, *num_groups < GUC_MAX_SCHED_GROUPS);
>> +
>> +		if (*num_groups)
>> +			gt->sriov.pf.policy.guc.sched_groups.supported_modes |= BIT(m);
>>   	}
>>   }
>>   
>> +/**
>> + * xe_sriov_gt_pf_policy_has_multi_group_modes() - check whether the GT supports
>> + * any scheduler modes that have multiple groups
>> + * @gt: the &xe_gt to check
>> + *
>> + * This function can only be called on PF.
>> + *
>> + * Return: true if the GT supports modes with multiple groups, false otherwise.
>> + */
>> +bool xe_sriov_gt_pf_policy_has_multi_group_modes(struct xe_gt *gt)
>> +{
>> +	return gt->sriov.pf.policy.guc.sched_groups.supported_modes;
>> +}
>> +
>> +/**
>> + * xe_sriov_gt_pf_policy_has_sched_group_mode() - check whether the GT supports
>> + * a specific scheduler group mode
>> + * @gt: the &xe_gt to check
>> + * @mode: the mode to check
>> + *
>> + * This function can only be called on PF.
>> + *
>> + * Return: true if the GT supports the specified mode, false otherwise.
>> + */
>> +bool xe_sriov_gt_pf_policy_has_sched_group_mode(struct xe_gt *gt, u32 mode)
> nit: shouldn't this 'mode' param be declared as enum xe_sriov_sched_group_modes ?

I wanted to avoid having enum xe_sriov_sched_group_modes in the .h, 
since that would require including the types.h as well, but if you think 
that's worth it I'll add it in.
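
If we do go that way, the header side would look roughly like this (just
a sketch of what I'd add, nothing beyond the include and the new
parameter type):

	/* xe_gt_sriov_pf_policy.h */
	#include "xe_gt_sriov_pf_policy_types.h" /* for enum xe_sriov_sched_group_modes */

	bool xe_sriov_gt_pf_policy_has_sched_group_mode(struct xe_gt *gt,
							enum xe_sriov_sched_group_modes mode);

so the extra cost is really just pulling the types header (and anything
it includes) into every user of xe_gt_sriov_pf_policy.h.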

Daniele

>
>> +{
>> +	if (mode == XE_SRIOV_SCHED_GROUPS_DISABLED)
>> +		return true;
>> +
>> +	return gt->sriov.pf.policy.guc.sched_groups.supported_modes & BIT(mode);
>> +}
>> +
>> +static int __pf_provision_sched_groups(struct xe_gt *gt, u32 mode)
>> +{
>> +	struct guc_sched_group *groups = gt->sriov.pf.policy.guc.sched_groups.modes[mode].groups;
>> +	u32 num_groups = gt->sriov.pf.policy.guc.sched_groups.modes[mode].num_groups;
>> +
>> +	return pf_push_policy_payload(gt, GUC_KLV_VGT_POLICY_ENGINE_GROUP_CONFIG_KEY,
>> +				      groups, num_groups * GUC_MAX_ENGINE_CLASSES);
>> +}
>> +
>> +static int pf_provision_sched_groups(struct xe_gt *gt, u32 mode)
>> +{
>> +	int err;
>> +
>> +	xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
>> +	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
>> +
>> +	if (!xe_sriov_gt_pf_policy_has_sched_group_mode(gt, mode))
>> +		return -EINVAL;
>> +
>> +	/* already in the desired mode */
>> +	if (gt->sriov.pf.policy.guc.sched_groups.current_mode == mode)
>> +		return 0;
>> +
>> +	/*
>> +	 * We don't allow changing this with VFs active since it is hard for
>> +	 * VFs to check.
>> +	 */
>> +	if (xe_sriov_pf_num_vfs(gt_to_xe(gt)))
>> +		return -EBUSY;
>> +
>> +	err = __pf_provision_sched_groups(gt, mode);
>> +	if (err)
>> +		return err;
>> +
>> +	gt->sriov.pf.policy.guc.sched_groups.current_mode = mode;
>> +
>> +	return 0;
>> +}
>> +
>> +static int pf_reprovision_sched_groups(struct xe_gt *gt)
>> +{
>> +	xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
>> +	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
>> +
>> +	/* We only have something to provision if we have possible groups */
>> +	if (!xe_sriov_gt_pf_policy_has_multi_group_modes(gt))
>> +		return 0;
>> +
>> +	return __pf_provision_sched_groups(gt, gt->sriov.pf.policy.guc.sched_groups.current_mode);
>> +}
>> +
>> +static void pf_sanitize_sched_groups(struct xe_gt *gt)
>> +{
>> +	xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
>> +	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
>> +
>> +	gt->sriov.pf.policy.guc.sched_groups.current_mode = XE_SRIOV_SCHED_GROUPS_DISABLED;
>> +}
>> +
>> +/**
>> + * xe_gt_sriov_pf_policy_set_sched_groups_mode() - Control the 'sched_groups' policy.
>> + * @gt: the &xe_gt where to apply the policy
>> + * @value: the sched_group mode to be activated
>> + *
>> + * This function can only be called on PF.
>> + *
>> + * Return: 0 on success or a negative error code on failure.
>> + */
>> +int xe_gt_sriov_pf_policy_set_sched_groups_mode(struct xe_gt *gt, u32 value)
>> +{
>> +	if (!xe_sriov_gt_pf_policy_has_multi_group_modes(gt))
>> +		return -ENODEV;
> nit: maybe at this point we could just assert and force the caller (debugfs) to check?
>
>> +
>> +	guard(mutex)(xe_gt_sriov_pf_master_mutex(gt));
>> +	return pf_provision_sched_groups(gt, value);
>> +}
>> +
>>   static void pf_sanitize_guc_policies(struct xe_gt *gt)
>>   {
>>   	pf_sanitize_sched_if_idle(gt);
>>   	pf_sanitize_reset_engine(gt);
>>   	pf_sanitize_sample_period(gt);
>> +	pf_sanitize_sched_groups(gt);
>>   }
>>   
>>   /**
>> @@ -524,6 +674,7 @@ int xe_gt_sriov_pf_policy_reprovision(struct xe_gt *gt, bool reset)
>>   	err |= pf_reprovision_sched_if_idle(gt);
>>   	err |= pf_reprovision_reset_engine(gt);
>>   	err |= pf_reprovision_sample_period(gt);
>> +	err |= pf_reprovision_sched_groups(gt);
>>   	mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));
>>   
>>   	xe_pm_runtime_put(gt_to_xe(gt));
>> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_policy.h b/drivers/gpu/drm/xe/xe_gt_sriov_pf_policy.h
>> index f5e3b2595063..d1b1fa9f0a09 100644
>> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_policy.h
>> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_policy.h
>> @@ -18,6 +18,9 @@ bool xe_gt_sriov_pf_policy_get_reset_engine(struct xe_gt *gt);
>>   int xe_gt_sriov_pf_policy_set_sample_period(struct xe_gt *gt, u32 value);
>>   u32 xe_gt_sriov_pf_policy_get_sample_period(struct xe_gt *gt);
>>   bool xe_sriov_gt_pf_policy_has_sched_groups_support(struct xe_gt *gt);
>> +bool xe_sriov_gt_pf_policy_has_multi_group_modes(struct xe_gt *gt);
>> +bool xe_sriov_gt_pf_policy_has_sched_group_mode(struct xe_gt *gt, u32 mode);
>> +int xe_gt_sriov_pf_policy_set_sched_groups_mode(struct xe_gt *gt, u32 value);
>>   
>>   void xe_gt_sriov_pf_policy_init(struct xe_gt *gt);
>>   void xe_gt_sriov_pf_policy_sanitize(struct xe_gt *gt);
>> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_policy_types.h b/drivers/gpu/drm/xe/xe_gt_sriov_pf_policy_types.h
>> index d228cadcd8b0..04015fb907ee 100644
>> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_policy_types.h
>> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_policy_types.h
>> @@ -24,6 +24,9 @@ enum xe_sriov_sched_group_modes {
>>   
>>   /**
>>    * struct xe_gt_sriov_scheduler_groups - Scheduler groups policy info
>> + * @max_groups: max number of groups supported by the GuC for the platform
>> + * @supported_modes: mask of supported modes
>> + * @current_mode: active scheduler groups mode
>>    * @modes: array of masks and their number for each mode
>>    * @modes.groups: array of engine instance groups in given mode, with each group
>>    *                consisting of GUC_MAX_ENGINE_CLASSES engine instances masks. A
>> @@ -33,6 +36,9 @@ enum xe_sriov_sched_group_modes {
>>    *                    are in the same group.
>>    */
>>   struct xe_gt_sriov_scheduler_groups {
>> +	u8 max_groups;
>> +	u32 supported_modes;
>> +	enum xe_sriov_sched_group_modes current_mode;
>>   	struct {
>>   		struct guc_sched_group *groups;
>>   		u32 num_groups;
>> diff --git a/drivers/gpu/drm/xe/xe_guc_klv_helpers.c b/drivers/gpu/drm/xe/xe_guc_klv_helpers.c
>> index 146a6eda9e06..1b08b443606e 100644
>> --- a/drivers/gpu/drm/xe/xe_guc_klv_helpers.c
>> +++ b/drivers/gpu/drm/xe/xe_guc_klv_helpers.c
>> @@ -26,6 +26,8 @@ const char *xe_guc_klv_key_to_string(u16 key)
>>   		return "sched_if_idle";
>>   	case GUC_KLV_VGT_POLICY_ADVERSE_SAMPLE_PERIOD_KEY:
>>   		return "sample_period";
>> +	case GUC_KLV_VGT_POLICY_ENGINE_GROUP_CONFIG_KEY:
>> +		return "engine_group_config";
>>   	case GUC_KLV_VGT_POLICY_RESET_AFTER_VF_SWITCH_KEY:
>>   		return "reset_engine";
>>   	/* VF CFG keys */
> again, just nits, so:
>
> Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
>

