Intel-XE Archive on lore.kernel.org
From: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
To: Michal Wajdeczko <michal.wajdeczko@intel.com>,
	<intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH 07/10] drm/xe/sriov: Prep for multiple exec quantums and preemption timeouts
Date: Fri, 5 Dec 2025 17:55:20 -0800
Message-ID: <92fa5ba5-ed77-4906-b7a9-a4a56a866bb1@intel.com>
In-Reply-To: <bd74d348-a8bc-4723-8148-31641be1b012@intel.com>



On 12/2/2025 8:42 AM, Michal Wajdeczko wrote:
>
> On 11/27/2025 2:45 AM, Daniele Ceraolo Spurio wrote:
>> Each scheduler group can be independently configured with its own exec
>> quantum and preemption timeouts. The existing KLVs to configure those
>> parameter will apply the value to all groups (even if they're not
> typo: parameters
>
>> enabled at the moment).
>>
>> When scheduler groups are disable the GuC used the values from Group 0.
> typo: ... disabled then GuC uses ... ?
>
>> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
>> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
>> ---
>>   drivers/gpu/drm/xe/abi/guc_klvs_abi.h         |  7 ++++--
>>   drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c    | 25 +++++++++++++------
>>   .../gpu/drm/xe/xe_gt_sriov_pf_config_types.h  |  5 ++--
>>   3 files changed, 25 insertions(+), 12 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/abi/guc_klvs_abi.h b/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
>> index a6dce9da339f..48f47e26132d 100644
>> --- a/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
>> +++ b/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
>> @@ -290,7 +290,9 @@ enum  {
>>    *      infinitely long compute or shader kernel. In such a scenario, the
>>    *      PF would need to trigger a VM PAUSE and then change the KLV to force
>>    *      it to take effect. Such cases might typically happen on a 1PF+1VF
>> - *      Virtualization config enabled for heavier workloads like AI/ML.
>> + *      Virtualization config enabled for heavier workloads like AI/ML. If
> start with a new empty line so it will be rendered as a separate paragraph
>
>> + *      scheduling groups are supported, the provided value is applied to all
>> + *      groups (even if they've not yet been enabled).
> should we also document here that "The scheduling groups are available starting from GuC 70.53" ?

ok

>
>>    *
>>    *      The max value for this KLV is 100 seconds, anything exceeding that
>>    *      will be clamped to the max.
>> @@ -312,7 +314,8 @@ enum  {
>>    *      In this case, the PF would need to trigger a VM PAUSE and then change
>>    *      the KLV to force it to take effect. Such cases might typically happen
>>    *      on a 1PF+1VF Virtualization config enabled for heavier workloads like
>> - *      AI/ML.
>> + *      AI/ML. If scheduling groups are supported, the provided value is applied
>> + *      to all groups (even if they've not yet been enabled).
> ditto
>
>>    *
>>    *      The max value for this KLV is 100 seconds, anything exceeding that
>>    *      will be clamped to the max.
>> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
>> index 59c5c6b4d994..eb547fedb6da 100644
>> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
>> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
>> @@ -298,10 +298,10 @@ static u32 encode_config(u32 *cfg, const struct xe_gt_sriov_config *config, bool
>>   	}
>>   
>>   	cfg[n++] = PREP_GUC_KLV_TAG(VF_CFG_EXEC_QUANTUM);
>> -	cfg[n++] = config->exec_quantum;
>> +	cfg[n++] = config->exec_quantum[0];
>>   
>>   	cfg[n++] = PREP_GUC_KLV_TAG(VF_CFG_PREEMPT_TIMEOUT);
>> -	cfg[n++] = config->preempt_timeout;
>> +	cfg[n++] = config->preempt_timeout[0];
>>   
>>   #define encode_threshold_config(TAG, ...) ({					\
>>   	cfg[n++] = PREP_GUC_KLV_TAG(VF_CFG_THRESHOLD_##TAG);			\
>> @@ -1857,12 +1857,15 @@ static int pf_provision_exec_quantum(struct xe_gt *gt, unsigned int vfid,
>>   {
>>   	struct xe_gt_sriov_config *config = pf_pick_vf_config(gt, vfid);
>>   	int err;
>> +	int i;
>>   
>>   	err = pf_push_vf_cfg_exec_quantum(gt, vfid, &exec_quantum);
>>   	if (unlikely(err))
>>   		return err;
>>   
>> -	config->exec_quantum = exec_quantum;
>> +	for (i = 0; i < GUC_KLV_VGT_POLICY_ENGINE_GROUP_MAX_COUNT; i++)
> maybe:
>
> 	for (i = 0; i < ARRAY_SIZE(config->exec_quantum); i++)

ok

>
>> +		config->exec_quantum[i] = exec_quantum;
>> +
>>   	return 0;
>>   }
>>   
>> @@ -1870,7 +1873,7 @@ static u32 pf_get_exec_quantum(struct xe_gt *gt, unsigned int vfid)
>>   {
>>   	struct xe_gt_sriov_config *config = pf_pick_vf_config(gt, vfid);
>>   
>> -	return config->exec_quantum;
>> +	return config->exec_quantum[0];
>>   }
>>   
>>   /**
>> @@ -1987,12 +1990,14 @@ static int pf_provision_preempt_timeout(struct xe_gt *gt, unsigned int vfid,
>>   {
>>   	struct xe_gt_sriov_config *config = pf_pick_vf_config(gt, vfid);
>>   	int err;
>> +	int i;
>>   
>>   	err = pf_push_vf_cfg_preempt_timeout(gt, vfid, &preempt_timeout);
>>   	if (unlikely(err))
>>   		return err;
>>   
>> -	config->preempt_timeout = preempt_timeout;
>> +	for (i = 0; i < GUC_KLV_VGT_POLICY_ENGINE_GROUP_MAX_COUNT; i++)
> ditto
>
>> +		config->preempt_timeout[i] = preempt_timeout;
>>   
>>   	return 0;
>>   }
>> @@ -2001,7 +2006,7 @@ static u32 pf_get_preempt_timeout(struct xe_gt *gt, unsigned int vfid)
>>   {
>>   	struct xe_gt_sriov_config *config = pf_pick_vf_config(gt, vfid);
>>   
>> -	return config->preempt_timeout;
>> +	return config->preempt_timeout[0];
>>   }
>>   
>>   /**
>> @@ -2180,10 +2185,14 @@ u32 xe_gt_sriov_pf_config_get_sched_priority(struct xe_gt *gt, unsigned int vfid
>>   
>>   static void pf_reset_config_sched(struct xe_gt *gt, struct xe_gt_sriov_config *config)
>>   {
>> +	int i;
>> +
>>   	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
>>   
>> -	config->exec_quantum = 0;
>> -	config->preempt_timeout = 0;
>> +	for (i = 0; i < GUC_KLV_VGT_POLICY_ENGINE_GROUP_MAX_COUNT; i++) {
>> +		config->exec_quantum[i] = 0;
>> +		config->preempt_timeout[i] = 0;
>> +	}
>>   }
>>   
>>   static int pf_provision_threshold(struct xe_gt *gt, unsigned int vfid,
>> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config_types.h b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config_types.h
>> index 686c7b3b6d7a..abf003946242 100644
>> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config_types.h
>> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config_types.h
>> @@ -6,6 +6,7 @@
>>   #ifndef _XE_GT_SRIOV_PF_CONFIG_TYPES_H_
>>   #define _XE_GT_SRIOV_PF_CONFIG_TYPES_H_
>>   
>> +#include "abi/guc_klvs_abi.h"
>>   #include "xe_ggtt_types.h"
>>   #include "xe_guc_klv_thresholds_set_types.h"
>>   
>> @@ -30,9 +31,9 @@ struct xe_gt_sriov_config {
>>   	/** @begin_db: start index of GuC doorbell ID range. */
>>   	u16 begin_db;
>>   	/** @exec_quantum: execution-quantum in milliseconds. */
>> -	u32 exec_quantum;
>> +	u32 exec_quantum[GUC_KLV_VGT_POLICY_ENGINE_GROUP_MAX_COUNT];
>>   	/** @preempt_timeout: preemption timeout in microseconds. */
>> -	u32 preempt_timeout;
>> +	u32 preempt_timeout[GUC_KLV_VGT_POLICY_ENGINE_GROUP_MAX_COUNT];
> maybe we should rename
>
> 	GUC_KLV_VGT_POLICY_ENGINE_GROUP_MAX_COUNT
> to
> 	GUC_MAX_ENGINE_GROUPS
>
> and make it independent from KLVs and define like we have
>
> 	GUC_MAX_ENGINE_CLASSES

ok, I'll do it in an earlier patch.

Daniele

>
>
>>   	/** @sched_priority: scheduling priority. */
>>   	u32 sched_priority;
>>   	/** @thresholds: GuC thresholds for adverse events notifications. */

