Intel-XE Archive on lore.kernel.org
From: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
To: Michal Wajdeczko <michal.wajdeczko@intel.com>,
	<intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH v2 04/11] drm/xe/sriov: Scheduler groups are incompatible with multi-lrc
Date: Mon, 8 Dec 2025 09:48:02 -0800	[thread overview]
Message-ID: <3d4ca5f6-8861-49e8-bbc8-4ed945c696a4@intel.com> (raw)
In-Reply-To: <00656e7e-09d7-47d8-80da-35f035c9db20@intel.com>



On 12/7/2025 1:58 PM, Michal Wajdeczko wrote:
>
> On 12/7/2025 12:04 AM, Daniele Ceraolo Spurio wrote:
>> Since engines in the same class can be divided across multiple groups,
>> the GuC does not allow scheduler groups to be active if there are
>> multi-lrc contexts. This means that:
>>
>> 1) if a MLRC context is registered when we enable scheduler groups, the
>>     GuC will silently ignore the configuration
>> 2) if a MLRC context is registered after scheduler groups are enabled,
>>     the GuC will disable the groups and generate an adverse event.
>>
>> The expectation is that the admin will ensure that all apps that use
>> MLRC on PF have been terminated before scheduler groups are created. A
>> check on PF is added anyway to make sure we don't still have contexts
>>     waiting to be cleaned up lying around.
>> On both PF and VF we block creation of new MLRC queues once scheduler
>> groups have been enabled.
>>
>> v2: move threshold handling to its own patch, move MLRC check to
>>      guc_submit.c, hide SRIOV interals from exec_queue creation code,
>>      better comments/docs (Michal)
>>
>> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
>> Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
>> ---
>>   drivers/gpu/drm/xe/abi/guc_klvs_abi.h      |  7 +++
>>   drivers/gpu/drm/xe/xe_exec_queue.c         | 19 +++++++
>>   drivers/gpu/drm/xe/xe_gt_sriov_pf.c        | 17 ++++++
>>   drivers/gpu/drm/xe/xe_gt_sriov_pf.h        |  8 +++
>>   drivers/gpu/drm/xe/xe_gt_sriov_pf_policy.c | 28 ++++++++++
>>   drivers/gpu/drm/xe/xe_gt_sriov_pf_policy.h |  1 +
>>   drivers/gpu/drm/xe/xe_gt_sriov_vf.c        | 60 ++++++++++++++++++++++
>>   drivers/gpu/drm/xe/xe_gt_sriov_vf.h        |  1 +
>>   drivers/gpu/drm/xe/xe_gt_sriov_vf_types.h  |  2 +
>>   drivers/gpu/drm/xe/xe_guc_klv_helpers.c    |  3 ++
>>   drivers/gpu/drm/xe/xe_guc_submit.c         | 21 ++++++++
>>   drivers/gpu/drm/xe/xe_guc_submit.h         |  2 +
>>   12 files changed, 169 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/xe/abi/guc_klvs_abi.h b/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
>> index 45733a87183a..edb0546fb163 100644
>> --- a/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
>> +++ b/drivers/gpu/drm/xe/abi/guc_klvs_abi.h
>> @@ -46,11 +46,18 @@
>>    *      Refers to 32 bit architecture version as reported by the HW IP.
>>    *      This key is supported on MTL+ platforms only.
>>    *      Requires GuC ABI 1.2+.
>> + *
>> + * _`GUC_KLV_GLOBAL_CFG_GROUP_SCHEDULING_AVAILABLE` : 0x3001
>> + *      Tells the driver whether scheduler groups are enabled or not.
>> + *      Requires GuC ABI 1.26+
>>    */
>>   
>>   #define GUC_KLV_GLOBAL_CFG_GMD_ID_KEY			0x3000u
>>   #define GUC_KLV_GLOBAL_CFG_GMD_ID_LEN			1u
>>   
>> +#define GUC_KLV_GLOBAL_CFG_GROUP_SCHEDULING_AVAILABLE_KEY	0x3001u
>> +#define GUC_KLV_GLOBAL_CFG_GROUP_SCHEDULING_AVAILABLE_LEN	1u
>> +
>>   /**
>>    * DOC: GuC Self Config KLVs
>>    *
>> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
>> index 226d07a3d852..df01c0664965 100644
>> --- a/drivers/gpu/drm/xe/xe_exec_queue.c
>> +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
>> @@ -16,6 +16,7 @@
>>   #include "xe_dep_scheduler.h"
>>   #include "xe_device.h"
>>   #include "xe_gt.h"
>> +#include "xe_gt_sriov_pf.h"
>>   #include "xe_gt_sriov_vf.h"
>>   #include "xe_hw_engine_class_sysfs.h"
>>   #include "xe_hw_engine_group.h"
>> @@ -718,6 +719,17 @@ static u32 calc_validate_logical_mask(struct xe_device *xe,
>>   	return return_mask;
>>   }
>>   
>> +static bool has_sched_groups(struct xe_gt *gt)
>> +{
>> +	if (IS_SRIOV_PF(gt_to_xe(gt)) && xe_gt_sriov_pf_sched_groups_enabled(gt))
>> +		return true;
>> +
>> +	if (IS_SRIOV_VF(gt_to_xe(gt)) && xe_gt_sriov_vf_sched_groups_enabled(gt))
>> +		return true;
>> +
>> +	return false;
>> +}
>> +
>>   int xe_exec_queue_create_ioctl(struct drm_device *dev, void *data,
>>   			       struct drm_file *file)
>>   {
>> @@ -810,6 +822,13 @@ int xe_exec_queue_create_ioctl(struct drm_device *dev, void *data,
>>   			return -ENOENT;
>>   		}
>>   
>> +		/* SRIOV sched groups are not compatible with multi-lrc */
>> +		if (XE_IOCTL_DBG(xe, args->width > 1 && has_sched_groups(hwe->gt))) {
>> +			up_read(&vm->lock);
>> +			xe_vm_put(vm);
>> +			return -EINVAL;
>> +		}
>> +
>>   		q = xe_exec_queue_create(xe, vm, logical_mask,
>>   					 args->width, hwe, flags,
>>   					 args->extensions);
>> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf.c
>> index 0d97a823e702..fb5c9101e275 100644
>> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf.c
>> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf.c
>> @@ -284,3 +284,20 @@ int xe_gt_sriov_pf_wait_ready(struct xe_gt *gt)
>>   	pf_flush_restart(gt);
>>   	return 0;
>>   }
>> +
>> +/**
>> + * xe_gt_sriov_pf_sched_groups_enabled - Check if multiple scheduler groups are
>> + * enabled
>> + * @gt: the &xe_gt
>> + *
>> + * This function is for PF use only.
>> + *
>> + * Return: true if sched groups are enabled, false otherwise.
>> + */
>> +bool xe_gt_sriov_pf_sched_groups_enabled(struct xe_gt *gt)
>> +{
>> +	xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
>> +
>> +	return xe_gt_sriov_pf_policy_sched_groups_enabled(gt);
>> +}
>> +
>> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf.h b/drivers/gpu/drm/xe/xe_gt_sriov_pf.h
>> index e7fde3f9937a..1ccfc7137b98 100644
>> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf.h
>> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf.h
>> @@ -6,6 +6,8 @@
>>   #ifndef _XE_GT_SRIOV_PF_H_
>>   #define _XE_GT_SRIOV_PF_H_
>>   
>> +#include <linux/types.h>
>> +
>>   struct xe_gt;
>>   
>>   #ifdef CONFIG_PCI_IOV
>> @@ -16,6 +18,7 @@ void xe_gt_sriov_pf_init_hw(struct xe_gt *gt);
>>   void xe_gt_sriov_pf_sanitize_hw(struct xe_gt *gt, unsigned int vfid);
>>   void xe_gt_sriov_pf_stop_prepare(struct xe_gt *gt);
>>   void xe_gt_sriov_pf_restart(struct xe_gt *gt);
>> +bool xe_gt_sriov_pf_sched_groups_enabled(struct xe_gt *gt);
>>   #else
>>   static inline int xe_gt_sriov_pf_init_early(struct xe_gt *gt)
>>   {
>> @@ -38,6 +41,11 @@ static inline void xe_gt_sriov_pf_stop_prepare(struct xe_gt *gt)
>>   static inline void xe_gt_sriov_pf_restart(struct xe_gt *gt)
>>   {
>>   }
>> +
>> +static inline bool xe_gt_sriov_pf_sched_groups_enabled(struct xe_gt *gt)
>> +{
>> +	return false;
>> +}
>>   #endif
>>   
>>   #endif
>> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_policy.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_policy.c
>> index 1109fec99fc3..6a682d788b02 100644
>> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_policy.c
>> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_policy.c
>> @@ -16,6 +16,7 @@
>>   #include "xe_guc_buf.h"
>>   #include "xe_guc_ct.h"
>>   #include "xe_guc_klv_helpers.h"
>> +#include "xe_guc_submit.h"
>>   #include "xe_pm.h"
>>   
>>   /*
>> @@ -567,6 +568,19 @@ static int pf_provision_sched_groups(struct xe_gt *gt, u32 mode)
>>   	if (xe_sriov_pf_num_vfs(gt_to_xe(gt)))
>>   		return -EBUSY;
>>   
>> +	/*
>> +	 * The GuC silently ignores the setting if any MLRC contexts are
>> +	 * registered. We expect the admin to make sure that all apps that use
>> +	 * MLRC are terminated before scheduler groups are enabled, so this
>> +	 * check is just to make sure that the exec_queue destruction has been
>> +	 * completed.
>> +	 */
>> +	if (mode != XE_SRIOV_SCHED_GROUPS_NONE &&
>> +	    xe_guc_has_registered_mlrc_queues(&gt->uc.guc)) {
>> +		xe_gt_sriov_notice(gt, "can't enable sched groups with active mlrc queues\n");
> s/mlrc/MLRC
>
>> +		return -EPERM;
>> +	}
>> +
>>   	err = __pf_provision_sched_groups(gt, mode);
>>   	if (err)
>>   		return err;
>> @@ -615,6 +629,20 @@ int xe_gt_sriov_pf_policy_set_sched_groups_mode(struct xe_gt *gt,
>>   	return pf_provision_sched_groups(gt, value);
>>   }
>>   
>> +/**
>> + * xe_gt_sriov_pf_policy_sched_groups_enabled() - check whether the GT has
>> + * multiple scheduler groups enabled
>> + * @gt: the &xe_gt to check
>> + *
>> + * This function can only be called on PF.
>> + *
>> + * Return: true if the GT has multiple groups enabled, false otherwise.
>> + */
>> +bool xe_gt_sriov_pf_policy_sched_groups_enabled(struct xe_gt *gt)
>> +{
>> +	return gt->sriov.pf.policy.guc.sched_groups.current_mode != XE_SRIOV_SCHED_GROUPS_NONE;
>> +}
>> +
>>   static void pf_sanitize_guc_policies(struct xe_gt *gt)
>>   {
>>   	pf_sanitize_sched_if_idle(gt);
>> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_policy.h b/drivers/gpu/drm/xe/xe_gt_sriov_pf_policy.h
>> index 6b3e294bc934..ceaf797ca21b 100644
>> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_policy.h
>> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_policy.h
>> @@ -20,6 +20,7 @@ u32 xe_gt_sriov_pf_policy_get_sample_period(struct xe_gt *gt);
>>   bool xe_sriov_gt_pf_policy_has_multi_group_modes(struct xe_gt *gt);
>>   bool xe_sriov_gt_pf_policy_has_sched_group_mode(struct xe_gt *gt, u32 mode);
>>   int xe_gt_sriov_pf_policy_set_sched_groups_mode(struct xe_gt *gt, u32 value);
>> +bool xe_gt_sriov_pf_policy_sched_groups_enabled(struct xe_gt *gt);
>>   
>>   void xe_gt_sriov_pf_policy_init(struct xe_gt *gt);
>>   void xe_gt_sriov_pf_policy_sanitize(struct xe_gt *gt);
>> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_vf.c b/drivers/gpu/drm/xe/xe_gt_sriov_vf.c
>> index 97c29c55f885..48e11c1a2d08 100644
>> --- a/drivers/gpu/drm/xe/xe_gt_sriov_vf.c
>> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_vf.c
>> @@ -438,6 +438,30 @@ u32 xe_gt_sriov_vf_gmdid(struct xe_gt *gt)
>>   	return value;
>>   }
>>   
>> +static int query_vf_sched_groups(struct xe_gt *gt)
> s/query_vf_sched_groups/vf_query_sched_groups
>
> and keep it closer to vf_cache_sched_groups_status

ok

>
>> +{
>> +	struct xe_guc *guc = &gt->uc.guc;
>> +	u32 value = 0;
>> +	int err;
>> +
>> +	xe_gt_assert(gt, IS_SRIOV_VF(gt_to_xe(gt)));
>> +
>> +	if (MAKE_GUC_VER_STRUCT(gt->sriov.vf.guc_version) < MAKE_GUC_VER(1, 26, 0))
>> +		return 0;
> nit: maybe we can split above 'check' code from rest of 'query' code?
>
> and as we have more and more cases where version check is needed, maybe it's also a time to add helper like:
>
> 	bool vf_runs_on_guc(gt, MAKE_GUC_VER)

As far as I can tell this is only the second similar check we do (with 
the other one being the one in vf_migration_ccs_bb_support_check), so 
IMO a bit early for a dedicated helper.

Daniele

>
>> +
>> +	err = guc_action_query_single_klv32(guc,
>> +					    GUC_KLV_GLOBAL_CFG_GROUP_SCHEDULING_AVAILABLE_KEY,
>> +					    &value);
>> +	if (unlikely(err)) {
>> +		xe_gt_sriov_err(gt, "Failed to obtain sched groups status (%pe)\n",
>> +				ERR_PTR(err));
>> +		return err;
>> +	}
>> +
>> +	xe_gt_sriov_dbg(gt, "sched groups %s\n", str_enabled_disabled(value));
>> +	return value;
>> +}
>> +
>>   static int vf_get_ggtt_info(struct xe_gt *gt)
>>   {
>>   	struct xe_tile *tile = gt_to_tile(gt);
>> @@ -564,6 +588,21 @@ static void vf_cache_gmdid(struct xe_gt *gt)
>>   	gt->sriov.vf.runtime.gmdid = xe_gt_sriov_vf_gmdid(gt);
>>   }
>>   
>> +static int vf_cache_sched_groups_status(struct xe_gt *gt)
>> +{
>> +	int ret;
>> +
>> +	xe_gt_assert(gt, IS_SRIOV_VF(gt_to_xe(gt)));
>> +
>> +	ret = query_vf_sched_groups(gt);
>> +	if (ret < 0)
>> +		return ret;
>> +
>> +	gt->sriov.vf.runtime.uses_sched_groups = ret;
>> +
>> +	return 0;
>> +}
>> +
>>   /**
>>    * xe_gt_sriov_vf_query_config - Query SR-IOV config data over MMIO.
>>    * @gt: the &xe_gt
>> @@ -593,12 +632,33 @@ int xe_gt_sriov_vf_query_config(struct xe_gt *gt)
>>   	if (unlikely(err))
>>   		return err;
>>   
>> +	err = vf_cache_sched_groups_status(gt);
>> +	if (unlikely(err))
>> +		return err;
>> +
>>   	if (has_gmdid(xe))
>>   		vf_cache_gmdid(gt);
>>   
>>   	return 0;
>>   }
>>   
>> +/**
>> + * xe_gt_sriov_vf_sched_groups_enabled() - Check if PF has enabled multiple
>> + * scheduler groups
>> + * @gt: the &xe_gt
>> + *
>> + * This function is for VF use only.
>> + *
>> + * Return: true if sched groups are enabled, false otherwise.
>> + */
>> +bool xe_gt_sriov_vf_sched_groups_enabled(struct xe_gt *gt)
>> +{
>> +	xe_gt_assert(gt, IS_SRIOV_VF(gt_to_xe(gt)));
>> +	xe_gt_assert(gt, gt->sriov.vf.guc_version.major);
>> +
>> +	return gt->sriov.vf.runtime.uses_sched_groups;
>> +}
>> +
>>   /**
>>    * xe_gt_sriov_vf_guc_ids - VF GuC context IDs configuration.
>>    * @gt: the &xe_gt
>> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_vf.h b/drivers/gpu/drm/xe/xe_gt_sriov_vf.h
>> index af40276790fa..7d97189c2d3d 100644
>> --- a/drivers/gpu/drm/xe/xe_gt_sriov_vf.h
>> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_vf.h
>> @@ -30,6 +30,7 @@ bool xe_gt_sriov_vf_recovery_pending(struct xe_gt *gt);
>>   u32 xe_gt_sriov_vf_gmdid(struct xe_gt *gt);
>>   u16 xe_gt_sriov_vf_guc_ids(struct xe_gt *gt);
>>   u64 xe_gt_sriov_vf_lmem(struct xe_gt *gt);
>> +bool xe_gt_sriov_vf_sched_groups_enabled(struct xe_gt *gt);
>>   
>>   u32 xe_gt_sriov_vf_read32(struct xe_gt *gt, struct xe_reg reg);
>>   void xe_gt_sriov_vf_write32(struct xe_gt *gt, struct xe_reg reg, u32 val);
>> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_vf_types.h b/drivers/gpu/drm/xe/xe_gt_sriov_vf_types.h
>> index 420b0e6089de..5267c097ecd0 100644
>> --- a/drivers/gpu/drm/xe/xe_gt_sriov_vf_types.h
>> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_vf_types.h
>> @@ -27,6 +27,8 @@ struct xe_gt_sriov_vf_selfconfig {
>>   struct xe_gt_sriov_vf_runtime {
>>   	/** @gmdid: cached value of the GDMID register. */
>>   	u32 gmdid;
>> +	/** @uses_sched_groups: whether PF enabled sched groups or not. */
>> +	bool uses_sched_groups;
>>   	/** @regs_size: size of runtime register array. */
>>   	u32 regs_size;
>>   	/** @num_regs: number of runtime registers in the array. */
>> diff --git a/drivers/gpu/drm/xe/xe_guc_klv_helpers.c b/drivers/gpu/drm/xe/xe_guc_klv_helpers.c
>> index 1b08b443606e..dd504b77cb17 100644
>> --- a/drivers/gpu/drm/xe/xe_guc_klv_helpers.c
>> +++ b/drivers/gpu/drm/xe/xe_guc_klv_helpers.c
>> @@ -21,6 +21,9 @@
>>   const char *xe_guc_klv_key_to_string(u16 key)
>>   {
>>   	switch (key) {
>> +	/* GuC Global Config KLVs */
>> +	case GUC_KLV_GLOBAL_CFG_GROUP_SCHEDULING_AVAILABLE_KEY:
>> +		return "group_scheduling_available";
>>   	/* VGT POLICY keys */
>>   	case GUC_KLV_VGT_POLICY_SCHED_IF_IDLE_KEY:
>>   		return "sched_if_idle";
>> diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
>> index af43acf7baae..e8921219ac4e 100644
>> --- a/drivers/gpu/drm/xe/xe_guc_submit.c
>> +++ b/drivers/gpu/drm/xe/xe_guc_submit.c
>> @@ -2985,6 +2985,27 @@ void xe_guc_submit_print(struct xe_guc *guc, struct drm_printer *p)
>>   	mutex_unlock(&guc->submission_state.lock);
>>   }
>>   
>> +/**
>> + * xe_guc_has_registered_mlrc_queues - check whether there are any MLRC queues
>> + * registered with the GuC
>> + * @guc: GuC.
>> + *
>> + * Return: true if any MLRC queue is registered with the GuC, false otherwise.
>> + */
>> +bool xe_guc_has_registered_mlrc_queues(struct xe_guc *guc)
>> +{
>> +	struct xe_exec_queue *q;
>> +	unsigned long index;
>> +
>> +	guard(mutex)(&guc->submission_state.lock);
>> +
>> +	xa_for_each(&guc->submission_state.exec_queue_lookup, index, q)
>> +		if (q->width > 1)
>> +			return true;
>> +
>> +	return false;
>> +}
>> +
>>   /**
>>    * xe_guc_contexts_hwsp_rebase - Re-compute GGTT references within all
>>    * exec queues registered to given GuC.
>> diff --git a/drivers/gpu/drm/xe/xe_guc_submit.h b/drivers/gpu/drm/xe/xe_guc_submit.h
>> index 100a7891b918..49e608500a4e 100644
>> --- a/drivers/gpu/drm/xe/xe_guc_submit.h
>> +++ b/drivers/gpu/drm/xe/xe_guc_submit.h
>> @@ -49,6 +49,8 @@ xe_guc_exec_queue_snapshot_free(struct xe_guc_submit_exec_queue_snapshot *snapsh
>>   void xe_guc_submit_print(struct xe_guc *guc, struct drm_printer *p);
>>   void xe_guc_register_vf_exec_queue(struct xe_exec_queue *q, int ctx_type);
>>   
>> +bool xe_guc_has_registered_mlrc_queues(struct xe_guc *guc);
>> +
>>   int xe_guc_contexts_hwsp_rebase(struct xe_guc *guc, void *scratch);
>>   
>>   #endif

