Intel-XE Archive on lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH 0/9] drm/xe/pf: Allow to change VFs VRAM quota using sysfs
@ 2026-02-15 20:33 Michal Wajdeczko
  2026-02-15 20:33 ` [PATCH 1/9] drm/xe/pf: Add locked variants of VRAM configuration functions Michal Wajdeczko
                   ` (8 more replies)
  0 siblings, 9 replies; 21+ messages in thread
From: Michal Wajdeczko @ 2026-02-15 20:33 UTC (permalink / raw)
  To: intel-xe; +Cc: Michal Wajdeczko, Rodrigo Vivi, Matthew Brost

Extend the Xe driver's SR-IOV-specific sysfs knobs with VRAM provisioning.

 /sys/bus/pci/drivers/xe/BDF/
 ├── sriov_admin/
     ├── .bulk_profile
     │   └── vram_quota                 [WO] unsigned integer
     ├── vf1/
     │   └── profile
     │       └── vram_quota             [RW] unsigned integer
     ├── vf2/
     │   └── profile
     │       └── vram_quota             [RW] unsigned integer
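
For illustration, a minimal userspace sketch (the BDF and the 2 GiB value
are hypothetical) that sets and reads back the VRAM quota of VF1:

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	#define QUOTA "/sys/bus/pci/drivers/xe/0000:03:00.0" \
		      "/sriov_admin/vf1/profile/vram_quota"

	int main(void)
	{
		char buf[32] = "";
		int fd;

		fd = open(QUOTA, O_WRONLY);
		if (fd < 0)
			return 1;
		dprintf(fd, "%llu\n", 2ULL << 30);	/* quota in bytes */
		close(fd);

		fd = open(QUOTA, O_RDONLY);
		if (fd < 0)
			return 1;
		if (read(fd, buf, sizeof(buf) - 1) > 0)
			printf("vram_quota: %s", buf);	/* PF may align the final value */
		close(fd);
		return 0;
	}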

Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>

Michal Wajdeczko (9):
  drm/xe/pf: Add locked variants of VRAM configuration functions
  drm/xe/pf: Add functions for VRAM provisioning
  drm/xe/pf: Allow to change VFs VRAM quota using sysfs
  drm/xe/pf: Use migration-friendly VRAM auto-provisioning
  drm/xe/tests: Add KUnit tests for new VRAM fair provisioning
  drm/xe/pf: Don't check for empty config
  drm/xe/pf: Prefer guard(mutex) when doing fair LMEM provisioning
  drm/xe/pf: Skip VRAM auto-provisioning if already provisioned
  drm/xe/pf: Add documentation for vram_quota

 .../ABI/testing/sysfs-driver-intel-xe-sriov   |  31 ++++
 .../xe/tests/xe_gt_sriov_pf_config_kunit.c    |  90 +++++++++-
 drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c    | 167 ++++++++++++++++--
 drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h    |   4 +
 drivers/gpu/drm/xe/xe_sriov_pf_provision.c    | 121 +++++++++++--
 drivers/gpu/drm/xe/xe_sriov_pf_provision.h    |   4 +
 drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c        |  26 ++-
 7 files changed, 416 insertions(+), 27 deletions(-)

-- 
2.47.1


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH 1/9] drm/xe/pf: Add locked variants of VRAM configuration functions
  2026-02-15 20:33 [PATCH 0/9] drm/xe/pf: Allow to change VFs VRAM quota using sysfs Michal Wajdeczko
@ 2026-02-15 20:33 ` Michal Wajdeczko
  2026-02-16 14:37   ` Piotr Piórkowski
  2026-02-15 20:33 ` [PATCH 2/9] drm/xe/pf: Add functions for VRAM provisioning Michal Wajdeczko
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 21+ messages in thread
From: Michal Wajdeczko @ 2026-02-15 20:33 UTC (permalink / raw)
  To: intel-xe; +Cc: Michal Wajdeczko

We already have a few functions to configure LMEM (aka VRAM), but they
all take the master mutex. Split them and expose locked variants to
allow use by callers who already hold this mutex.
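
A caller can then take the mutex once and invoke several of the locked
variants under it, e.g. (just a sketch):

	/* take the master mutex once, then call any locked variants */
	guard(mutex)(xe_gt_sriov_pf_master_mutex(gt));

	err = xe_gt_sriov_pf_config_set_lmem_locked(gt, vfid, size);
	if (!err)
		quota = xe_gt_sriov_pf_config_get_lmem_locked(gt, vfid);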

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
---
 drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c | 77 ++++++++++++++++++++--
 drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h |  4 ++
 2 files changed, 74 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
index 23601ce79348..23af49dc1bfa 100644
--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
+++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
@@ -1754,7 +1754,7 @@ int xe_gt_sriov_pf_config_set_lmem(struct xe_gt *gt, unsigned int vfid, u64 size
 }
 
 /**
- * xe_gt_sriov_pf_config_bulk_set_lmem - Provision many VFs with LMEM.
+ * xe_gt_sriov_pf_config_bulk_set_lmem_locked() - Provision many VFs with LMEM.
  * @gt: the &xe_gt (can't be media)
  * @vfid: starting VF identifier (can't be 0)
  * @num_vfs: number of VFs to provision
@@ -1764,31 +1764,94 @@ int xe_gt_sriov_pf_config_set_lmem(struct xe_gt *gt, unsigned int vfid, u64 size
  *
  * Return: 0 on success or a negative error code on failure.
  */
-int xe_gt_sriov_pf_config_bulk_set_lmem(struct xe_gt *gt, unsigned int vfid,
-					unsigned int num_vfs, u64 size)
+int xe_gt_sriov_pf_config_bulk_set_lmem_locked(struct xe_gt *gt, unsigned int vfid,
+					       unsigned int num_vfs, u64 size)
 {
 	unsigned int n;
 	int err = 0;
 
-	xe_gt_assert(gt, vfid);
+	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
+	xe_gt_assert(gt, xe_device_has_lmtt(gt_to_xe(gt)));
+	xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
 	xe_gt_assert(gt, xe_gt_is_main_type(gt));
+	xe_gt_assert(gt, vfid);
 
 	if (!num_vfs)
 		return 0;
 
-	mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
 	for (n = vfid; n < vfid + num_vfs; n++) {
 		err = pf_provision_vf_lmem(gt, n, size);
 		if (err)
 			break;
 	}
-	mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));
 
 	return pf_config_bulk_set_u64_done(gt, vfid, num_vfs, size,
-					   xe_gt_sriov_pf_config_get_lmem,
+					   pf_get_vf_config_lmem,
 					   "LMEM", n, err);
 }
 
+/**
+ * xe_gt_sriov_pf_config_bulk_set_lmem() - Provision many VFs with LMEM.
+ * @gt: the &xe_gt (can't be media)
+ * @vfid: starting VF identifier (can't be 0)
+ * @num_vfs: number of VFs to provision
+ * @size: requested LMEM size
+ *
+ * This function can only be called on PF.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int xe_gt_sriov_pf_config_bulk_set_lmem(struct xe_gt *gt, unsigned int vfid,
+					unsigned int num_vfs, u64 size)
+{
+	guard(mutex)(xe_gt_sriov_pf_master_mutex(gt));
+
+	return xe_gt_sriov_pf_config_bulk_set_lmem_locked(gt, vfid, num_vfs, size);
+}
+
+/**
+ * xe_gt_sriov_pf_config_get_lmem_locked() - Get VF's LMEM quota.
+ * @gt: the &xe_gt
+ * @vfid: the VF identifier (can't be 0 == PFID)
+ *
+ * This function can only be called on PF.
+ *
+ * Return: VF's LMEM quota.
+ */
+u64 xe_gt_sriov_pf_config_get_lmem_locked(struct xe_gt *gt, unsigned int vfid)
+{
+	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
+	xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
+	xe_gt_assert(gt, vfid);
+
+	return pf_get_vf_config_lmem(gt, vfid);
+}
+
+/**
+ * xe_gt_sriov_pf_config_set_lmem_locked() - Provision VF with LMEM.
+ * @gt: the &xe_gt (can't be media)
+ * @vfid: the VF identifier (can't be 0 == PFID)
+ * @size: requested LMEM size
+ *
+ * This function can only be called on PF.
+ */
+int xe_gt_sriov_pf_config_set_lmem_locked(struct xe_gt *gt, unsigned int vfid, u64 size)
+{
+	int err;
+
+	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
+	xe_gt_assert(gt, xe_device_has_lmtt(gt_to_xe(gt)));
+	xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
+	xe_gt_assert(gt, xe_gt_is_main_type(gt));
+	xe_gt_assert(gt, vfid);
+
+	err = pf_provision_vf_lmem(gt, vfid, size);
+
+	return pf_config_set_u64_done(gt, vfid, size,
+				      pf_get_vf_config_lmem(gt, vfid),
+				      "LMEM", err);
+}
+
 static struct xe_bo *pf_get_vf_config_lmem_obj(struct xe_gt *gt, unsigned int vfid)
 {
 	struct xe_gt_sriov_config *config = pf_pick_vf_config(gt, vfid);
diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
index 3c6c8b6655af..4a004ecd6140 100644
--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
+++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
@@ -36,6 +36,10 @@ int xe_gt_sriov_pf_config_set_lmem(struct xe_gt *gt, unsigned int vfid, u64 size
 int xe_gt_sriov_pf_config_set_fair_lmem(struct xe_gt *gt, unsigned int vfid, unsigned int num_vfs);
 int xe_gt_sriov_pf_config_bulk_set_lmem(struct xe_gt *gt, unsigned int vfid, unsigned int num_vfs,
 					u64 size);
+u64 xe_gt_sriov_pf_config_get_lmem_locked(struct xe_gt *gt, unsigned int vfid);
+int xe_gt_sriov_pf_config_set_lmem_locked(struct xe_gt *gt, unsigned int vfid, u64 size);
+int xe_gt_sriov_pf_config_bulk_set_lmem_locked(struct xe_gt *gt, unsigned int vfid,
+					       unsigned int num_vfs, u64 size);
 struct xe_bo *xe_gt_sriov_pf_config_get_lmem_obj(struct xe_gt *gt, unsigned int vfid);
 
 u32 xe_gt_sriov_pf_config_get_exec_quantum(struct xe_gt *gt, unsigned int vfid);
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 2/9] drm/xe/pf: Add functions for VRAM provisioning
  2026-02-15 20:33 [PATCH 0/9] drm/xe/pf: Allow to change VFs VRAM quota using sysfs Michal Wajdeczko
  2026-02-15 20:33 ` [PATCH 1/9] drm/xe/pf: Add locked variants of VRAM configuration functions Michal Wajdeczko
@ 2026-02-15 20:33 ` Michal Wajdeczko
  2026-02-16 15:02   ` Piotr Piórkowski
  2026-02-15 20:33 ` [PATCH 3/9] drm/xe/pf: Allow to change VFs VRAM quota using sysfs Michal Wajdeczko
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 21+ messages in thread
From: Michal Wajdeczko @ 2026-02-15 20:33 UTC (permalink / raw)
  To: intel-xe; +Cc: Michal Wajdeczko

We already have functions to configure VF LMEM (aka VRAM) at the
tile/GT level, used by the auto-provisioning and debugfs, but we
also need functions that work at the device level and configure
VRAM on all tiles at once.

These new functions will be used in an upcoming patch.
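
As a worked example of the per-tile split used below (illustrative
numbers; a 2-tile device with 2 MiB alignment assumed):

	total    = round_up(3 GiB, 2 * 2 MiB) = 3 GiB	/* already aligned */
	per tile = div_u64(3 GiB, 2)          = 1.5 GiB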

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
---
 drivers/gpu/drm/xe/xe_sriov_pf_provision.c | 108 +++++++++++++++++++++
 drivers/gpu/drm/xe/xe_sriov_pf_provision.h |   4 +
 2 files changed, 112 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_provision.c b/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
index 01470c42e8a7..e7187d03fe1b 100644
--- a/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
+++ b/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
@@ -436,3 +436,111 @@ int xe_sriov_pf_provision_query_vf_priority(struct xe_device *xe, unsigned int v
 
 	return !count ? -ENODATA : 0;
 }
+
+static u64 vram_alignment(struct xe_device *xe)
+{
+	/* this might be platform dependent */
+	return SZ_2M;
+}
+
+static u64 vram_per_tile(struct xe_tile *tile, u64 total)
+{
+	struct xe_device *xe = tile->xe;
+	unsigned int tcount = xe->info.tile_count;
+	u64 alignment = vram_alignment(xe);
+
+	total = round_up(total, tcount * alignment);
+	return div_u64(total, tcount);
+}
+
+/**
+ * xe_sriov_pf_provision_bulk_apply_vram() - Change VRAM provisioning for all VFs.
+ * @xe: the PF &xe_device
+ * @size: the VRAM size in [bytes] to set
+ *
+ * Change all VFs VRAM (LMEM) provisioning on all tiles.
+ *
+ * This function can only be called on PF.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int xe_sriov_pf_provision_bulk_apply_vram(struct xe_device *xe, u64 size)
+{
+	unsigned int num_vfs = xe_sriov_pf_get_totalvfs(xe);
+	struct xe_tile *tile;
+	unsigned int id;
+	int result = 0;
+	int err;
+
+	guard(mutex)(xe_sriov_pf_master_mutex(xe));
+
+	for_each_tile(tile, xe, id) {
+		err = xe_gt_sriov_pf_config_bulk_set_lmem_locked(tile->primary_gt,
+								 VFID(1), num_vfs,
+								 vram_per_tile(tile, size));
+		result = result ?: err;
+	}
+
+	return result;
+}
+
+/**
+ * xe_sriov_pf_provision_apply_vf_vram() - Change single VF VRAM allocatio.
+ * @xe: the PF &xe_device
+ * @vfid: the VF identifier (can't be 0 == PFID)
+ * @size: VRAM size to set
+ *
+ * Change VF's VRAM provisioning on all tiles/GTs.
+ *
+ * This function can only be called on PF.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int xe_sriov_pf_provision_apply_vf_vram(struct xe_device *xe, unsigned int vfid, u64 size)
+{
+	struct xe_tile *tile;
+	unsigned int id;
+	int result = 0;
+	int err;
+
+	xe_assert(xe, vfid);
+
+	guard(mutex)(xe_sriov_pf_master_mutex(xe));
+
+	for_each_tile(tile, xe, id) {
+		err = xe_gt_sriov_pf_config_set_lmem_locked(tile->primary_gt, vfid,
+							    vram_per_tile(tile, size));
+		result = result ?: err;
+	}
+
+	return result;
+}
+
+/**
+ * xe_sriov_pf_provision_query_vf_vram() - Query VF's VRAM allocation.
+ * @xe: the PF &xe_device
+ * @vfid: the VF identifier (can't be 0 == PFID)
+ * @size: placeholder for the returned VRAM size
+ *
+ * Query VF's VRAM provisioning from all tiles/GTs.
+ *
+ * This function can only be called on PF.
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int xe_sriov_pf_provision_query_vf_vram(struct xe_device *xe, unsigned int vfid, u64 *size)
+{
+	struct xe_tile *tile;
+	unsigned int id;
+	u64 total = 0;
+
+	xe_assert(xe, vfid);
+
+	guard(mutex)(xe_sriov_pf_master_mutex(xe));
+
+	for_each_tile(tile, xe, id)
+		total += xe_gt_sriov_pf_config_get_lmem_locked(tile->primary_gt, vfid);
+
+	*size = total;
+	return 0;
+}
diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_provision.h b/drivers/gpu/drm/xe/xe_sriov_pf_provision.h
index bccf23d51396..f26f49539697 100644
--- a/drivers/gpu/drm/xe/xe_sriov_pf_provision.h
+++ b/drivers/gpu/drm/xe/xe_sriov_pf_provision.h
@@ -24,6 +24,10 @@ int xe_sriov_pf_provision_bulk_apply_priority(struct xe_device *xe, u32 prio);
 int xe_sriov_pf_provision_apply_vf_priority(struct xe_device *xe, unsigned int vfid, u32 prio);
 int xe_sriov_pf_provision_query_vf_priority(struct xe_device *xe, unsigned int vfid, u32 *prio);
 
+int xe_sriov_pf_provision_bulk_apply_vram(struct xe_device *xe, u64 size);
+int xe_sriov_pf_provision_apply_vf_vram(struct xe_device *xe, unsigned int vfid, u64 size);
+int xe_sriov_pf_provision_query_vf_vram(struct xe_device *xe, unsigned int vfid, u64 *size);
+
 int xe_sriov_pf_provision_vfs(struct xe_device *xe, unsigned int num_vfs);
 int xe_sriov_pf_unprovision_vfs(struct xe_device *xe, unsigned int num_vfs);
 
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 3/9] drm/xe/pf: Allow to change VFs VRAM quota using sysfs
  2026-02-15 20:33 [PATCH 0/9] drm/xe/pf: Allow to change VFs VRAM quota using sysfs Michal Wajdeczko
  2026-02-15 20:33 ` [PATCH 1/9] drm/xe/pf: Add locked variants of VRAM configuration functions Michal Wajdeczko
  2026-02-15 20:33 ` [PATCH 2/9] drm/xe/pf: Add functions for VRAM provisioning Michal Wajdeczko
@ 2026-02-15 20:33 ` Michal Wajdeczko
  2026-02-16 15:29   ` Piotr Piórkowski
  2026-02-18 21:07   ` Rodrigo Vivi
  2026-02-15 20:33 ` [PATCH 4/9] drm/xe/pf: Use migration-friendly VRAM auto-provisioning Michal Wajdeczko
                   ` (5 subsequent siblings)
  8 siblings, 2 replies; 21+ messages in thread
From: Michal Wajdeczko @ 2026-02-15 20:33 UTC (permalink / raw)
  To: intel-xe; +Cc: Michal Wajdeczko, Rodrigo Vivi

On current discrete platforms, the PF will provision all VFs with a
fair amount of the VRAM (LMEM) during VF enabling. However, in some
cases this automatic VRAM provisioning might be either non-reproducible
or sub-optimal, which could break VF migration or impact performance.

Expose per-VF VRAM quota read-write sysfs attributes to allow the admin
to change the default VRAM provisioning performed by the PF.

 /sys/bus/pci/drivers/xe/BDF/
 ├── sriov_admin/
     ├── .bulk_profile
     │   └── vram_quota                 [WO] unsigned integer
     ├── vf1/
     │   └── profile
     │       └── vram_quota             [RW] unsigned integer
     ├── vf2/
     │   └── profile
     │       └── vram_quota             [RW] unsigned integer

The above values represent the total VRAM provisioned from all tiles
to which the VFs were assigned, which currently always means all tiles.

Note that changing the VRAM provisioning is only possible when the VF
is not running, otherwise the GuC will complain. To make sure that a
given VF is idle, triggering a VF FLR might be needed.

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c | 26 ++++++++++++++++++++++++--
 1 file changed, 24 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
index 82a1055985ba..09deda2fd8b2 100644
--- a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
+++ b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
@@ -44,7 +44,8 @@ static int emit_choice(char *buf, int choice, const char * const *array, size_t
  *     ├── .bulk_profile
  *     │   ├── exec_quantum_ms
  *     │   ├── preempt_timeout_us
- *     │   └── sched_priority
+ *     │   ├── sched_priority
+ *     │   └── vram_quota
  *     ├── pf/
  *     │   ├── ...
  *     │   ├── device -> ../../../BDF
@@ -59,7 +60,8 @@ static int emit_choice(char *buf, int choice, const char * const *array, size_t
  *     │   └── profile
  *     │       ├── exec_quantum_ms
  *     │       ├── preempt_timeout_us
- *     │       └── sched_priority
+ *     │       ├── sched_priority
+ *     │       └── vram_quota
  *     ├── vf2/
  *     :
  *     └── vfN/
@@ -132,6 +134,7 @@ static XE_SRIOV_DEV_ATTR_WO(NAME)
 
 DEFINE_SIMPLE_BULK_PROVISIONING_SRIOV_DEV_ATTR_WO(exec_quantum_ms, eq, u32);
 DEFINE_SIMPLE_BULK_PROVISIONING_SRIOV_DEV_ATTR_WO(preempt_timeout_us, pt, u32);
+DEFINE_SIMPLE_BULK_PROVISIONING_SRIOV_DEV_ATTR_WO(vram_quota, vram, u64);
 
 static const char * const sched_priority_names[] = {
 	[GUC_SCHED_PRIORITY_LOW] = "low",
@@ -181,12 +184,25 @@ static struct attribute *bulk_profile_dev_attrs[] = {
 	&xe_sriov_dev_attr_exec_quantum_ms.attr,
 	&xe_sriov_dev_attr_preempt_timeout_us.attr,
 	&xe_sriov_dev_attr_sched_priority.attr,
+	&xe_sriov_dev_attr_vram_quota.attr,
 	NULL
 };
 
+static umode_t profile_dev_attr_is_visible(struct kobject *kobj,
+					   struct attribute *attr, int index)
+{
+	struct xe_sriov_kobj *vkobj = to_xe_sriov_kobj(kobj);
+
+	if (attr == &xe_sriov_dev_attr_vram_quota.attr && !IS_DGFX(vkobj->xe))
+		return 0;
+
+	return attr->mode;
+}
+
 static const struct attribute_group bulk_profile_dev_attr_group = {
 	.name = ".bulk_profile",
 	.attrs = bulk_profile_dev_attrs,
+	.is_visible = profile_dev_attr_is_visible,
 };
 
 static const struct attribute_group *xe_sriov_dev_attr_groups[] = {
@@ -228,6 +244,7 @@ static XE_SRIOV_VF_ATTR(NAME)
 
 DEFINE_SIMPLE_PROVISIONING_SRIOV_VF_ATTR(exec_quantum_ms, eq, u32, "%u\n");
 DEFINE_SIMPLE_PROVISIONING_SRIOV_VF_ATTR(preempt_timeout_us, pt, u32, "%u\n");
+DEFINE_SIMPLE_PROVISIONING_SRIOV_VF_ATTR(vram_quota, vram, u64, "%llu\n");
 
 static ssize_t xe_sriov_vf_attr_sched_priority_show(struct xe_device *xe, unsigned int vfid,
 						    char *buf)
@@ -274,6 +291,7 @@ static struct attribute *profile_vf_attrs[] = {
 	&xe_sriov_vf_attr_exec_quantum_ms.attr,
 	&xe_sriov_vf_attr_preempt_timeout_us.attr,
 	&xe_sriov_vf_attr_sched_priority.attr,
+	&xe_sriov_vf_attr_vram_quota.attr,
 	NULL
 };
 
@@ -286,6 +304,10 @@ static umode_t profile_vf_attr_is_visible(struct kobject *kobj,
 	    !sched_priority_change_allowed(vkobj->vfid))
 		return attr->mode & 0444;
 
+	if (attr == &xe_sriov_vf_attr_vram_quota.attr &&
+	    (!IS_DGFX(vkobj->xe) || vkobj->vfid == PFID))
+		return 0;
+
 	return attr->mode;
 }
 
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 4/9] drm/xe/pf: Use migration-friendly VRAM auto-provisioning
  2026-02-15 20:33 [PATCH 0/9] drm/xe/pf: Allow to change VFs VRAM quota using sysfs Michal Wajdeczko
                   ` (2 preceding siblings ...)
  2026-02-15 20:33 ` [PATCH 3/9] drm/xe/pf: Allow to change VFs VRAM quota using sysfs Michal Wajdeczko
@ 2026-02-15 20:33 ` Michal Wajdeczko
  2026-02-16 16:14   ` Piotr Piórkowski
  2026-02-15 20:33 ` [PATCH 5/9] drm/xe/tests: Add KUnit tests for new VRAM fair provisioning Michal Wajdeczko
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 21+ messages in thread
From: Michal Wajdeczko @ 2026-02-15 20:33 UTC (permalink / raw)
  To: intel-xe; +Cc: Michal Wajdeczko

Instead of trying very hard to find the largest fair VRAM (aka LMEM)
size that could be allocated for VFs on the current tile, pick a
smaller value, rounded down to a power of two, that is more likely
to be provisioned in the same manner by other PF instances.
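
For example (illustrative numbers, regular PF mode where the PF keeps
a share for itself), with 15.5 GiB of usable VRAM and 3 VFs:

	shareable = ALIGN_DOWN(15.5 GiB, 1 GiB)    = 15 GiB
	fair      = div_u64(15 GiB, 1 + 3)         = 3.75 GiB
	fair      = rounddown_pow_of_two(3.75 GiB) = 2 GiB per VF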

In some cases, the outcome of the above calculation might not be
optimal, but the admin is expected to fine-tune it using the sysfs
files.

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
---
 drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c | 27 ++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
index 23af49dc1bfa..43041af81518 100644
--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
+++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
@@ -1919,6 +1919,26 @@ static u64 pf_estimate_fair_lmem(struct xe_gt *gt, unsigned int num_vfs)
 	return fair;
 }
 
+static u64 pf_profile_fair_lmem(struct xe_gt *gt, unsigned int num_vfs)
+{
+	struct xe_tile *tile = gt_to_tile(gt);
+	bool admin_only_pf = xe_sriov_pf_admin_only(tile->xe);
+	u64 usable = xe_vram_region_usable_size(tile->mem.vram);
+	u64 shareable = ALIGN_DOWN(usable, SZ_1G);
+	u64 alignment = pf_get_lmem_alignment(gt);
+	u64 fair;
+
+	if (admin_only_pf)
+		fair = div_u64(shareable, num_vfs);
+	else
+		fair = div_u64(shareable, 1 + num_vfs);
+
+	if (!admin_only_pf && fair)
+		fair = rounddown_pow_of_two(fair);
+
+	return ALIGN_DOWN(fair, alignment);
+}
+
 /**
  * xe_gt_sriov_pf_config_set_fair_lmem - Provision many VFs with fair LMEM.
  * @gt: the &xe_gt (can't be media)
@@ -1932,6 +1952,7 @@ static u64 pf_estimate_fair_lmem(struct xe_gt *gt, unsigned int num_vfs)
 int xe_gt_sriov_pf_config_set_fair_lmem(struct xe_gt *gt, unsigned int vfid,
 					unsigned int num_vfs)
 {
+	u64 profile;
 	u64 fair;
 
 	xe_gt_assert(gt, vfid);
@@ -1948,6 +1969,12 @@ int xe_gt_sriov_pf_config_set_fair_lmem(struct xe_gt *gt, unsigned int vfid,
 	if (!fair)
 		return -ENOSPC;
 
+	profile = pf_profile_fair_lmem(gt, num_vfs);
+	fair = min(fair, profile);
+	if (fair < profile)
+		xe_gt_sriov_info(gt, "Using non-profile provisioning (%s %llu vs %llu)\n",
+				 "VRAM", fair, profile);
+
 	return xe_gt_sriov_pf_config_bulk_set_lmem(gt, vfid, num_vfs, fair);
 }
 
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 5/9] drm/xe/tests: Add KUnit tests for new VRAM fair provisioning
  2026-02-15 20:33 [PATCH 0/9] drm/xe/pf: Allow to change VFs VRAM quota using sysfs Michal Wajdeczko
                   ` (3 preceding siblings ...)
  2026-02-15 20:33 ` [PATCH 4/9] drm/xe/pf: Use migration-friendly VRAM auto-provisioning Michal Wajdeczko
@ 2026-02-15 20:33 ` Michal Wajdeczko
  2026-02-16 16:23   ` Piotr Piórkowski
  2026-02-15 20:33 ` [PATCH 6/9] drm/xe/pf: Don't check for empty config Michal Wajdeczko
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 21+ messages in thread
From: Michal Wajdeczko @ 2026-02-15 20:33 UTC (permalink / raw)
  To: intel-xe; +Cc: Michal Wajdeczko

Add basic test cases to check the outcome of the fair VRAM
provisioning for regular and admin-only PF modes.

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
---
 .../xe/tests/xe_gt_sriov_pf_config_kunit.c    | 90 ++++++++++++++++++-
 1 file changed, 89 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/xe/tests/xe_gt_sriov_pf_config_kunit.c b/drivers/gpu/drm/xe/tests/xe_gt_sriov_pf_config_kunit.c
index 3889dc3e49ca..80e5065beb2c 100644
--- a/drivers/gpu/drm/xe/tests/xe_gt_sriov_pf_config_kunit.c
+++ b/drivers/gpu/drm/xe/tests/xe_gt_sriov_pf_config_kunit.c
@@ -11,6 +11,7 @@
 #include "xe_pci_test.h"
 
 #define TEST_MAX_VFS	63
+#define TEST_VRAM	0x37a800000ull
 
 static void pf_set_admin_mode(struct xe_device *xe, bool enable)
 {
@@ -19,6 +20,17 @@ static void pf_set_admin_mode(struct xe_device *xe, bool enable)
 	KUNIT_EXPECT_EQ(kunit_get_current_test(), enable, xe_sriov_pf_admin_only(xe));
 }
 
+static void pf_set_usable_vram(struct xe_device *xe, u64 usable)
+{
+	struct xe_tile *tile = xe_device_get_root_tile(xe);
+	struct kunit *test = kunit_get_current_test();
+
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, tile);
+	xe->mem.vram->usable_size = usable;
+	tile->mem.vram->usable_size = usable;
+	KUNIT_ASSERT_EQ(test, usable, xe_vram_region_usable_size(tile->mem.vram));
+}
+
 static const void *num_vfs_gen_param(struct kunit *test, const void *prev, char *desc)
 {
 	unsigned long next = 1 + (unsigned long)prev;
@@ -34,9 +46,11 @@ static int pf_gt_config_test_init(struct kunit *test)
 {
 	struct xe_pci_fake_data fake = {
 		.sriov_mode = XE_SRIOV_MODE_PF,
-		.platform = XE_TIGERLAKE, /* any random platform with SR-IOV */
+		.platform = XE_BATTLEMAGE, /* any random DGFX platform with SR-IOV */
 		.subplatform = XE_SUBPLATFORM_NONE,
+		.graphics_verx100 = 2001,
 	};
+	struct xe_vram_region *vram;
 	struct xe_device *xe;
 	struct xe_gt *gt;
 
@@ -50,6 +64,13 @@ static int pf_gt_config_test_init(struct kunit *test)
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, gt);
 	test->priv = gt;
 
+	/* pretend it has some VRAM */
+	KUNIT_ASSERT_TRUE(test, IS_DGFX(xe));
+	vram = kunit_kzalloc(test, sizeof(*vram), GFP_KERNEL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, vram);
+	vram->usable_size = TEST_VRAM;
+	xe->mem.vram = xe->tiles[0].mem.vram = vram;
+
 	/* pretend it can support up to 63 VFs */
 	xe->sriov.pf.device_total_vfs = TEST_MAX_VFS;
 	xe->sriov.pf.driver_max_vfs = TEST_MAX_VFS;
@@ -189,13 +210,80 @@ static void fair_ggtt(struct kunit *test)
 		KUNIT_ASSERT_EQ(test, SZ_2G, pf_profile_fair_ggtt(gt, num_vfs));
 }
 
+static const u64 vram_sizes[] = {
+	SZ_4G - SZ_512M,
+	SZ_8G + SZ_4G - SZ_512M,
+	SZ_16G - SZ_512M,
+	SZ_32G - SZ_512M,
+	SZ_64G - SZ_512M,
+	TEST_VRAM,
+};
+
+static void u64_param_get_desc(const u64 *p, char *desc)
+{
+	string_get_size(*p, 1, STRING_UNITS_2, desc, KUNIT_PARAM_DESC_SIZE);
+}
+
+KUNIT_ARRAY_PARAM(vram_size, vram_sizes, u64_param_get_desc);
+
+static void fair_vram_1vf(struct kunit *test)
+{
+	const u64 usable = *(const u64 *)test->param_value;
+	struct xe_gt *gt = test->priv;
+	struct xe_device *xe = gt_to_xe(gt);
+
+	pf_set_admin_mode(xe, false);
+	pf_set_usable_vram(xe, usable);
+
+	KUNIT_EXPECT_NE(test, 0, pf_profile_fair_lmem(gt, 1));
+	KUNIT_EXPECT_GE(test, usable, pf_profile_fair_lmem(gt, 1));
+	KUNIT_EXPECT_TRUE(test, is_power_of_2(pf_profile_fair_lmem(gt, 1)));
+	KUNIT_EXPECT_GE(test, usable - pf_profile_fair_lmem(gt, 1), pf_profile_fair_lmem(gt, 1));
+}
+
+static void fair_vram_1vf_admin_only(struct kunit *test)
+{
+	const u64 usable = *(const u64 *)test->param_value;
+	struct xe_gt *gt = test->priv;
+	struct xe_device *xe = gt_to_xe(gt);
+
+	pf_set_admin_mode(xe, true);
+	pf_set_usable_vram(xe, usable);
+
+	KUNIT_EXPECT_NE(test, 0, pf_profile_fair_lmem(gt, 1));
+	KUNIT_EXPECT_GE(test, usable, pf_profile_fair_lmem(gt, 1));
+	KUNIT_EXPECT_LT(test, usable - pf_profile_fair_lmem(gt, 1), pf_profile_fair_lmem(gt, 1));
+	KUNIT_EXPECT_TRUE(test, IS_ALIGNED(pf_profile_fair_lmem(gt, 1), SZ_1G));
+}
+
+static void fair_vram(struct kunit *test)
+{
+	unsigned int num_vfs = (unsigned long)test->param_value;
+	struct xe_gt *gt = test->priv;
+	struct xe_device *xe = gt_to_xe(gt);
+	u64 alignment = pf_get_lmem_alignment(gt);
+	char size[10];
+
+	pf_set_admin_mode(xe, false);
+
+	string_get_size(pf_profile_fair_lmem(gt, num_vfs), 1, STRING_UNITS_2, size, sizeof(size));
+	kunit_info(test, "fair %s %llx\n", size, pf_profile_fair_lmem(gt, num_vfs));
+
+	KUNIT_EXPECT_TRUE(test, is_power_of_2(pf_profile_fair_lmem(gt, num_vfs)));
+	KUNIT_EXPECT_TRUE(test, IS_ALIGNED(pf_profile_fair_lmem(gt, num_vfs), alignment));
+	KUNIT_EXPECT_GE(test, TEST_VRAM, num_vfs * pf_profile_fair_lmem(gt, num_vfs));
+}
+
 static struct kunit_case pf_gt_config_test_cases[] = {
 	KUNIT_CASE(fair_contexts_1vf),
 	KUNIT_CASE(fair_doorbells_1vf),
 	KUNIT_CASE(fair_ggtt_1vf),
+	KUNIT_CASE_PARAM(fair_vram_1vf, vram_size_gen_params),
+	KUNIT_CASE_PARAM(fair_vram_1vf_admin_only, vram_size_gen_params),
 	KUNIT_CASE_PARAM(fair_contexts, num_vfs_gen_param),
 	KUNIT_CASE_PARAM(fair_doorbells, num_vfs_gen_param),
 	KUNIT_CASE_PARAM(fair_ggtt, num_vfs_gen_param),
+	KUNIT_CASE_PARAM(fair_vram, num_vfs_gen_param),
 	{}
 };
 
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 6/9] drm/xe/pf: Don't check for empty config
  2026-02-15 20:33 [PATCH 0/9] drm/xe/pf: Allow to change VFs VRAM quota using sysfs Michal Wajdeczko
                   ` (4 preceding siblings ...)
  2026-02-15 20:33 ` [PATCH 5/9] drm/xe/tests: Add KUnit tests for new VRAM fair provisioning Michal Wajdeczko
@ 2026-02-15 20:33 ` Michal Wajdeczko
  2026-02-16 16:27   ` Piotr Piórkowski
  2026-02-15 20:33 ` [PATCH 7/9] drm/xe/pf: Prefer guard(mutex) when doing fair LMEM provisioning Michal Wajdeczko
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 21+ messages in thread
From: Michal Wajdeczko @ 2026-02-15 20:33 UTC (permalink / raw)
  To: intel-xe; +Cc: Michal Wajdeczko

We already turn off VF auto-provisioning once we detect manual VF
provisioning over debugfs, so we can skip the additional check for
all VF configs still being empty.

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
---
 drivers/gpu/drm/xe/xe_sriov_pf_provision.c | 13 -------------
 1 file changed, 13 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_provision.c b/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
index e7187d03fe1b..95c8f01e0264 100644
--- a/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
+++ b/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
@@ -32,17 +32,6 @@ static bool pf_auto_provisioning_mode(struct xe_device *xe)
 	return xe->sriov.pf.provision.mode == XE_SRIOV_PROVISIONING_MODE_AUTO;
 }
 
-static bool pf_needs_provisioning(struct xe_gt *gt, unsigned int num_vfs)
-{
-	unsigned int n;
-
-	for (n = 1; n <= num_vfs; n++)
-		if (!xe_gt_sriov_pf_config_is_empty(gt, n))
-			return false;
-
-	return true;
-}
-
 static int pf_provision_vfs(struct xe_device *xe, unsigned int num_vfs)
 {
 	struct xe_gt *gt;
@@ -51,8 +40,6 @@ static int pf_provision_vfs(struct xe_device *xe, unsigned int num_vfs)
 	int err;
 
 	for_each_gt(gt, xe, id) {
-		if (!pf_needs_provisioning(gt, num_vfs))
-			return -EUCLEAN;
 		err = xe_gt_sriov_pf_config_set_fair(gt, VFID(1), num_vfs);
 		result = result ?: err;
 	}
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 7/9] drm/xe/pf: Prefer guard(mutex) when doing fair LMEM provisioning
  2026-02-15 20:33 [PATCH 0/9] drm/xe/pf: Allow to change VFs VRAM quota using sysfs Michal Wajdeczko
                   ` (5 preceding siblings ...)
  2026-02-15 20:33 ` [PATCH 6/9] drm/xe/pf: Don't check for empty config Michal Wajdeczko
@ 2026-02-15 20:33 ` Michal Wajdeczko
  2026-02-16 16:36   ` Piotr Piórkowski
  2026-02-15 20:33 ` [PATCH 8/9] drm/xe/pf: Skip VRAM auto-provisioning if already provisioned Michal Wajdeczko
  2026-02-15 20:33 ` [PATCH 9/9] drm/xe/pf: Add documentation for vram_quota Michal Wajdeczko
  8 siblings, 1 reply; 21+ messages in thread
From: Michal Wajdeczko @ 2026-02-15 20:33 UTC (permalink / raw)
  To: intel-xe; +Cc: Michal Wajdeczko

We will add more code there, and with guard() it will be easier to
avoid unlocking mistakes.

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
---
 drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
index 43041af81518..1a9f3b85526c 100644
--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
+++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
@@ -1962,10 +1962,9 @@ int xe_gt_sriov_pf_config_set_fair_lmem(struct xe_gt *gt, unsigned int vfid,
 	if (!xe_device_has_lmtt(gt_to_xe(gt)))
 		return 0;
 
-	mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
+	guard(mutex)(xe_gt_sriov_pf_master_mutex(gt));
+
 	fair = pf_estimate_fair_lmem(gt, num_vfs);
-	mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));
-
 	if (!fair)
 		return -ENOSPC;
 
@@ -1975,7 +1974,7 @@ int xe_gt_sriov_pf_config_set_fair_lmem(struct xe_gt *gt, unsigned int vfid,
 		xe_gt_sriov_info(gt, "Using non-profile provisioning (%s %llu vs %llu)\n",
 				 "VRAM", fair, profile);
 
-	return xe_gt_sriov_pf_config_bulk_set_lmem(gt, vfid, num_vfs, fair);
+	return xe_gt_sriov_pf_config_bulk_set_lmem_locked(gt, vfid, num_vfs, fair);
 }
 
 /**
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 8/9] drm/xe/pf: Skip VRAM auto-provisioning if already provisioned
  2026-02-15 20:33 [PATCH 0/9] drm/xe/pf: Allow to change VFs VRAM quota using sysfs Michal Wajdeczko
                   ` (6 preceding siblings ...)
  2026-02-15 20:33 ` [PATCH 7/9] drm/xe/pf: Prefer guard(mutex) when doing fair LMEM provisioning Michal Wajdeczko
@ 2026-02-15 20:33 ` Michal Wajdeczko
  2026-02-16 16:59   ` Piotr Piórkowski
  2026-02-15 20:33 ` [PATCH 9/9] drm/xe/pf: Add documentation for vram_quota Michal Wajdeczko
  8 siblings, 1 reply; 21+ messages in thread
From: Michal Wajdeczko @ 2026-02-15 20:33 UTC (permalink / raw)
  To: intel-xe; +Cc: Michal Wajdeczko

If the admin provisions VF VRAM via sysfs prior to enabling the VFs,
this provisioning would be lost, as the PF would run the VRAM
auto-provisioning anyway. To avoid that, skip the auto-provisioning
if any VF has already been provisioned with VRAM.

To help the admin find any mistakes, add diagnostic messages about
which VFs were provisioned with VRAM and which were missed.
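
With the bitmap list format used in this patch, these messages could
look like (illustrative output, log prefix omitted):

	VFs 1-3,5 already provisioned with VRAM
	VFs 4,6-7 not provisioned with VRAM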

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
---
 drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c | 56 ++++++++++++++++++++++
 1 file changed, 56 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
index 1a9f3b85526c..b1eccd6712f4 100644
--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
+++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
@@ -1939,6 +1939,59 @@ static u64 pf_profile_fair_lmem(struct xe_gt *gt, unsigned int num_vfs)
 	return ALIGN_DOWN(fair, alignment);
 }
 
+static void __pf_show_provisioning_lmem(struct xe_gt *gt, unsigned int first_vf,
+					unsigned int num_vfs, bool provisioned)
+{
+	unsigned int allvfs = 1 + xe_gt_sriov_pf_get_totalvfs(gt); /* PF plus VFs */
+	unsigned long *bitmap __free(bitmap) = bitmap_zalloc(allvfs, GFP_KERNEL);
+	unsigned int weight;
+	unsigned int n;
+
+	if (!bitmap)
+		return;
+
+	for (n = first_vf; n < first_vf + num_vfs; n++) {
+		if (!!pf_get_vf_config_lmem(gt, VFID(n)) == provisioned)
+			bitmap_set(bitmap, n, 1);
+	}
+
+	weight = bitmap_weight(bitmap, allvfs);
+	if (!weight)
+		return;
+
+	xe_gt_sriov_info(gt, "VF%s%*pbl %s provisioned with VRAM\n",
+			 weight > 1 ? "s " : "", allvfs, bitmap,
+			 provisioned ? "already" : "not");
+}
+
+static void pf_show_all_provisioned_lmem(struct xe_gt *gt)
+{
+	__pf_show_provisioning_lmem(gt, 1, xe_gt_sriov_pf_get_totalvfs(gt), true);
+}
+
+static void pf_show_unprovisioned_lmem(struct xe_gt *gt, unsigned int first_vf,
+				       unsigned int num_vfs)
+{
+	__pf_show_provisioning_lmem(gt, first_vf, num_vfs, false);
+}
+
+static bool pf_needs_provision_lmem(struct xe_gt *gt, unsigned int first_vf,
+				    unsigned int num_vfs)
+{
+	unsigned int vfid;
+
+	for (vfid = first_vf; vfid < first_vf + num_vfs; vfid++) {
+		if (pf_get_vf_config_lmem(gt, vfid)) {
+			pf_show_all_provisioned_lmem(gt);
+			pf_show_unprovisioned_lmem(gt, first_vf, num_vfs);
+			return false;
+		}
+	}
+
+	pf_show_all_provisioned_lmem(gt);
+	return true;
+}
+
 /**
  * xe_gt_sriov_pf_config_set_fair_lmem - Provision many VFs with fair LMEM.
  * @gt: the &xe_gt (can't be media)
@@ -1964,6 +2017,9 @@ int xe_gt_sriov_pf_config_set_fair_lmem(struct xe_gt *gt, unsigned int vfid,
 
 	guard(mutex)(xe_gt_sriov_pf_master_mutex(gt));
 
+	if (!pf_needs_provision_lmem(gt, vfid, num_vfs))
+		return 0;
+
 	fair = pf_estimate_fair_lmem(gt, num_vfs);
 	if (!fair)
 		return -ENOSPC;
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 9/9] drm/xe/pf: Add documentation for vram_quota
  2026-02-15 20:33 [PATCH 0/9] drm/xe/pf: Allow to change VFs VRAM quota using sysfs Michal Wajdeczko
                   ` (7 preceding siblings ...)
  2026-02-15 20:33 ` [PATCH 8/9] drm/xe/pf: Skip VRAM auto-provisioning if already provisioned Michal Wajdeczko
@ 2026-02-15 20:33 ` Michal Wajdeczko
  2026-02-16 17:04   ` Piotr Piórkowski
  8 siblings, 1 reply; 21+ messages in thread
From: Michal Wajdeczko @ 2026-02-15 20:33 UTC (permalink / raw)
  To: intel-xe; +Cc: Michal Wajdeczko, Rodrigo Vivi

Add initial documentation for the recently added Xe driver specific
SR-IOV sysfs files for VRAM provisioning under device/sriov_admin.

Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 .../ABI/testing/sysfs-driver-intel-xe-sriov   | 31 +++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/Documentation/ABI/testing/sysfs-driver-intel-xe-sriov b/Documentation/ABI/testing/sysfs-driver-intel-xe-sriov
index 7f5ef9eada53..14b61bd2d602 100644
--- a/Documentation/ABI/testing/sysfs-driver-intel-xe-sriov
+++ b/Documentation/ABI/testing/sysfs-driver-intel-xe-sriov
@@ -129,6 +129,37 @@ Description:
 			-EIO if FW refuses to change the provisioning.
 
 
+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/.bulk_profile/vram_quota
+What:		/sys/bus/pci/drivers/xe/.../sriov_admin/vf<n>/profile/vram_quota
+Date:		February 2026
+KernelVersion:	7.0
+Contact:	intel-xe@lists.freedesktop.org
+Description:
+		These files allow performing initial VF VRAM provisioning prior to
+		enabling the VFs, or changing the VF VRAM provisioning once the VFs
+		are enabled. Any non-zero initial VRAM provisioning will block the
+		VF auto-provisioning. Without initial VRAM provisioning, these files
+		will show the result of the VRAM auto-provisioning performed by the
+		PF once the VFs are enabled. Once the VFs are disabled, all VRAM
+		provisioning will be released. These files are visible only on
+		discrete Intel Xe platforms with VRAM and are writable only if
+
+		.bulk_profile/vram_quota: (WO) unsigned integer
+			The amount of the provisioned VRAM in [bytes] for each VF.
+			Actual final value might be aligned per HW/FW requirements.
+
+		profile/vram_quota: (RW) unsigned integer
+			The amount of the provisioned VRAM in [bytes] for this VF.
+			Actual quota value might be aligned per HW/FW requirements.
+
+			Default is 0 (unprovisioned).
+
+		Writes to these attributes may fail with errors like:
+			-EINVAL if provided input is malformed or not recognized,
+			-EPERM if change is not applicable on given HW/FW,
+			-EIO if FW refuses to change the provisioning.
+
+
 What:		/sys/bus/pci/drivers/xe/.../sriov_admin/vf<n>/stop
 Date:		October 2025
 KernelVersion:	6.19
-- 
2.47.1


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/9] drm/xe/pf: Add locked variants of VRAM configuration functions
  2026-02-15 20:33 ` [PATCH 1/9] drm/xe/pf: Add locked variants of VRAM configuration functions Michal Wajdeczko
@ 2026-02-16 14:37   ` Piotr Piórkowski
  0 siblings, 0 replies; 21+ messages in thread
From: Piotr Piórkowski @ 2026-02-16 14:37 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe

Michal Wajdeczko <michal.wajdeczko@intel.com> wrote on Sun [2026-Feb-15 21:33:15 +0100]:
> We already have a few functions to configure LMEM (aka VRAM), but they
> all take the master mutex. Split them and expose locked variants to
> allow use by callers who already hold this mutex.
> 
> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c | 77 ++++++++++++++++++++--
>  drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h |  4 ++
>  2 files changed, 74 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> index 23601ce79348..23af49dc1bfa 100644
> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> @@ -1754,7 +1754,7 @@ int xe_gt_sriov_pf_config_set_lmem(struct xe_gt *gt, unsigned int vfid, u64 size
>  }
>  
>  /**
> - * xe_gt_sriov_pf_config_bulk_set_lmem - Provision many VFs with LMEM.
> + * xe_gt_sriov_pf_config_bulk_set_lmem_locked() - Provision many VFs with LMEM.
>   * @gt: the &xe_gt (can't be media)
>   * @vfid: starting VF identifier (can't be 0)
>   * @num_vfs: number of VFs to provision
> @@ -1764,31 +1764,94 @@ int xe_gt_sriov_pf_config_set_lmem(struct xe_gt *gt, unsigned int vfid, u64 size
>   *
>   * Return: 0 on success or a negative error code on failure.
>   */
> -int xe_gt_sriov_pf_config_bulk_set_lmem(struct xe_gt *gt, unsigned int vfid,
> -					unsigned int num_vfs, u64 size)
> +int xe_gt_sriov_pf_config_bulk_set_lmem_locked(struct xe_gt *gt, unsigned int vfid,
> +					       unsigned int num_vfs, u64 size)
>  {
>  	unsigned int n;
>  	int err = 0;
>  
> -	xe_gt_assert(gt, vfid);
> +	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
> +	xe_gt_assert(gt, xe_device_has_lmtt(gt_to_xe(gt)));
> +	xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
>  	xe_gt_assert(gt, xe_gt_is_main_type(gt));
> +	xe_gt_assert(gt, vfid);
>  
>  	if (!num_vfs)
>  		return 0;
>  
> -	mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
>  	for (n = vfid; n < vfid + num_vfs; n++) {
>  		err = pf_provision_vf_lmem(gt, n, size);
>  		if (err)
>  			break;
>  	}
> -	mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));
>  
>  	return pf_config_bulk_set_u64_done(gt, vfid, num_vfs, size,
> -					   xe_gt_sriov_pf_config_get_lmem,
> +					   pf_get_vf_config_lmem,
>  					   "LMEM", n, err);
>  }
>  
> +/**
> + * xe_gt_sriov_pf_config_bulk_set_lmem() - Provision many VFs with LMEM.
> + * @gt: the &xe_gt (can't be media)
> + * @vfid: starting VF identifier (can't be 0)
> + * @num_vfs: number of VFs to provision
> + * @size: requested LMEM size
> + *
> + * This function can only be called on PF.
> + *
> + * Return: 0 on success or a negative error code on failure.
> + */
> +int xe_gt_sriov_pf_config_bulk_set_lmem(struct xe_gt *gt, unsigned int vfid,
> +					unsigned int num_vfs, u64 size)
> +{
> +	guard(mutex)(xe_gt_sriov_pf_master_mutex(gt));
> +
> +	return xe_gt_sriov_pf_config_bulk_set_lmem_locked(gt, vfid, num_vfs, size);
> +}
> +
> +/**
> + * xe_gt_sriov_pf_config_get_lmem_locked() - Get VF's LMEM quota.
> + * @gt: the &xe_gt
> + * @vfid: the VF identifier (can't be 0 == PFID)
> + *
> + * This function can only be called on PF.
> + *
> + * Return: VF's LMEM quota.
> + */
> +u64 xe_gt_sriov_pf_config_get_lmem_locked(struct xe_gt *gt, unsigned int vfid)
> +{
> +	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
> +	xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
> +	xe_gt_assert(gt, vfid);
> +
> +	return pf_get_vf_config_lmem(gt, vfid);
> +}
> +
> +/**
> + * xe_gt_sriov_pf_config_set_lmem_locked() - Provision VF with LMEM.
> + * @gt: the &xe_gt (can't be media)
> + * @vfid: the VF identifier (can't be 0 == PFID)
> + * @size: requested LMEM size
> + *
> + * This function can only be called on PF.
> + */
> +int xe_gt_sriov_pf_config_set_lmem_locked(struct xe_gt *gt, unsigned int vfid, u64 size)
> +{
> +	int err;
> +
> +	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
> +	xe_gt_assert(gt, xe_device_has_lmtt(gt_to_xe(gt)));
> +	xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
> +	xe_gt_assert(gt, xe_gt_is_main_type(gt));
> +	xe_gt_assert(gt, vfid);
> +
> +	err = pf_provision_vf_lmem(gt, vfid, size);
> +
> +	return pf_config_set_u64_done(gt, vfid, size,
> +				      pf_get_vf_config_lmem(gt, vfid),
> +				      "LMEM", err);
> +}
> +
>  static struct xe_bo *pf_get_vf_config_lmem_obj(struct xe_gt *gt, unsigned int vfid)
>  {
>  	struct xe_gt_sriov_config *config = pf_pick_vf_config(gt, vfid);
> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
> index 3c6c8b6655af..4a004ecd6140 100644
> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
> @@ -36,6 +36,10 @@ int xe_gt_sriov_pf_config_set_lmem(struct xe_gt *gt, unsigned int vfid, u64 size
>  int xe_gt_sriov_pf_config_set_fair_lmem(struct xe_gt *gt, unsigned int vfid, unsigned int num_vfs);
>  int xe_gt_sriov_pf_config_bulk_set_lmem(struct xe_gt *gt, unsigned int vfid, unsigned int num_vfs,
>  					u64 size);
> +u64 xe_gt_sriov_pf_config_get_lmem_locked(struct xe_gt *gt, unsigned int vfid);
> +int xe_gt_sriov_pf_config_set_lmem_locked(struct xe_gt *gt, unsigned int vfid, u64 size);
> +int xe_gt_sriov_pf_config_bulk_set_lmem_locked(struct xe_gt *gt, unsigned int vfid,
> +					       unsigned int num_vfs, u64 size);
>  struct xe_bo *xe_gt_sriov_pf_config_get_lmem_obj(struct xe_gt *gt, unsigned int vfid);
>  
>  u32 xe_gt_sriov_pf_config_get_exec_quantum(struct xe_gt *gt, unsigned int vfid);

LGTM:
Reviewed-by: Piotr Piórkowski <piotr.piorkowski@intel.com>


> -- 
> 2.47.1
> 

-- 

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 2/9] drm/xe/pf: Add functions for VRAM provisioning
  2026-02-15 20:33 ` [PATCH 2/9] drm/xe/pf: Add functions for VRAM provisioning Michal Wajdeczko
@ 2026-02-16 15:02   ` Piotr Piórkowski
  2026-02-16 15:11     ` Piotr Piórkowski
  0 siblings, 1 reply; 21+ messages in thread
From: Piotr Piórkowski @ 2026-02-16 15:02 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe

Michal Wajdeczko <michal.wajdeczko@intel.com> wrote on Sun [2026-Feb-15 21:33:16 +0100]:
> We already have functions to configure VF LMEM (aka VRAM) at the
> tile/GT level, used by the auto-provisioning and debugfs, but we
> also need functions that work at the device level and configure
> VRAM on all tiles at once.
> 
> These new functions will be used in an upcoming patch.
> 
> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_sriov_pf_provision.c | 108 +++++++++++++++++++++
>  drivers/gpu/drm/xe/xe_sriov_pf_provision.h |   4 +
>  2 files changed, 112 insertions(+)
> 
> diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_provision.c b/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
> index 01470c42e8a7..e7187d03fe1b 100644
> --- a/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
> +++ b/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
> @@ -436,3 +436,111 @@ int xe_sriov_pf_provision_query_vf_priority(struct xe_device *xe, unsigned int v
>  
>  	return !count ? -ENODATA : 0;
>  }
> +
> +static u64 vram_alignment(struct xe_device *xe)
> +{
> +	/* this might be platform dependent */
> +	return SZ_2M;
> +}
> +

In xe_gt_sriov_pf_config.c we already have an almost identical function:
static u64 pf_get_lmem_alignment(struct xe_gt *gt)
{
	/* this might be platform dependent */
	return SZ_2M;
}

In my opinion, it is not worth duplicating it; better to create one common function.
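
For example, a single shared helper (the name and placement are just a
suggestion) that both levels could call:

	static u64 xe_sriov_lmem_alignment(struct xe_device *xe)
	{
		/* this might be platform dependent */
		return SZ_2M;
	}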


> +static u64 vram_per_tile(struct xe_tile *tile, u64 total)
> +{
> +	struct xe_device *xe = tile->xe;
> +	unsigned int tcount = xe->info.tile_count;
> +	u64 alignment = vram_alignment(xe);
> +
> +	total = round_up(total, tcount * alignment);
> +	return div_u64(total, tcount);
> +}
> +
> +/**
> + * xe_sriov_pf_provision_bulk_apply_vram() - Change VRAM provisioning for all VFs.
> + * @xe: the PF &xe_device
> + * @size: the VRAM size in [bytes] to set
> + *
> + * Change all VFs VRAM (LMEM) provisioning on all tiles.
> + *
> + * This function can only be called on PF.
> + *
> + * Return: 0 on success or a negative error code on failure.
> + */
> +int xe_sriov_pf_provision_bulk_apply_vram(struct xe_device *xe, u64 size)
> +{
> +	unsigned int num_vfs = xe_sriov_pf_get_totalvfs(xe);
> +	struct xe_tile *tile;
> +	unsigned int id;
> +	int result = 0;
> +	int err;
> +
> +	guard(mutex)(xe_sriov_pf_master_mutex(xe));
> +
> +	for_each_tile(tile, xe, id) {
> +		err = xe_gt_sriov_pf_config_bulk_set_lmem_locked(tile->primary_gt,
> +								 VFID(1), num_vfs,
> +								 vram_per_tile(tile, size));
> +		result = result ?: err;
> +	}
> +
> +	return result;
> +}
> +
> +/**
> + * xe_sriov_pf_provision_apply_vf_vram() - Change single VF VRAM allocatio.
typo

> + * @xe: the PF &xe_device
> + * @vfid: the VF identifier (can't be 0 == PFID)
> + * @size: VRAM size to set
> + *
> + * Change VF's VRAM provisioning on all tiles/GTs.
> + *
> + * This function can only be called on PF.
> + *
> + * Return: 0 on success or a negative error code on failure.
> + */
> +int xe_sriov_pf_provision_apply_vf_vram(struct xe_device *xe, unsigned int vfid, u64 size)
> +{
> +	struct xe_tile *tile;
> +	unsigned int id;
> +	int result = 0;
> +	int err;
> +
> +	xe_assert(xe, vfid);
> +
> +	guard(mutex)(xe_sriov_pf_master_mutex(xe));
> +
> +	for_each_tile(tile, xe, id) {
> +		err = xe_gt_sriov_pf_config_set_lmem_locked(tile->primary_gt, vfid,
> +							    vram_per_tile(tile, size));
> +		result = result ?: err;
> +	}
> +
> +	return result;
> +}
> +
> +/**
> + * xe_sriov_pf_provision_query_vf_vram() - Query VF's VRAM allocation.
> + * @xe: the PF &xe_device
> + * @vfid: the VF identifier (can't be 0 == PFID)
> + * @size: placeholder for the returned VRAM size
> + *
> + * Query VF's VRAM provisioning from all tiles/GTs.
> + *
> + * This function can only be called on PF.
> + *
> + * Return: 0 on success or a negative error code on failure.
> + */
> +int xe_sriov_pf_provision_query_vf_vram(struct xe_device *xe, unsigned int vfid, u64 *size)
> +{
> +	struct xe_tile *tile;
> +	unsigned int id;
> +	u64 total = 0;
NIT: The variable total is unnecessary

> +
> +	xe_assert(xe, vfid);
> +
> +	guard(mutex)(xe_sriov_pf_master_mutex(xe));
> +
> +	for_each_tile(tile, xe, id)
> +		total += xe_gt_sriov_pf_config_get_lmem_locked(tile->primary_gt, vfid);
> +
> +	*size = total;
> +	return 0;
> +}
> diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_provision.h b/drivers/gpu/drm/xe/xe_sriov_pf_provision.h
> index bccf23d51396..f26f49539697 100644
> --- a/drivers/gpu/drm/xe/xe_sriov_pf_provision.h
> +++ b/drivers/gpu/drm/xe/xe_sriov_pf_provision.h
> @@ -24,6 +24,10 @@ int xe_sriov_pf_provision_bulk_apply_priority(struct xe_device *xe, u32 prio);
>  int xe_sriov_pf_provision_apply_vf_priority(struct xe_device *xe, unsigned int vfid, u32 prio);
>  int xe_sriov_pf_provision_query_vf_priority(struct xe_device *xe, unsigned int vfid, u32 *prio);
>  
> +int xe_sriov_pf_provision_bulk_apply_vram(struct xe_device *xe, u64 size);
> +int xe_sriov_pf_provision_apply_vf_vram(struct xe_device *xe, unsigned int vfid, u64 size);
> +int xe_sriov_pf_provision_query_vf_vram(struct xe_device *xe, unsigned int vfid, u64 *size);
> +
>  int xe_sriov_pf_provision_vfs(struct xe_device *xe, unsigned int num_vfs);
>  int xe_sriov_pf_unprovision_vfs(struct xe_device *xe, unsigned int num_vfs);
>  
Code is functionally OK.
Minor comments above; with the vram_alignment duplication clarified:
Reviewed-by: Piotr Piórkowski <piotr.piorkowski@intel.com>
> -- 
> 2.47.1
> 

-- 

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 2/9] drm/xe/pf: Add functions for VRAM provisioning
  2026-02-16 15:02   ` Piotr Piórkowski
@ 2026-02-16 15:11     ` Piotr Piórkowski
  0 siblings, 0 replies; 21+ messages in thread
From: Piotr Piórkowski @ 2026-02-16 15:11 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe

Piotr Piórkowski <piotr.piorkowski@intel.com> wrote on Mon [2026-Feb-16 16:02:20 +0100]:
> Michal Wajdeczko <michal.wajdeczko@intel.com> wrote on Sun [2026-Feb-15 21:33:16 +0100]:
> > We already have functions to configure VF LMEM (aka VRAM) at the
> > tile/GT level, used by the auto-provisioning and debugfs, but we
> > also need functions that work at the device level and configure
> > VRAM on all tiles at once.
> > 
> > These new functions will be used in an upcoming patch.
> > 
> > Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
> > ---
> >  drivers/gpu/drm/xe/xe_sriov_pf_provision.c | 108 +++++++++++++++++++++
> >  drivers/gpu/drm/xe/xe_sriov_pf_provision.h |   4 +
> >  2 files changed, 112 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_provision.c b/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
> > index 01470c42e8a7..e7187d03fe1b 100644
> > --- a/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
> > +++ b/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
> > @@ -436,3 +436,111 @@ int xe_sriov_pf_provision_query_vf_priority(struct xe_device *xe, unsigned int v
> >  
> >  	return !count ? -ENODATA : 0;
> >  }
> > +
> > +static u64 vram_alignment(struct xe_device *xe)
> > +{
> > +	/* this might be platform dependent */
> > +	return SZ_2M;
> > +}
> > +
> 
> In xe_gt_sriov_pf_config.c we already have an almost identical function:
> static u64 pf_get_lmem_alignment(struct xe_gt *gt)
> {
> 	/* this might be platform dependent */
> 	return SZ_2M;
> }
> 
> In my opinion, it is not worth duplicating it, but rather creating one common function.
> 
> 
> > +static u64 vram_per_tile(struct xe_tile *tile, u64 total)
> > +{
> > +	struct xe_device *xe = tile->xe;
> > +	unsigned int tcount = xe->info.tile_count;
> > +	u64 alignment = vram_alignment(xe);
> > +
> > +	total = round_up(total, tcount * alignment);
> > +	return div_u64(total, tcount);
> > +}
> > +
> > +/**
> > + * xe_sriov_pf_provision_bulk_apply_vram() - Change VRAM provisioning for all VFs.
> > + * @xe: the PF &xe_device
> > + * @size: the VRAM size in [bytes] to set
> > + *
> > + * Change all VFs VRAM (LMEM) provisioning on all tiles.
> > + *
> > + * This function can only be called on PF.
> > + *
> > + * Return: 0 on success or a negative error code on failure.
> > + */
> > +int xe_sriov_pf_provision_bulk_apply_vram(struct xe_device *xe, u64 size)
> > +{
> > +	unsigned int num_vfs = xe_sriov_pf_get_totalvfs(xe);
> > +	struct xe_tile *tile;
> > +	unsigned int id;
> > +	int result = 0;
> > +	int err;
> > +

Reading the next patch, I realized that we still need a check for
xe_device_has_lmtt() here. It cannot be done trivially when defining
the sysfs attributes, as the sysfs file will remain visible anyway
due to the DGFX-only check.

The same applies to
xe_sriov_pf_provision_apply_vf_vram
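
e.g. an early bail-out at the top of these functions (just a sketch;
the exact error code is a guess):

	if (!xe_device_has_lmtt(xe))
		return -EPERM;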

> > +	guard(mutex)(xe_sriov_pf_master_mutex(xe));
> > +
> > +	for_each_tile(tile, xe, id) {
> > +		err = xe_gt_sriov_pf_config_bulk_set_lmem_locked(tile->primary_gt,
> > +								 VFID(1), num_vfs,
> > +								 vram_per_tile(tile, size));
> > +		result = result ?: err;
> > +	}
> > +
> > +	return result;
> > +}
> > +
> > +/**
> > + * xe_sriov_pf_provision_apply_vf_vram() - Change single VF VRAM allocatio.
> typo
> 
> > + * @xe: the PF &xe_device
> > + * @vfid: the VF identifier (can't be 0 == PFID)
> > + * @size: VRAM size to set
> > + *
> > + * Change VF's VRAM provisioning on all tiles/GTs.
> > + *
> > + * This function can only be called on PF.
> > + *
> > + * Return: 0 on success or a negative error code on failure.
> > + */
> > +int xe_sriov_pf_provision_apply_vf_vram(struct xe_device *xe, unsigned int vfid, u64 size)
> > +{
> > +	struct xe_tile *tile;
> > +	unsigned int id;
> > +	int result = 0;
> > +	int err;
> > +
> > +	xe_assert(xe, vfid);
> > +
> > +	guard(mutex)(xe_sriov_pf_master_mutex(xe));
> > +
> > +	for_each_tile(tile, xe, id) {
> > +		err = xe_gt_sriov_pf_config_set_lmem_locked(tile->primary_gt, vfid,
> > +							    vram_per_tile(tile, size));
> > +		result = result ?: err;
> > +	}
> > +
> > +	return result;
> > +}
> > +
> > +/**
> > + * xe_sriov_pf_provision_query_vf_vram() - Query VF's VRAM allocation.
> > + * @xe: the PF &xe_device
> > + * @vfid: the VF identifier (can't be 0 == PFID)
> > + * @size: placeholder for the returned VRAM size
> > + *
> > + * Query VF's VRAM provisioning from all tiles/GTs.
> > + *
> > + * This function can only be called on PF.
> > + *
> > + * Return: 0 on success or a negative error code on failure.
> > + */
> > +int xe_sriov_pf_provision_query_vf_vram(struct xe_device *xe, unsigned int vfid, u64 *size)
> > +{
> > +	struct xe_tile *tile;
> > +	unsigned int id;
> > +	u64 total = 0;
> NIT: The variable total is unnecessary
> 
> > +
> > +	xe_assert(xe, vfid);
> > +
> > +	guard(mutex)(xe_sriov_pf_master_mutex(xe));
> > +
> > +	for_each_tile(tile, xe, id)
> > +		total += xe_gt_sriov_pf_config_get_lmem_locked(tile->primary_gt, vfid);
> > +
> > +	*size = total;
> > +	return 0;
> > +}
> > diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_provision.h b/drivers/gpu/drm/xe/xe_sriov_pf_provision.h
> > index bccf23d51396..f26f49539697 100644
> > --- a/drivers/gpu/drm/xe/xe_sriov_pf_provision.h
> > +++ b/drivers/gpu/drm/xe/xe_sriov_pf_provision.h
> > @@ -24,6 +24,10 @@ int xe_sriov_pf_provision_bulk_apply_priority(struct xe_device *xe, u32 prio);
> >  int xe_sriov_pf_provision_apply_vf_priority(struct xe_device *xe, unsigned int vfid, u32 prio);
> >  int xe_sriov_pf_provision_query_vf_priority(struct xe_device *xe, unsigned int vfid, u32 *prio);
> >  
> > +int xe_sriov_pf_provision_bulk_apply_vram(struct xe_device *xe, u64 size);
> > +int xe_sriov_pf_provision_apply_vf_vram(struct xe_device *xe, unsigned int vfid, u64 size);
> > +int xe_sriov_pf_provision_query_vf_vram(struct xe_device *xe, unsigned int vfid, u64 *size);
> > +
> >  int xe_sriov_pf_provision_vfs(struct xe_device *xe, unsigned int num_vfs);
> >  int xe_sriov_pf_unprovision_vfs(struct xe_device *xe, unsigned int num_vfs);
> >  
> Code is functionally OK.
> Minor comments above; with the vram_alignment duplication clarified:
> Reviewed-by: Piotr Piórkowski <piotr.piorkowski@intel.com>
> > -- 
> > 2.47.1
> > 
> 
> -- 

-- 

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 3/9] drm/xe/pf: Allow to change VFs VRAM quota using sysfs
  2026-02-15 20:33 ` [PATCH 3/9] drm/xe/pf: Allow to change VFs VRAM quota using sysfs Michal Wajdeczko
@ 2026-02-16 15:29   ` Piotr Piórkowski
  2026-02-18 21:07   ` Rodrigo Vivi
  1 sibling, 0 replies; 21+ messages in thread
From: Piotr Piórkowski @ 2026-02-16 15:29 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe, Rodrigo Vivi

Michal Wajdeczko <michal.wajdeczko@intel.com> wrote on Sun [2026-Feb-15 21:33:17 +0100]:
> On current discrete platforms, the PF will provision all VFs with a
> fair amount of VRAM (LMEM) during VF enabling. However, in some cases
> this automatic VRAM provisioning might be either non-reproducible or
> sub-optimal, which could break VF migration or impact performance.
> 
> Expose per-VF VRAM quota read-write sysfs attributes to allow the
> admin to change the default VRAM provisioning performed by the PF.
> 
>  /sys/bus/pci/drivers/xe/BDF/
>  ├── sriov_admin/
>      ├── .bulk_profile
>      │   └── vram_quota                 [RW] unsigned integer
>      ├── vf1/
>      │   └── profile
>      │       └── vram_quota             [RW] unsigned integer
>      ├── vf2/
>      │   └── profile
>      │       └── vram_quota             [RW] unsigned integer
> 
> The above values represent the total VRAM provisioned across all
> tiles where VFs were assigned, which currently always means all tiles.
> 
> Note that changing VRAM provisioning is only possible when the VF is
> not running, otherwise the GuC will complain. To make sure that a
> given VF is idle, triggering a VF FLR might be needed.
> 
> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c | 26 ++++++++++++++++++++++++--
>  1 file changed, 24 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
> index 82a1055985ba..09deda2fd8b2 100644
> --- a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
> +++ b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
> @@ -44,7 +44,8 @@ static int emit_choice(char *buf, int choice, const char * const *array, size_t
>   *     ├── .bulk_profile
>   *     │   ├── exec_quantum_ms
>   *     │   ├── preempt_timeout_us
> - *     │   └── sched_priority
> + *     │   ├── sched_priority
> + *     │   └── vram_quota
>   *     ├── pf/
>   *     │   ├── ...
>   *     │   ├── device -> ../../../BDF
> @@ -59,7 +60,8 @@ static int emit_choice(char *buf, int choice, const char * const *array, size_t
>   *     │   └── profile
>   *     │       ├── exec_quantum_ms
>   *     │       ├── preempt_timeout_us
> - *     │       └── sched_priority
> + *     │       ├── sched_priority
> + *     │       └── vram_quota
>   *     ├── vf2/
>   *     :
>   *     └── vfN/
> @@ -132,6 +134,7 @@ static XE_SRIOV_DEV_ATTR_WO(NAME)
>  
>  DEFINE_SIMPLE_BULK_PROVISIONING_SRIOV_DEV_ATTR_WO(exec_quantum_ms, eq, u32);
>  DEFINE_SIMPLE_BULK_PROVISIONING_SRIOV_DEV_ATTR_WO(preempt_timeout_us, pt, u32);
> +DEFINE_SIMPLE_BULK_PROVISIONING_SRIOV_DEV_ATTR_WO(vram_quota, vram, u64);
>  
>  static const char * const sched_priority_names[] = {
>  	[GUC_SCHED_PRIORITY_LOW] = "low",
> @@ -181,12 +184,25 @@ static struct attribute *bulk_profile_dev_attrs[] = {
>  	&xe_sriov_dev_attr_exec_quantum_ms.attr,
>  	&xe_sriov_dev_attr_preempt_timeout_us.attr,
>  	&xe_sriov_dev_attr_sched_priority.attr,
> +	&xe_sriov_dev_attr_vram_quota.attr,
>  	NULL
>  };
>  
> +static umode_t profile_dev_attr_is_visible(struct kobject *kobj,
> +					   struct attribute *attr, int index)
> +{
> +	struct xe_sriov_kobj *vkobj = to_xe_sriov_kobj(kobj);
> +
> +	if (attr == &xe_sriov_dev_attr_vram_quota.attr && !IS_DGFX(vkobj->xe))
> +		return 0;
> +
> +	return attr->mode;
> +}
> +
>  static const struct attribute_group bulk_profile_dev_attr_group = {
>  	.name = ".bulk_profile",
>  	.attrs = bulk_profile_dev_attrs,
> +	.is_visible = profile_dev_attr_is_visible,
>  };
>  
>  static const struct attribute_group *xe_sriov_dev_attr_groups[] = {
> @@ -228,6 +244,7 @@ static XE_SRIOV_VF_ATTR(NAME)
>  
>  DEFINE_SIMPLE_PROVISIONING_SRIOV_VF_ATTR(exec_quantum_ms, eq, u32, "%u\n");
>  DEFINE_SIMPLE_PROVISIONING_SRIOV_VF_ATTR(preempt_timeout_us, pt, u32, "%u\n");
> +DEFINE_SIMPLE_PROVISIONING_SRIOV_VF_ATTR(vram_quota, vram, u64, "%llu\n");
>  
>  static ssize_t xe_sriov_vf_attr_sched_priority_show(struct xe_device *xe, unsigned int vfid,
>  						    char *buf)
> @@ -274,6 +291,7 @@ static struct attribute *profile_vf_attrs[] = {
>  	&xe_sriov_vf_attr_exec_quantum_ms.attr,
>  	&xe_sriov_vf_attr_preempt_timeout_us.attr,
>  	&xe_sriov_vf_attr_sched_priority.attr,
> +	&xe_sriov_vf_attr_vram_quota.attr,
>  	NULL
>  };
>  
> @@ -286,6 +304,10 @@ static umode_t profile_vf_attr_is_visible(struct kobject *kobj,
>  	    !sched_priority_change_allowed(vkobj->vfid))
>  		return attr->mode & 0444;
>  
> +	if (attr == &xe_sriov_vf_attr_vram_quota.attr &&
> +	    (!IS_DGFX(vkobj->xe) || vkobj->vfid == PFID))
> +		return 0;
> +
>  	return attr->mode;
>  }

LGTM:
Reviewed-by: Piotr Piórkowski <piotr.piorkowski@intel.com>


>  
> -- 
> 2.47.1
> 

-- 

^ permalink raw reply	[flat|nested] 21+ messages in thread
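
For background on the .is_visible mechanism the patch uses: the callback runs
once per attribute when the group is registered; returning 0 hides the
attribute entirely, while returning attr->mode keeps the permissions from its
declaration. A minimal standalone sketch with illustrative names (the demo_*
symbols are not from the patch):

  static umode_t demo_attr_is_visible(struct kobject *kobj,
                                      struct attribute *attr, int index)
  {
          /* hide the attribute when it does not apply to this object */
          if (attr == &demo_attr.attr && !demo_condition(kobj))
                  return 0;

          return attr->mode;      /* keep the declared permissions */
  }

  static const struct attribute_group demo_group = {
          .attrs = demo_attrs,
          .is_visible = demo_attr_is_visible,
  };

This is the same pattern the patch applies twice: once to hide vram_quota on
non-DGFX devices, and once more to also hide it for the PF entry.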

* Re: [PATCH 4/9] drm/xe/pf: Use migration-friendly VRAM auto-provisioning
  2026-02-15 20:33 ` [PATCH 4/9] drm/xe/pf: Use migration-friendly VRAM auto-provisioning Michal Wajdeczko
@ 2026-02-16 16:14   ` Piotr Piórkowski
  0 siblings, 0 replies; 21+ messages in thread
From: Piotr Piórkowski @ 2026-02-16 16:14 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe

Michal Wajdeczko <michal.wajdeczko@intel.com> wrote on Sun [2026-Feb-15 21:33:18 +0100]:
> Instead of trying very hard to find the largest fair VRAM (aka LMEM)
> size that could be allocated for VFs on the current tile, pick a
> smaller value, rounded down to a power of two, that is more likely to
> be provisioned in the same manner by other PF instances.
> 
> In some cases, the outcome of the above calculation might not be
> optimal, but it's expected that the admin will fine-tune it using the
> sysfs files.
> 
> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c | 27 ++++++++++++++++++++++
>  1 file changed, 27 insertions(+)
> 
> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> index 23af49dc1bfa..43041af81518 100644
> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> @@ -1919,6 +1919,26 @@ static u64 pf_estimate_fair_lmem(struct xe_gt *gt, unsigned int num_vfs)
>  	return fair;
>  }
>  
> +static u64 pf_profile_fair_lmem(struct xe_gt *gt, unsigned int num_vfs)
> +{
> +	struct xe_tile *tile = gt_to_tile(gt);
> +	bool admin_only_pf = xe_sriov_pf_admin_only(tile->xe);
> +	u64 usable = xe_vram_region_usable_size(tile->mem.vram);
> +	u64 shareable = ALIGN_DOWN(usable, SZ_1G);
> +	u64 alignment = pf_get_lmem_alignment(gt);
> +	u64 fair;
> +
> +	if (admin_only_pf)
> +		fair = div_u64(shareable, num_vfs);
> +	else
> +		fair = div_u64(shareable, 1 + num_vfs);
> +
> +	if (!admin_only_pf && fair)
> +		fair = rounddown_pow_of_two(fair);
> +
> +	return ALIGN_DOWN(fair, alignment);

I would add a hard guarantee that for admin_only_pf, PF will not be left without a minimum
amount of VRAM.
Currently, under certain conditions, it may run out of VRAM - although this is unlikely.
I have already discussed this with Michal offline and I am leaving it up to him to decide
what to do about it.
> +}
> +
>  /**
>   * xe_gt_sriov_pf_config_set_fair_lmem - Provision many VFs with fair LMEM.
>   * @gt: the &xe_gt (can't be media)
> @@ -1932,6 +1952,7 @@ static u64 pf_estimate_fair_lmem(struct xe_gt *gt, unsigned int num_vfs)
>  int xe_gt_sriov_pf_config_set_fair_lmem(struct xe_gt *gt, unsigned int vfid,
>  					unsigned int num_vfs)
>  {
> +	u64 profile;
>  	u64 fair;
>  
>  	xe_gt_assert(gt, vfid);
> @@ -1948,6 +1969,12 @@ int xe_gt_sriov_pf_config_set_fair_lmem(struct xe_gt *gt, unsigned int vfid,
>  	if (!fair)
>  		return -ENOSPC;
>  
> +	profile = pf_profile_fair_lmem(gt, num_vfs);
> +	fair = min(fair, profile);
> +	if (fair < profile)
> +		xe_gt_sriov_info(gt, "Using non-profile provisioning (%s %llu vs %llu)\n",
> +				 "VRAM", fair, profile);
> +
>  	return xe_gt_sriov_pf_config_bulk_set_lmem(gt, vfid, num_vfs, fair);
>  }
>  
Apart from what I mentioned earlier, the code is fine for me.
In the worst case (highly unlikely), provisioning will just fail.
So still:
Reviewed-by: Piotr Piórkowski <piotr.piorkowski@intel.com>



> -- 
> 2.47.1
> 

-- 

^ permalink raw reply	[flat|nested] 21+ messages in thread
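
To make the profile policy concrete, a worked example as a standalone C
approximation (the kernel's ALIGN_DOWN() and rounddown_pow_of_two() are
replaced with plain equivalents, and the final alignment step via
pf_get_lmem_alignment() is omitted):

  #include <stdint.h>
  #include <stdio.h>

  #define SZ_1G (1ULL << 30)

  /* plain-C stand-in for the kernel's rounddown_pow_of_two() */
  static uint64_t rounddown_pow2(uint64_t x)
  {
          while (x & (x - 1))
                  x &= x - 1;     /* clear the lowest set bit */
          return x;
  }

  int main(void)
  {
          uint64_t usable = 14940110848ULL;       /* ~13.9 GiB, as in TEST_VRAM */
          unsigned int num_vfs = 2;
          uint64_t shareable = usable / SZ_1G * SZ_1G;    /* ALIGN_DOWN(usable, SZ_1G) */
          uint64_t fair = shareable / (1 + num_vfs);      /* non-admin: PF takes a share too */

          fair = rounddown_pow2(fair);
          printf("fair per VF: %llu bytes\n", (unsigned long long)fair);
          return 0;
  }

Here the 13 GiB shareable pool divided by 3 gives ~4.33 GiB, which rounds down
to 4 GiB per VF - a value reproducible across PF instances regardless of small
differences in usable VRAM.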

* Re: [PATCH 5/9] drm/xe/tests: Add KUnit tests for new VRAM fair provisioning
  2026-02-15 20:33 ` [PATCH 5/9] drm/xe/tests: Add KUnit tests for new VRAM fair provisioning Michal Wajdeczko
@ 2026-02-16 16:23   ` Piotr Piórkowski
  0 siblings, 0 replies; 21+ messages in thread
From: Piotr Piórkowski @ 2026-02-16 16:23 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe

Michal Wajdeczko <michal.wajdeczko@intel.com> wrote on Sun [2026-Feb-15 21:33:19 +0100]:
> Add basic test cases to check the outcome of the fair VRAM
> provisioning for regular and admin-only PF modes.
> 
> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
> ---
>  .../xe/tests/xe_gt_sriov_pf_config_kunit.c    | 90 ++++++++++++++++++-
>  1 file changed, 89 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/xe/tests/xe_gt_sriov_pf_config_kunit.c b/drivers/gpu/drm/xe/tests/xe_gt_sriov_pf_config_kunit.c
> index 3889dc3e49ca..80e5065beb2c 100644
> --- a/drivers/gpu/drm/xe/tests/xe_gt_sriov_pf_config_kunit.c
> +++ b/drivers/gpu/drm/xe/tests/xe_gt_sriov_pf_config_kunit.c
> @@ -11,6 +11,7 @@
>  #include "xe_pci_test.h"
>  
>  #define TEST_MAX_VFS	63
> +#define TEST_VRAM	0x37a800000ull
>  
>  static void pf_set_admin_mode(struct xe_device *xe, bool enable)
>  {
> @@ -19,6 +20,17 @@ static void pf_set_admin_mode(struct xe_device *xe, bool enable)
>  	KUNIT_EXPECT_EQ(kunit_get_current_test(), enable, xe_sriov_pf_admin_only(xe));
>  }
>  
> +static void pf_set_usable_vram(struct xe_device *xe, u64 usable)
> +{
> +	struct xe_tile *tile = xe_device_get_root_tile(xe);
> +	struct kunit *test = kunit_get_current_test();
> +
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, tile);
> +	xe->mem.vram->usable_size = usable;
> +	tile->mem.vram->usable_size = usable;
> +	KUNIT_ASSERT_EQ(test, usable, xe_vram_region_usable_size(tile->mem.vram));
> +}
> +
>  static const void *num_vfs_gen_param(struct kunit *test, const void *prev, char *desc)
>  {
>  	unsigned long next = 1 + (unsigned long)prev;
> @@ -34,9 +46,11 @@ static int pf_gt_config_test_init(struct kunit *test)
>  {
>  	struct xe_pci_fake_data fake = {
>  		.sriov_mode = XE_SRIOV_MODE_PF,
> -		.platform = XE_TIGERLAKE, /* any random platform with SR-IOV */
> +		.platform = XE_BATTLEMAGE, /* any random DGFX platform with SR-IOV */
>  		.subplatform = XE_SUBPLATFORM_NONE,
> +		.graphics_verx100 = 2001,
>  	};
> +	struct xe_vram_region *vram;
>  	struct xe_device *xe;
>  	struct xe_gt *gt;
>  
> @@ -50,6 +64,13 @@ static int pf_gt_config_test_init(struct kunit *test)
>  	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, gt);
>  	test->priv = gt;
>  
> +	/* pretend it has some VRAM */
> +	KUNIT_ASSERT_TRUE(test, IS_DGFX(xe));
> +	vram = kunit_kzalloc(test, sizeof(*vram), GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, vram);
> +	vram->usable_size = TEST_VRAM;
> +	xe->mem.vram = xe->tiles[0].mem.vram = vram;
> +
>  	/* pretend it can support up to 63 VFs */
>  	xe->sriov.pf.device_total_vfs = TEST_MAX_VFS;
>  	xe->sriov.pf.driver_max_vfs = TEST_MAX_VFS;
> @@ -189,13 +210,80 @@ static void fair_ggtt(struct kunit *test)
>  		KUNIT_ASSERT_EQ(test, SZ_2G, pf_profile_fair_ggtt(gt, num_vfs));
>  }
>  
> +static const u64 vram_sizes[] = {
> +	SZ_4G - SZ_512M,
> +	SZ_8G + SZ_4G - SZ_512M,
> +	SZ_16G - SZ_512M,
> +	SZ_32G - SZ_512M,
> +	SZ_64G - SZ_512M,
> +	TEST_VRAM,
> +};
> +
> +static void u64_param_get_desc(const u64 *p, char *desc)
> +{
> +	string_get_size(*p, 1, STRING_UNITS_2, desc, KUNIT_PARAM_DESC_SIZE);
> +}
> +
> +KUNIT_ARRAY_PARAM(vram_size, vram_sizes, u64_param_get_desc);
> +
> +static void fair_vram_1vf(struct kunit *test)
> +{
> +	const u64 usable = *(const u64 *)test->param_value;
> +	struct xe_gt *gt = test->priv;
> +	struct xe_device *xe = gt_to_xe(gt);
> +
> +	pf_set_admin_mode(xe, false);
> +	pf_set_usable_vram(xe, usable);
> +
> +	KUNIT_EXPECT_NE(test, 0, pf_profile_fair_lmem(gt, 1));
> +	KUNIT_EXPECT_GE(test, usable, pf_profile_fair_lmem(gt, 1));
> +	KUNIT_EXPECT_TRUE(test, is_power_of_2(pf_profile_fair_lmem(gt, 1)));
> +	KUNIT_EXPECT_GE(test, usable - pf_profile_fair_lmem(gt, 1), pf_profile_fair_lmem(gt, 1));
> +}
> +
> +static void fair_vram_1vf_admin_only(struct kunit *test)
> +{
> +	const u64 usable = *(const u64 *)test->param_value;
> +	struct xe_gt *gt = test->priv;
> +	struct xe_device *xe = gt_to_xe(gt);
> +
> +	pf_set_admin_mode(xe, true);
> +	pf_set_usable_vram(xe, usable);
> +
> +	KUNIT_EXPECT_NE(test, 0, pf_profile_fair_lmem(gt, 1));
> +	KUNIT_EXPECT_GE(test, usable, pf_profile_fair_lmem(gt, 1));
> +	KUNIT_EXPECT_LT(test, usable - pf_profile_fair_lmem(gt, 1), pf_profile_fair_lmem(gt, 1));
> +	KUNIT_EXPECT_TRUE(test, IS_ALIGNED(pf_profile_fair_lmem(gt, 1), SZ_1G));
> +}
> +
> +static void fair_vram(struct kunit *test)
> +{
> +	unsigned int num_vfs = (unsigned long)test->param_value;
> +	struct xe_gt *gt = test->priv;
> +	struct xe_device *xe = gt_to_xe(gt);
> +	u64 alignment = pf_get_lmem_alignment(gt);
> +	char size[10];
> +
> +	pf_set_admin_mode(xe, false);
> +
> +	string_get_size(pf_profile_fair_lmem(gt, num_vfs), 1, STRING_UNITS_2, size, sizeof(size));
> +	kunit_info(test, "fair %s %llx\n", size, pf_profile_fair_lmem(gt, num_vfs));
> +
> +	KUNIT_EXPECT_TRUE(test, is_power_of_2(pf_profile_fair_lmem(gt, num_vfs)));
> +	KUNIT_EXPECT_TRUE(test, IS_ALIGNED(pf_profile_fair_lmem(gt, num_vfs), alignment));
> +	KUNIT_EXPECT_GE(test, TEST_VRAM, num_vfs * pf_profile_fair_lmem(gt, num_vfs));
> +}
> +
>  static struct kunit_case pf_gt_config_test_cases[] = {
>  	KUNIT_CASE(fair_contexts_1vf),
>  	KUNIT_CASE(fair_doorbells_1vf),
>  	KUNIT_CASE(fair_ggtt_1vf),
> +	KUNIT_CASE_PARAM(fair_vram_1vf, vram_size_gen_params),
> +	KUNIT_CASE_PARAM(fair_vram_1vf_admin_only, vram_size_gen_params),
>  	KUNIT_CASE_PARAM(fair_contexts, num_vfs_gen_param),
>  	KUNIT_CASE_PARAM(fair_doorbells, num_vfs_gen_param),
>  	KUNIT_CASE_PARAM(fair_ggtt, num_vfs_gen_param),
> +	KUNIT_CASE_PARAM(fair_vram, num_vfs_gen_param),
>  	{}
>  };

LGTM:
Reviewed-by: Piotr Piórkowski <piotr.piorkowski@intel.com>

>  
> -- 
> 2.47.1
> 

-- 

^ permalink raw reply	[flat|nested] 21+ messages in thread
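
For readers less familiar with parameterized KUnit tests: the
KUNIT_ARRAY_PARAM(name, array, get_desc) macro generates a name##_gen_params
generator that feeds each array element to the test case and uses get_desc for
the per-case name, which is how vram_size_gen_params above comes into
existence. A minimal sketch with illustrative names:

  #include <kunit/test.h>

  static const int demo_values[] = { 1, 2, 4 };

  static void demo_get_desc(const int *v, char *desc)
  {
          snprintf(desc, KUNIT_PARAM_DESC_SIZE, "value-%d", *v);
  }

  /* defines demo_gen_params() for use with KUNIT_CASE_PARAM() */
  KUNIT_ARRAY_PARAM(demo, demo_values, demo_get_desc);

  static void demo_test(struct kunit *test)
  {
          int v = *(const int *)test->param_value;

          KUNIT_EXPECT_GT(test, v, 0);
  }
  /* in the suite: KUNIT_CASE_PARAM(demo_test, demo_gen_params) */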

* Re: [PATCH 6/9] drm/xe/pf: Don't check for empty config
  2026-02-15 20:33 ` [PATCH 6/9] drm/xe/pf: Don't check for empty config Michal Wajdeczko
@ 2026-02-16 16:27   ` Piotr Piórkowski
  0 siblings, 0 replies; 21+ messages in thread
From: Piotr Piórkowski @ 2026-02-16 16:27 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe

Michal Wajdeczko <michal.wajdeczko@intel.com> wrote on Sun [2026-Feb-15 21:33:20 +0100]:
> We already turn off VFs auto-provisioning once we detect manual VFs
> provisioning over debugfs, so we can skip the additional check that
> all VFs' configs are still empty.
> 
> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_sriov_pf_provision.c | 13 -------------
>  1 file changed, 13 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_provision.c b/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
> index e7187d03fe1b..95c8f01e0264 100644
> --- a/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
> +++ b/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
> @@ -32,17 +32,6 @@ static bool pf_auto_provisioning_mode(struct xe_device *xe)
>  	return xe->sriov.pf.provision.mode == XE_SRIOV_PROVISIONING_MODE_AUTO;
>  }
>  
> -static bool pf_needs_provisioning(struct xe_gt *gt, unsigned int num_vfs)
> -{
> -	unsigned int n;
> -
> -	for (n = 1; n <= num_vfs; n++)
> -		if (!xe_gt_sriov_pf_config_is_empty(gt, n))
> -			return false;
> -
> -	return true;
> -}
> -
>  static int pf_provision_vfs(struct xe_device *xe, unsigned int num_vfs)
>  {
>  	struct xe_gt *gt;
> @@ -51,8 +40,6 @@ static int pf_provision_vfs(struct xe_device *xe, unsigned int num_vfs)
>  	int err;
>  
>  	for_each_gt(gt, xe, id) {
> -		if (!pf_needs_provisioning(gt, num_vfs))
> -			return -EUCLEAN;
>  		err = xe_gt_sriov_pf_config_set_fair(gt, VFID(1), num_vfs);
>  		result = result ?: err;
>  	}

LGTM:
Reviewed-by: Piotr Piórkowski <piotr.piorkowski@intel.com>


> -- 
> 2.47.1
> 

-- 

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 7/9] drm/xe/pf: Prefer guard(mutex) when doing fair LMEM provisioning
  2026-02-15 20:33 ` [PATCH 7/9] drm/xe/pf: Prefer guard(mutex) when doing fair LMEM provisioning Michal Wajdeczko
@ 2026-02-16 16:36   ` Piotr Piórkowski
  0 siblings, 0 replies; 21+ messages in thread
From: Piotr Piórkowski @ 2026-02-16 16:36 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe

Michal Wajdeczko <michal.wajdeczko@intel.com> wrote on Sun [2026-Feb-15 21:33:21 +0100]:
> We will add more code there, and with guard() it will be easier to
> avoid mistakes in unlocking.
> 
> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> index 43041af81518..1a9f3b85526c 100644
> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> @@ -1962,10 +1962,9 @@ int xe_gt_sriov_pf_config_set_fair_lmem(struct xe_gt *gt, unsigned int vfid,
>  	if (!xe_device_has_lmtt(gt_to_xe(gt)))
>  		return 0;
>  
> -	mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
> +	guard(mutex)(xe_gt_sriov_pf_master_mutex(gt));
> +
>  	fair = pf_estimate_fair_lmem(gt, num_vfs);
> -	mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));
> -
>  	if (!fair)
>  		return -ENOSPC;
>  
> @@ -1975,7 +1974,7 @@ int xe_gt_sriov_pf_config_set_fair_lmem(struct xe_gt *gt, unsigned int vfid,
>  		xe_gt_sriov_info(gt, "Using non-profile provisioning (%s %llu vs %llu)\n",
>  				 "VRAM", fair, profile);
>  
> -	return xe_gt_sriov_pf_config_bulk_set_lmem(gt, vfid, num_vfs, fair);
> +	return xe_gt_sriov_pf_config_bulk_set_lmem_locked(gt, vfid, num_vfs, fair);
>  }
>  
>  /**



LGTM:
Reviewed-by: Piotr Piórkowski <piotr.piorkowski@intel.com>



> -- 
> 2.47.1
> 

-- 

^ permalink raw reply	[flat|nested] 21+ messages in thread
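
For context, guard(mutex) comes from the scope-based cleanup helpers in
linux/cleanup.h: the mutex is taken at the guard declaration and released
automatically when the guard goes out of scope, on every return path. A
minimal sketch:

  #include <linux/cleanup.h>
  #include <linux/mutex.h>

  static DEFINE_MUTEX(demo_lock);

  static int demo(int arg)
  {
          guard(mutex)(&demo_lock);       /* locked from here on */

          if (arg < 0)
                  return -EINVAL;         /* unlocked automatically */

          return 0;                       /* ...and here as well */
  }

This is what lets the patch add early returns to the provisioning path without
risking a missed mutex_unlock().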

* Re: [PATCH 8/9] drm/xe/pf: Skip VRAM auto-provisioning if already provisioned
  2026-02-15 20:33 ` [PATCH 8/9] drm/xe/pf: Skip VRAM auto-provisioning if already provisioned Michal Wajdeczko
@ 2026-02-16 16:59   ` Piotr Piórkowski
  0 siblings, 0 replies; 21+ messages in thread
From: Piotr Piórkowski @ 2026-02-16 16:59 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe

Michal Wajdeczko <michal.wajdeczko@intel.com> wrote on Sun [2026-Feb-15 21:33:22 +0100]:
> In case the admin provisions VF VRAM using sysfs prior to enabling
> the VFs, that provisioning would be lost, as the PF would run VRAM
> auto-provisioning anyway. To avoid that, skip this auto-provisioning
> if any VF has already been provisioned with VRAM.
> 
> To help the admin find any mistakes, add diagnostic messages about
> which VFs were provisioned with VRAM and which were missed.
> 
> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c | 56 ++++++++++++++++++++++
>  1 file changed, 56 insertions(+)
> 
> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> index 1a9f3b85526c..b1eccd6712f4 100644
> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> @@ -1939,6 +1939,59 @@ static u64 pf_profile_fair_lmem(struct xe_gt *gt, unsigned int num_vfs)
>  	return ALIGN_DOWN(fair, alignment);
>  }
>  
> +static void __pf_show_provisioning_lmem(struct xe_gt *gt, unsigned int first_vf,
> +					unsigned int num_vfs, bool provisioned)
> +{
> +	unsigned int allvfs = 1 + xe_gt_sriov_pf_get_totalvfs(gt); /* PF plus VFs */
> +	unsigned long *bitmap __free(bitmap) = bitmap_zalloc(allvfs, GFP_KERNEL);
> +	unsigned int weight;
> +	unsigned int n;
> +
> +	if (!bitmap)
> +		return;
> +
> +	for (n = first_vf; n < first_vf + num_vfs; n++) {
> +		if (!!pf_get_vf_config_lmem(gt, VFID(n)) == provisioned)
> +			bitmap_set(bitmap, n, 1);
> +	}
> +
> +	weight = bitmap_weight(bitmap, allvfs);
> +	if (!weight)
> +		return;
> +
> +	xe_gt_sriov_info(gt, "VF%s%*pbl %s provisioned with VRAM\n",
> +			 weight > 1 ? "s " : "", allvfs, bitmap,
> +			 provisioned ? "already" : "not");
> +}
> +
> +static void pf_show_all_provisioned_lmem(struct xe_gt *gt)
> +{
> +	__pf_show_provisioning_lmem(gt, 1, xe_gt_sriov_pf_get_totalvfs(gt), true);

NIT: In my opinion, I would use VFID(1) here,
so we don't have a magic number.

> +}
> +
> +static void pf_show_unprovisioned_lmem(struct xe_gt *gt, unsigned int first_vf,
> +				       unsigned int num_vfs)
> +{
> +	__pf_show_provisioning_lmem(gt, first_vf, num_vfs, false);
> +}
> +
> +static bool pf_needs_provision_lmem(struct xe_gt *gt, unsigned int first_vf,
> +				    unsigned int num_vfs)
> +{
> +	unsigned int vfid;
> +
> +	for (vfid = first_vf; vfid < first_vf + num_vfs; vfid++) {
> +		if (pf_get_vf_config_lmem(gt, vfid)) {
> +			pf_show_all_provisioned_lmem(gt);
> +			pf_show_unprovisioned_lmem(gt, first_vf, num_vfs);
> +			return false;
> +		}
> +	}
> +
> +	pf_show_all_provisioned_lmem(gt);
> +	return true;
> +}
> +
>  /**
>   * xe_gt_sriov_pf_config_set_fair_lmem - Provision many VFs with fair LMEM.
>   * @gt: the &xe_gt (can't be media)
> @@ -1964,6 +2017,9 @@ int xe_gt_sriov_pf_config_set_fair_lmem(struct xe_gt *gt, unsigned int vfid,
>  
>  	guard(mutex)(xe_gt_sriov_pf_master_mutex(gt));
>  
> +	if (!pf_needs_provision_lmem(gt, vfid, num_vfs))
> +		return 0;
> +
>  	fair = pf_estimate_fair_lmem(gt, num_vfs);
>  	if (!fair)
>  		return -ENOSPC;


LGTM:
Reviewed-by: Piotr Piórkowski <piotr.piorkowski@intel.com>

> -- 
> 2.47.1
> 

-- 

^ permalink raw reply	[flat|nested] 21+ messages in thread
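
A note on the %*pbl format specifier used in the diagnostics above: it prints
a bitmap as a ranged list, with the field width giving the number of bits to
scan. A minimal sketch:

  #include <linux/bitmap.h>
  #include <linux/printk.h>

  static void demo_pbl(void)
  {
          DECLARE_BITMAP(vfs, 8);

          bitmap_zero(vfs, 8);
          bitmap_set(vfs, 1, 3);  /* VFs 1-3 */
          bitmap_set(vfs, 5, 1);  /* VF 5 */

          pr_info("VFs %*pbl provisioned\n", 8, vfs);
          /* prints: VFs 1-3,5 provisioned */
  }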

* Re: [PATCH 9/9] drm/xe/pf: Add documentation for vram_quota
  2026-02-15 20:33 ` [PATCH 9/9] drm/xe/pf: Add documentation for vram_quota Michal Wajdeczko
@ 2026-02-16 17:04   ` Piotr Piórkowski
  0 siblings, 0 replies; 21+ messages in thread
From: Piotr Piórkowski @ 2026-02-16 17:04 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe, Rodrigo Vivi

Michal Wajdeczko <michal.wajdeczko@intel.com> wrote on Sun [2026-Feb-15 21:33:23 +0100]:
> Add initial documentation for the recently added VRAM provisioning
> Xe driver specific SR-IOV sysfs files under device/sriov_admin.
> 
> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> ---
>  .../ABI/testing/sysfs-driver-intel-xe-sriov   | 31 +++++++++++++++++++
>  1 file changed, 31 insertions(+)
> 
> diff --git a/Documentation/ABI/testing/sysfs-driver-intel-xe-sriov b/Documentation/ABI/testing/sysfs-driver-intel-xe-sriov
> index 7f5ef9eada53..14b61bd2d602 100644
> --- a/Documentation/ABI/testing/sysfs-driver-intel-xe-sriov
> +++ b/Documentation/ABI/testing/sysfs-driver-intel-xe-sriov
> @@ -129,6 +129,37 @@ Description:
>  			-EIO if FW refuses to change the provisioning.
>  
>  
> +What:		/sys/bus/pci/drivers/xe/.../sriov_admin/.bulk_profile/vram_quota
> +What:		/sys/bus/pci/drivers/xe/.../sriov_admin/vf<n>/profile/vram_quota
> +Date:		February 2026
> +KernelVersion:	7.0
> +Contact:	intel-xe@lists.freedesktop.org
> +Description:
> +		These files allow performing initial VFs VRAM provisioning prior to
> +		enabling the VFs, or changing VFs VRAM provisioning once the VFs are
> +		enabled.
> +		Any non-zero initial VRAM provisioning will block VFs auto-provisioning.
> +		Without initial VRAM provisioning, those files will show the result of
> +		the VRAM auto-provisioning performed by the PF once the VFs are enabled.
> +		Once the VFs are disabled, all VRAM provisioning will be released.
> +		These files are visible only on discrete Intel Xe platforms with VRAM
> +		and are writeable only if dynamic VFs VRAM provisioning is supported.
> +
> +		.bulk_profile/vram_quota: (WO) unsigned integer
> +			The amount of the provisioned VRAM in [bytes] for each VF.
> +			Actual final value might be aligned per HW/FW requirements.
> +
> +		profile/vram_quota: (RW) unsigned integer
> +			The amount of the provisioned VRAM in [bytes] for this VF.
> +			Actual quantum value might be aligned per HW/FW requirements.

Copy-paste from exec_quantum_ms ? :)
> +
> +			Default is 0 (unprovisioned).
> +
> +		Writes to these attributes may fail with errors like:
> +			-EINVAL if provided input is malformed or not recognized,
> +			-EPERM if change is not applicable on given HW/FW,
> +			-EIO if FW refuses to change the provisioning.
> +
> +
>  What:		/sys/bus/pci/drivers/xe/.../sriov_admin/vf<n>/stop
>  Date:		October 2025
>  KernelVersion:	6.19


With fix:
Reviewed-by: Piotr Piórkowski <piotr.piorkowski@intel.com>

> -- 
> 2.47.1
> 

-- 

^ permalink raw reply	[flat|nested] 21+ messages in thread
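
To illustrate the documented flow from user space, a hypothetical sketch that
provisions VF1 with 2 GiB of VRAM before the VFs are enabled (the BDF in the
path is an example only; failures surface as the error codes listed above):

  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
          const char *attr = "/sys/bus/pci/drivers/xe/0000:03:00.0/"
                             "sriov_admin/vf1/profile/vram_quota";
          FILE *f = fopen(attr, "w");

          if (!f) {
                  perror("fopen");        /* e.g. attribute hidden on non-DGFX */
                  return EXIT_FAILURE;
          }
          /* value in bytes; the driver may align it per HW/FW requirements */
          if (fprintf(f, "%llu\n", 2ULL << 30) < 0)
                  perror("write");

          return fclose(f) ? EXIT_FAILURE : EXIT_SUCCESS;
  }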

* Re: [PATCH 3/9] drm/xe/pf: Allow to change VFs VRAM quota using sysfs
  2026-02-15 20:33 ` [PATCH 3/9] drm/xe/pf: Allow to change VFs VRAM quota using sysfs Michal Wajdeczko
  2026-02-16 15:29   ` Piotr Piórkowski
@ 2026-02-18 21:07   ` Rodrigo Vivi
  1 sibling, 0 replies; 21+ messages in thread
From: Rodrigo Vivi @ 2026-02-18 21:07 UTC (permalink / raw)
  To: Michal Wajdeczko; +Cc: intel-xe

On Sun, Feb 15, 2026 at 09:33:17PM +0100, Michal Wajdeczko wrote:
> On current discrete platforms, the PF will provision all VFs with a
> fair amount of VRAM (LMEM) during VF enabling. However, in some cases
> this automatic VRAM provisioning might be either non-reproducible or
> sub-optimal, which could break VF migration or impact performance.
> 
> Expose per-VF VRAM quota read-write sysfs attributes to allow the
> admin to change the default VRAM provisioning performed by the PF.
> 
>  /sys/bus/pci/drivers/xe/BDF/
>  ├── sriov_admin/
>      ├── .bulk_profile
>      │   └── vram_quota                 [RW] unsigned integer
>      ├── vf1/
>      │   └── profile
>      │       └── vram_quota             [RW] unsigned integer
>      ├── vf2/
>      │   └── profile
>      │       └── vram_quota             [RW] unsigned integer
> 

Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>

> The above values represent the total VRAM provisioned across all
> tiles where VFs were assigned, which currently always means all tiles.
> 
> Note that changing VRAM provisioning is only possible when the VF is
> not running, otherwise the GuC will complain. To make sure that a
> given VF is idle, triggering a VF FLR might be needed.
> 
> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c | 26 ++++++++++++++++++++++++--
>  1 file changed, 24 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
> index 82a1055985ba..09deda2fd8b2 100644
> --- a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
> +++ b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
> @@ -44,7 +44,8 @@ static int emit_choice(char *buf, int choice, const char * const *array, size_t
>   *     ├── .bulk_profile
>   *     │   ├── exec_quantum_ms
>   *     │   ├── preempt_timeout_us
> - *     │   └── sched_priority
> + *     │   ├── sched_priority
> + *     │   └── vram_quota
>   *     ├── pf/
>   *     │   ├── ...
>   *     │   ├── device -> ../../../BDF
> @@ -59,7 +60,8 @@ static int emit_choice(char *buf, int choice, const char * const *array, size_t
>   *     │   └── profile
>   *     │       ├── exec_quantum_ms
>   *     │       ├── preempt_timeout_us
> - *     │       └── sched_priority
> + *     │       ├── sched_priority
> + *     │       └── vram_quota
>   *     ├── vf2/
>   *     :
>   *     └── vfN/
> @@ -132,6 +134,7 @@ static XE_SRIOV_DEV_ATTR_WO(NAME)
>  
>  DEFINE_SIMPLE_BULK_PROVISIONING_SRIOV_DEV_ATTR_WO(exec_quantum_ms, eq, u32);
>  DEFINE_SIMPLE_BULK_PROVISIONING_SRIOV_DEV_ATTR_WO(preempt_timeout_us, pt, u32);
> +DEFINE_SIMPLE_BULK_PROVISIONING_SRIOV_DEV_ATTR_WO(vram_quota, vram, u64);
>  
>  static const char * const sched_priority_names[] = {
>  	[GUC_SCHED_PRIORITY_LOW] = "low",
> @@ -181,12 +184,25 @@ static struct attribute *bulk_profile_dev_attrs[] = {
>  	&xe_sriov_dev_attr_exec_quantum_ms.attr,
>  	&xe_sriov_dev_attr_preempt_timeout_us.attr,
>  	&xe_sriov_dev_attr_sched_priority.attr,
> +	&xe_sriov_dev_attr_vram_quota.attr,
>  	NULL
>  };
>  
> +static umode_t profile_dev_attr_is_visible(struct kobject *kobj,
> +					   struct attribute *attr, int index)
> +{
> +	struct xe_sriov_kobj *vkobj = to_xe_sriov_kobj(kobj);
> +
> +	if (attr == &xe_sriov_dev_attr_vram_quota.attr && !IS_DGFX(vkobj->xe))
> +		return 0;
> +
> +	return attr->mode;
> +}
> +
>  static const struct attribute_group bulk_profile_dev_attr_group = {
>  	.name = ".bulk_profile",
>  	.attrs = bulk_profile_dev_attrs,
> +	.is_visible = profile_dev_attr_is_visible,
>  };
>  
>  static const struct attribute_group *xe_sriov_dev_attr_groups[] = {
> @@ -228,6 +244,7 @@ static XE_SRIOV_VF_ATTR(NAME)
>  
>  DEFINE_SIMPLE_PROVISIONING_SRIOV_VF_ATTR(exec_quantum_ms, eq, u32, "%u\n");
>  DEFINE_SIMPLE_PROVISIONING_SRIOV_VF_ATTR(preempt_timeout_us, pt, u32, "%u\n");
> +DEFINE_SIMPLE_PROVISIONING_SRIOV_VF_ATTR(vram_quota, vram, u64, "%llu\n");
>  
>  static ssize_t xe_sriov_vf_attr_sched_priority_show(struct xe_device *xe, unsigned int vfid,
>  						    char *buf)
> @@ -274,6 +291,7 @@ static struct attribute *profile_vf_attrs[] = {
>  	&xe_sriov_vf_attr_exec_quantum_ms.attr,
>  	&xe_sriov_vf_attr_preempt_timeout_us.attr,
>  	&xe_sriov_vf_attr_sched_priority.attr,
> +	&xe_sriov_vf_attr_vram_quota.attr,
>  	NULL
>  };
>  
> @@ -286,6 +304,10 @@ static umode_t profile_vf_attr_is_visible(struct kobject *kobj,
>  	    !sched_priority_change_allowed(vkobj->vfid))
>  		return attr->mode & 0444;
>  
> +	if (attr == &xe_sriov_vf_attr_vram_quota.attr &&
> +	    (!IS_DGFX(vkobj->xe) || vkobj->vfid == PFID))
> +		return 0;
> +
>  	return attr->mode;
>  }
>  
> -- 
> 2.47.1
> 

^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2026-02-18 21:07 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-02-15 20:33 [PATCH 0/9] drm/xe/pf: Allow to change VFs VRAM quota using sysfs Michal Wajdeczko
2026-02-15 20:33 ` [PATCH 1/9] drm/xe/pf: Add locked variants of VRAM configuration functions Michal Wajdeczko
2026-02-16 14:37   ` Piotr Piórkowski
2026-02-15 20:33 ` [PATCH 2/9] drm/xe/pf: Add functions for VRAM provisioning Michal Wajdeczko
2026-02-16 15:02   ` Piotr Piórkowski
2026-02-16 15:11     ` Piotr Piórkowski
2026-02-15 20:33 ` [PATCH 3/9] drm/xe/pf: Allow to change VFs VRAM quota using sysfs Michal Wajdeczko
2026-02-16 15:29   ` Piotr Piórkowski
2026-02-18 21:07   ` Rodrigo Vivi
2026-02-15 20:33 ` [PATCH 4/9] drm/xe/pf: Use migration-friendly VRAM auto-provisioning Michal Wajdeczko
2026-02-16 16:14   ` Piotr Piórkowski
2026-02-15 20:33 ` [PATCH 5/9] drm/xe/tests: Add KUnit tests for new VRAM fair provisioning Michal Wajdeczko
2026-02-16 16:23   ` Piotr Piórkowski
2026-02-15 20:33 ` [PATCH 6/9] drm/xe/pf: Don't check for empty config Michal Wajdeczko
2026-02-16 16:27   ` Piotr Piórkowski
2026-02-15 20:33 ` [PATCH 7/9] drm/xe/pf: Prefer guard(mutex) when doing fair LMEM provisioning Michal Wajdeczko
2026-02-16 16:36   ` Piotr Piórkowski
2026-02-15 20:33 ` [PATCH 8/9] drm/xe/pf: Skip VRAM auto-provisioning if already provisioned Michal Wajdeczko
2026-02-16 16:59   ` Piotr Piórkowski
2026-02-15 20:33 ` [PATCH 9/9] drm/xe/pf: Add documentation for vram_quota Michal Wajdeczko
2026-02-16 17:04   ` Piotr Piórkowski

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox