From: Rodrigo Vivi <rodrigo.vivi@intel.com>
To: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: intel-xe@lists.freedesktop.org,
"Piotr Piórkowski" <piotr.piorkowski@intel.com>
Subject: Re: [PATCH v2 04/10] drm/xe/pf: Allow to change VFs VRAM quota using sysfs
Date: Wed, 18 Feb 2026 16:21:17 -0500
Message-ID: <aZYtTV5WyU27j8PW@intel.com>
In-Reply-To: <20260218205553.3561-5-michal.wajdeczko@intel.com>
On Wed, Feb 18, 2026 at 09:55:46PM +0100, Michal Wajdeczko wrote:
> On current discrete platforms, the PF provisions all VFs with a fair
> share of the VRAM (LMEM) when the VFs are enabled. However, in some
> cases this automatic VRAM provisioning might be either non-reproducible
> or sub-optimal, which could break VF migration or impact performance.
>
> Expose per-VF VRAM quota read-write sysfs attributes to allow the
> admin to change the default VRAM provisioning performed by the PF.
>
> /sys/bus/pci/drivers/xe/BDF/
> └── sriov_admin/
>     ├── .bulk_profile/
>     │   └── vram_quota [RW] unsigned integer
>     ├── vf1/
>     │   └── profile/
>     │       └── vram_quota [RW] unsigned integer
>     └── vf2/
>         └── profile/
>             └── vram_quota [RW] unsigned integer
>
> The above values represent the total VRAM provisioned across all tiles
> where the VFs were assigned (currently this is always all tiles).
>
> Note that changing the VRAM provisioning is only possible while the VF
> is not running, otherwise the GuC will complain. To make sure that a
> given VF is idle, triggering a VF FLR might be needed.
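
A quick sketch of the admin flow this enables. The quota value and the
mock directory below are illustrative only -- on real hardware you would
use the actual /sys/bus/pci/drivers/xe/BDF/sriov_admin/ path, but here
the layout is recreated in a temp dir so the commands run anywhere:

```shell
# Mock the sysfs layout described in the commit message (illustrative).
base=$(mktemp -d)/sriov_admin
mkdir -p "$base/vf1/profile"

# Set VF1's quota to an example value of 1 GiB (bytes), then read it back.
echo 1073741824 > "$base/vf1/profile/vram_quota"
cat "$base/vf1/profile/vram_quota"    # prints 1073741824
```

On a real system the write would fail (and the GuC would complain) if
the VF were still running, hence the FLR note above.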
>
> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Reviewed-by: Piotr Piórkowski <piotr.piorkowski@intel.com>
> ---
> v2: allow VRAM change only if LMTT (Piotr)
Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> ---
> drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c | 31 ++++++++++++++++++++++++--
> 1 file changed, 29 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
> index 82a1055985ba..aa05c143a4d6 100644
> --- a/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
> +++ b/drivers/gpu/drm/xe/xe_sriov_pf_sysfs.c
> @@ -9,6 +9,7 @@
> #include <drm/drm_managed.h>
>
> #include "xe_assert.h"
> +#include "xe_device.h"
> #include "xe_pci_sriov.h"
> #include "xe_pm.h"
> #include "xe_sriov.h"
> @@ -44,7 +45,8 @@ static int emit_choice(char *buf, int choice, const char * const *array, size_t
> * ├── .bulk_profile
> * │ ├── exec_quantum_ms
> * │ ├── preempt_timeout_us
> - * │ └── sched_priority
> + * │ ├── sched_priority
> + * │ └── vram_quota
> * ├── pf/
> * │ ├── ...
> * │ ├── device -> ../../../BDF
> @@ -59,7 +61,8 @@ static int emit_choice(char *buf, int choice, const char * const *array, size_t
> * │ └── profile
> * │ ├── exec_quantum_ms
> * │ ├── preempt_timeout_us
> - * │ └── sched_priority
> + * │ ├── sched_priority
> + * │ └── vram_quota
> * ├── vf2/
> * :
> * └── vfN/
> @@ -132,6 +135,7 @@ static XE_SRIOV_DEV_ATTR_WO(NAME)
>
> DEFINE_SIMPLE_BULK_PROVISIONING_SRIOV_DEV_ATTR_WO(exec_quantum_ms, eq, u32);
> DEFINE_SIMPLE_BULK_PROVISIONING_SRIOV_DEV_ATTR_WO(preempt_timeout_us, pt, u32);
> +DEFINE_SIMPLE_BULK_PROVISIONING_SRIOV_DEV_ATTR_WO(vram_quota, vram, u64);
>
> static const char * const sched_priority_names[] = {
> [GUC_SCHED_PRIORITY_LOW] = "low",
> @@ -181,12 +185,26 @@ static struct attribute *bulk_profile_dev_attrs[] = {
> &xe_sriov_dev_attr_exec_quantum_ms.attr,
> &xe_sriov_dev_attr_preempt_timeout_us.attr,
> &xe_sriov_dev_attr_sched_priority.attr,
> + &xe_sriov_dev_attr_vram_quota.attr,
> NULL
> };
>
> +static umode_t profile_dev_attr_is_visible(struct kobject *kobj,
> + struct attribute *attr, int index)
> +{
> + struct xe_sriov_kobj *vkobj = to_xe_sriov_kobj(kobj);
> +
> + if (attr == &xe_sriov_dev_attr_vram_quota.attr &&
> + !xe_device_has_lmtt(vkobj->xe))
> + return 0;
> +
> + return attr->mode;
> +}
> +
> static const struct attribute_group bulk_profile_dev_attr_group = {
> .name = ".bulk_profile",
> .attrs = bulk_profile_dev_attrs,
> + .is_visible = profile_dev_attr_is_visible,
> };
>
> static const struct attribute_group *xe_sriov_dev_attr_groups[] = {
> @@ -228,6 +246,7 @@ static XE_SRIOV_VF_ATTR(NAME)
>
> DEFINE_SIMPLE_PROVISIONING_SRIOV_VF_ATTR(exec_quantum_ms, eq, u32, "%u\n");
> DEFINE_SIMPLE_PROVISIONING_SRIOV_VF_ATTR(preempt_timeout_us, pt, u32, "%u\n");
> +DEFINE_SIMPLE_PROVISIONING_SRIOV_VF_ATTR(vram_quota, vram, u64, "%llu\n");
>
> static ssize_t xe_sriov_vf_attr_sched_priority_show(struct xe_device *xe, unsigned int vfid,
> char *buf)
> @@ -274,6 +293,7 @@ static struct attribute *profile_vf_attrs[] = {
> &xe_sriov_vf_attr_exec_quantum_ms.attr,
> &xe_sriov_vf_attr_preempt_timeout_us.attr,
> &xe_sriov_vf_attr_sched_priority.attr,
> + &xe_sriov_vf_attr_vram_quota.attr,
> NULL
> };
>
> @@ -286,6 +306,13 @@ static umode_t profile_vf_attr_is_visible(struct kobject *kobj,
> !sched_priority_change_allowed(vkobj->vfid))
> return attr->mode & 0444;
>
> + if (attr == &xe_sriov_vf_attr_vram_quota.attr) {
> + if (!IS_DGFX(vkobj->xe) || vkobj->vfid == PFID)
> + return 0;
> + if (!xe_device_has_lmtt(vkobj->xe))
> + return attr->mode & 0444;
> + }
> +
> return attr->mode;
> }
>
> --
> 2.47.1
>
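
For reference, both .is_visible hunks above follow the usual sysfs
convention: returning 0 hides the attribute entirely, returning
mode & 0444 degrades it to read-only, and returning mode keeps it as
declared. A quick arithmetic check of the masking (0644 is an example
declared mode, not taken from the patch):

```shell
mode=$((0644))                                # declared mode (rw-r--r--)
printf 'full:      %o\n' "$mode"              # 644: DGFX VF with LMTT
printf 'read-only: %o\n' $(( mode & 0444 ))   # 444: no LMTT on this device
printf 'hidden:    %o\n' 0                    #   0: PF node or non-DGFX
```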