From: "Piotr Piórkowski" <piotr.piorkowski@intel.com>
To: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: <intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH 2/9] drm/xe/pf: Add functions for VRAM provisioning
Date: Mon, 16 Feb 2026 16:11:05 +0100 [thread overview]
Message-ID: <20260216151105.y7kdzvbvtu3w4szy@intel.com> (raw)
In-Reply-To: <20260216150220.zhlria7kynqsjwlu@intel.com>
Piotr Piórkowski <piotr.piorkowski@intel.com> wrote on Mon [2026-Feb-16 16:02:20 +0100]:
> Michal Wajdeczko <michal.wajdeczko@intel.com> wrote on Sun [2026-Feb-15 21:33:16 +0100]:
> > We already have functions to configure VF LMEM (aka VRAM) on the
> > tile/GT level, used by the auto-provisioning and debugfs, but we
> > also need functions that will work on the device level that will
> > configure VRAM on all tiles at once.
> >
> > We will use these new functions in upcoming patch.
> >
> > Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
> > ---
> > drivers/gpu/drm/xe/xe_sriov_pf_provision.c | 108 +++++++++++++++++++++
> > drivers/gpu/drm/xe/xe_sriov_pf_provision.h | 4 +
> > 2 files changed, 112 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_provision.c b/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
> > index 01470c42e8a7..e7187d03fe1b 100644
> > --- a/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
> > +++ b/drivers/gpu/drm/xe/xe_sriov_pf_provision.c
> > @@ -436,3 +436,111 @@ int xe_sriov_pf_provision_query_vf_priority(struct xe_device *xe, unsigned int v
> >
> > return !count ? -ENODATA : 0;
> > }
> > +
> > +static u64 vram_alignment(struct xe_device *xe)
> > +{
> > + /* this might be platform dependent */
> > + return SZ_2M;
> > +}
> > +
>
> In xe_gt_sriov_pf_config.c we already have an almost identical function:
> static u64 pf_get_lmem_alignment(struct xe_gt *gt)
> {
> /* this might be platform dependent */
> return SZ_2M;
> }
>
> In my opinion it is not worth duplicating this; better to have a single common helper instead.
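To make the suggestion concrete, a shared helper could look roughly like this (compilable userspace sketch; the helper name, its placement, and the stand-in definitions are suggestions only, not existing code):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t u64;

/* Userspace stand-ins for kernel definitions, for illustration only */
#define SZ_2M (2ULL * 1024 * 1024)
struct xe_device;

/*
 * Hypothetical common helper that both xe_gt_sriov_pf_config.c and
 * xe_sriov_pf_provision.c could call instead of duplicating the
 * constant; the name is a suggestion only.
 */
static u64 xe_sriov_lmem_alignment(const struct xe_device *xe)
{
	(void)xe; /* parameter kept since this might become platform dependent */
	return SZ_2M;
}
```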
>
>
> > +static u64 vram_per_tile(struct xe_tile *tile, u64 total)
> > +{
> > + struct xe_device *xe = tile->xe;
> > + unsigned int tcount = xe->info.tile_count;
> > + u64 alignment = vram_alignment(xe);
> > +
> > + total = round_up(total, tcount * alignment);
> > + return div_u64(total, tcount);
> > +}
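As a side note for readers: rounding the total up to a multiple of tile_count * alignment guarantees that each tile's share is itself aligned. A standalone sketch of the same arithmetic (with userspace stand-ins for round_up() and div_u64()):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t u64;

#define SZ_2M (2ULL * 1024 * 1024)

/* Userspace stand-in for the kernel's round_up(); works for any step */
static u64 round_up_u64(u64 x, u64 step)
{
	return (x + step - 1) / step * step;
}

/*
 * Same arithmetic as vram_per_tile(): round the total up to a multiple
 * of tile_count * alignment, then split it evenly, so the per-tile
 * share is always a multiple of the alignment.
 */
static u64 vram_per_tile_sketch(u64 total, unsigned int tile_count, u64 alignment)
{
	total = round_up_u64(total, (u64)tile_count * alignment);
	return total / tile_count;
}
```

E.g. on a two-tile device a 5 MiB request rounds up to 8 MiB total, i.e. an aligned 4 MiB per tile.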
> > +
> > +/**
> > + * xe_sriov_pf_provision_bulk_apply_vram() - Change VRAM provisioning for all VFs.
> > + * @xe: the PF &xe_device
> > + * @size: the VRAM size in [bytes] to set
> > + *
> > + * Change all VFs VRAM (LMEM) provisioning on all tiles.
> > + *
> > + * This function can only be called on PF.
> > + *
> > + * Return: 0 on success or a negative error code on failure.
> > + */
> > +int xe_sriov_pf_provision_bulk_apply_vram(struct xe_device *xe, u64 size)
> > +{
> > + unsigned int num_vfs = xe_sriov_pf_get_totalvfs(xe);
> > + struct xe_tile *tile;
> > + unsigned int id;
> > + int result = 0;
> > + int err;
> > +
Reading the next patch, I realized that we still need a check for xe_device_has_lmtt() in here.
It cannot be handled trivially at the sysfs definition level, as the attribute will remain
visible anyway on DGFX.
The same applies to xe_sriov_pf_provision_apply_vf_vram().
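What I have in mind is an early bail-out at the top of these functions, roughly as below (compilable sketch with userspace stand-ins; the -EOPNOTSUPP error code is just a guess, whatever fits best):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Minimal stand-ins for illustration; the real xe_device_has_lmtt()
 * operates on the actual struct xe_device from xe_device.h */
struct xe_device { bool has_lmtt; };

static bool xe_device_has_lmtt(const struct xe_device *xe)
{
	return xe->has_lmtt;
}

/* Sketch of the suggested guard; the error code is a placeholder choice */
static int provision_vram_check(const struct xe_device *xe)
{
	if (!xe_device_has_lmtt(xe))
		return -EOPNOTSUPP;
	return 0;
}
```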
> > + guard(mutex)(xe_sriov_pf_master_mutex(xe));
> > +
> > + for_each_tile(tile, xe, id) {
> > + err = xe_gt_sriov_pf_config_bulk_set_lmem_locked(tile->primary_gt,
> > + VFID(1), num_vfs,
> > + vram_per_tile(tile, size));
> > + result = result ?: err;
> > + }
> > +
> > + return result;
> > +}
> > +
> > +/**
> > + * xe_sriov_pf_provision_apply_vf_vram() - Change single VF VRAM allocatio.
> typo
>
> > + * @xe: the PF &xe_device
> > + * @vfid: the VF identifier (can't be 0 == PFID)
> > + * @size: VRAM size to set
> > + *
> > + * Change VF's VRAM provisioning on all tiles/GTs.
> > + *
> > + * This function can only be called on PF.
> > + *
> > + * Return: 0 on success or a negative error code on failure.
> > + */
> > +int xe_sriov_pf_provision_apply_vf_vram(struct xe_device *xe, unsigned int vfid, u64 size)
> > +{
> > + struct xe_tile *tile;
> > + unsigned int id;
> > + int result = 0;
> > + int err;
> > +
> > + xe_assert(xe, vfid);
> > +
> > + guard(mutex)(xe_sriov_pf_master_mutex(xe));
> > +
> > + for_each_tile(tile, xe, id) {
> > + err = xe_gt_sriov_pf_config_set_lmem_locked(tile->primary_gt, vfid,
> > + vram_per_tile(tile, size));
> > + result = result ?: err;
> > + }
> > +
> > + return result;
> > +}
> > +
> > +/**
> > + * xe_sriov_pf_provision_query_vf_vram() - Query VF's VRAM allocation.
> > + * @xe: the PF &xe_device
> > + * @vfid: the VF identifier (can't be 0 == PFID)
> > + * @prio: placeholder for the returned VRAM size
> > + *
> > + * Query VF's VRAM provisioning from all tiles/GTs.
> > + *
> > + * This function can only be called on PF.
> > + *
> > + * Return: 0 on success or a negative error code on failure.
> > + */
> > +int xe_sriov_pf_provision_query_vf_vram(struct xe_device *xe, unsigned int vfid, u64 *size)
> > +{
> > + struct xe_tile *tile;
> > + unsigned int id;
> > + u64 total = 0;
> NIT: The variable total is unnecessary
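i.e. the loop could accumulate straight into the output parameter, something along these lines (standalone sketch only; the array and the demo helper stand in for the for_each_tile() iteration and the per-GT getter):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t u64;

/*
 * Sketch of the query loop without the local accumulator: zero the
 * output first, then add each tile's share directly into *size.
 */
static int query_vf_vram_sketch(u64 *size, const u64 *per_tile, unsigned int tiles)
{
	unsigned int id;

	*size = 0;
	for (id = 0; id < tiles; id++)
		*size += per_tile[id];

	return 0;
}

/* Demo driver: two made-up per-tile sizes, returns the accumulated total */
static u64 query_demo(void)
{
	const u64 per_tile[2] = { 256, 512 };
	u64 size;

	query_vf_vram_sketch(&size, per_tile, 2);
	return size;
}
```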
>
> > +
> > + xe_assert(xe, vfid);
> > +
> > + guard(mutex)(xe_sriov_pf_master_mutex(xe));
> > +
> > + for_each_tile(tile, xe, id)
> > + total += xe_gt_sriov_pf_config_get_lmem_locked(tile->primary_gt, vfid);
> > +
> > + *size = total;
> > + return 0;
> > +}
> > diff --git a/drivers/gpu/drm/xe/xe_sriov_pf_provision.h b/drivers/gpu/drm/xe/xe_sriov_pf_provision.h
> > index bccf23d51396..f26f49539697 100644
> > --- a/drivers/gpu/drm/xe/xe_sriov_pf_provision.h
> > +++ b/drivers/gpu/drm/xe/xe_sriov_pf_provision.h
> > @@ -24,6 +24,10 @@ int xe_sriov_pf_provision_bulk_apply_priority(struct xe_device *xe, u32 prio);
> > int xe_sriov_pf_provision_apply_vf_priority(struct xe_device *xe, unsigned int vfid, u32 prio);
> > int xe_sriov_pf_provision_query_vf_priority(struct xe_device *xe, unsigned int vfid, u32 *prio);
> >
> > +int xe_sriov_pf_provision_bulk_apply_vram(struct xe_device *xe, u64 size);
> > +int xe_sriov_pf_provision_apply_vf_vram(struct xe_device *xe, unsigned int vfid, u64 size);
> > +int xe_sriov_pf_provision_query_vf_vram(struct xe_device *xe, unsigned int vfid, u64 *size);
> > +
> > int xe_sriov_pf_provision_vfs(struct xe_device *xe, unsigned int num_vfs);
> > int xe_sriov_pf_unprovision_vfs(struct xe_device *xe, unsigned int num_vfs);
> >
> The code is functionally OK.
> With the minor comments addressed and the vram_alignment duplication clarified:
> Reviewed-by: Piotr Piórkowski <piotr.piorkowski@intel.com>
> > --
> > 2.47.1
> >
>
> --
--
Thread overview: 21+ messages
2026-02-15 20:33 [PATCH 0/9] drm/xe/pf: Allow to change VFs VRAM quota using sysfs Michal Wajdeczko
2026-02-15 20:33 ` [PATCH 1/9] drm/xe/pf: Add locked variants of VRAM configuration functions Michal Wajdeczko
2026-02-16 14:37 ` Piotr Piórkowski
2026-02-15 20:33 ` [PATCH 2/9] drm/xe/pf: Add functions for VRAM provisioning Michal Wajdeczko
2026-02-16 15:02 ` Piotr Piórkowski
2026-02-16 15:11 ` Piotr Piórkowski [this message]
2026-02-15 20:33 ` [PATCH 3/9] drm/xe/pf: Allow to change VFs VRAM quota using sysfs Michal Wajdeczko
2026-02-16 15:29 ` Piotr Piórkowski
2026-02-18 21:07 ` Rodrigo Vivi
2026-02-15 20:33 ` [PATCH 4/9] drm/xe/pf: Use migration-friendly VRAM auto-provisioning Michal Wajdeczko
2026-02-16 16:14 ` Piotr Piórkowski
2026-02-15 20:33 ` [PATCH 5/9] drm/xe/tests: Add KUnit tests for new VRAM fair provisioning Michal Wajdeczko
2026-02-16 16:23 ` Piotr Piórkowski
2026-02-15 20:33 ` [PATCH 6/9] drm/xe/pf: Don't check for empty config Michal Wajdeczko
2026-02-16 16:27 ` Piotr Piórkowski
2026-02-15 20:33 ` [PATCH 7/9] drm/xe/pf: Prefer guard(mutex) when doing fair LMEM provisioning Michal Wajdeczko
2026-02-16 16:36 ` Piotr Piórkowski
2026-02-15 20:33 ` [PATCH 8/9] drm/xe/pf: Skip VRAM auto-provisioning if already provisioned Michal Wajdeczko
2026-02-16 16:59 ` Piotr Piórkowski
2026-02-15 20:33 ` [PATCH 9/9] drm/xe/pf: Add documentation for vram_quota Michal Wajdeczko
2026-02-16 17:04 ` Piotr Piórkowski