From: "Piotr Piórkowski" <piotr.piorkowski@intel.com>
To: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: <intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH 1/9] drm/xe/pf: Add locked variants of VRAM configuration functions
Date: Mon, 16 Feb 2026 15:37:22 +0100 [thread overview]
Message-ID: <20260216143722.n3m3fzrso5zadkdd@intel.com> (raw)
In-Reply-To: <20260215203323.595-2-michal.wajdeczko@intel.com>
Michal Wajdeczko <michal.wajdeczko@intel.com> wrote on Sun [2026-Feb-15 21:33:15 +0100]:
> We already have a few functions to configure LMEM (aka VRAM), but they
> all take the master mutex. Split them and expose locked variants
> to allow use by a caller who already holds this mutex.
>
> Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
> ---
> drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c | 77 ++++++++++++++++++++--
> drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h | 4 ++
> 2 files changed, 74 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> index 23601ce79348..23af49dc1bfa 100644
> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
> @@ -1754,7 +1754,7 @@ int xe_gt_sriov_pf_config_set_lmem(struct xe_gt *gt, unsigned int vfid, u64 size
> }
>
> /**
> - * xe_gt_sriov_pf_config_bulk_set_lmem - Provision many VFs with LMEM.
> + * xe_gt_sriov_pf_config_bulk_set_lmem_locked() - Provision many VFs with LMEM.
> * @gt: the &xe_gt (can't be media)
> * @vfid: starting VF identifier (can't be 0)
> * @num_vfs: number of VFs to provision
> @@ -1764,31 +1764,94 @@ int xe_gt_sriov_pf_config_set_lmem(struct xe_gt *gt, unsigned int vfid, u64 size
> *
> * Return: 0 on success or a negative error code on failure.
> */
> -int xe_gt_sriov_pf_config_bulk_set_lmem(struct xe_gt *gt, unsigned int vfid,
> - unsigned int num_vfs, u64 size)
> +int xe_gt_sriov_pf_config_bulk_set_lmem_locked(struct xe_gt *gt, unsigned int vfid,
> + unsigned int num_vfs, u64 size)
> {
> unsigned int n;
> int err = 0;
>
> - xe_gt_assert(gt, vfid);
> + lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
> + xe_gt_assert(gt, xe_device_has_lmtt(gt_to_xe(gt)));
> + xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
> xe_gt_assert(gt, xe_gt_is_main_type(gt));
> + xe_gt_assert(gt, vfid);
>
> if (!num_vfs)
> return 0;
>
> - mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
> for (n = vfid; n < vfid + num_vfs; n++) {
> err = pf_provision_vf_lmem(gt, n, size);
> if (err)
> break;
> }
> - mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));
>
> return pf_config_bulk_set_u64_done(gt, vfid, num_vfs, size,
> - xe_gt_sriov_pf_config_get_lmem,
> + pf_get_vf_config_lmem,
> "LMEM", n, err);
> }
>
> +/**
> + * xe_gt_sriov_pf_config_bulk_set_lmem() - Provision many VFs with LMEM.
> + * @gt: the &xe_gt (can't be media)
> + * @vfid: starting VF identifier (can't be 0)
> + * @num_vfs: number of VFs to provision
> + * @size: requested LMEM size
> + *
> + * This function can only be called on PF.
> + *
> + * Return: 0 on success or a negative error code on failure.
> + */
> +int xe_gt_sriov_pf_config_bulk_set_lmem(struct xe_gt *gt, unsigned int vfid,
> + unsigned int num_vfs, u64 size)
> +{
> + guard(mutex)(xe_gt_sriov_pf_master_mutex(gt));
> +
> + return xe_gt_sriov_pf_config_bulk_set_lmem_locked(gt, vfid, num_vfs, size);
> +}
> +
> +/**
> + * xe_gt_sriov_pf_config_get_lmem_locked() - Get VF's LMEM quota.
> + * @gt: the &xe_gt
> + * @vfid: the VF identifier (can't be 0 == PFID)
> + *
> + * This function can only be called on PF.
> + *
> + * Return: VF's LMEM quota.
> + */
> +u64 xe_gt_sriov_pf_config_get_lmem_locked(struct xe_gt *gt, unsigned int vfid)
> +{
> + lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
> + xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
> + xe_gt_assert(gt, vfid);
> +
> + return pf_get_vf_config_lmem(gt, vfid);
> +}
> +
> +/**
> + * xe_gt_sriov_pf_config_set_lmem_locked() - Provision VF with LMEM.
> + * @gt: the &xe_gt (can't be media)
> + * @vfid: the VF identifier (can't be 0 == PFID)
> + * @size: requested LMEM size
> + *
> + * This function can only be called on PF.
> + */
> +int xe_gt_sriov_pf_config_set_lmem_locked(struct xe_gt *gt, unsigned int vfid, u64 size)
> +{
> + int err;
> +
> + lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));
> + xe_gt_assert(gt, xe_device_has_lmtt(gt_to_xe(gt)));
> + xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
> + xe_gt_assert(gt, xe_gt_is_main_type(gt));
> + xe_gt_assert(gt, vfid);
> +
> + err = pf_provision_vf_lmem(gt, vfid, size);
> +
> + return pf_config_set_u64_done(gt, vfid, size,
> + pf_get_vf_config_lmem(gt, vfid),
> + "LMEM", err);
> +}
> +
> static struct xe_bo *pf_get_vf_config_lmem_obj(struct xe_gt *gt, unsigned int vfid)
> {
> struct xe_gt_sriov_config *config = pf_pick_vf_config(gt, vfid);
> diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
> index 3c6c8b6655af..4a004ecd6140 100644
> --- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
> +++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
> @@ -36,6 +36,10 @@ int xe_gt_sriov_pf_config_set_lmem(struct xe_gt *gt, unsigned int vfid, u64 size
> int xe_gt_sriov_pf_config_set_fair_lmem(struct xe_gt *gt, unsigned int vfid, unsigned int num_vfs);
> int xe_gt_sriov_pf_config_bulk_set_lmem(struct xe_gt *gt, unsigned int vfid, unsigned int num_vfs,
> u64 size);
> +u64 xe_gt_sriov_pf_config_get_lmem_locked(struct xe_gt *gt, unsigned int vfid);
> +int xe_gt_sriov_pf_config_set_lmem_locked(struct xe_gt *gt, unsigned int vfid, u64 size);
> +int xe_gt_sriov_pf_config_bulk_set_lmem_locked(struct xe_gt *gt, unsigned int vfid,
> + unsigned int num_vfs, u64 size);
> struct xe_bo *xe_gt_sriov_pf_config_get_lmem_obj(struct xe_gt *gt, unsigned int vfid);
>
> u32 xe_gt_sriov_pf_config_get_exec_quantum(struct xe_gt *gt, unsigned int vfid);
LGTM.
Reviewed-by: Piotr Piórkowski <piotr.piorkowski@intel.com>
> --
> 2.47.1
>
--