From: "Summers, Stuart" <stuart.summers@intel.com>
To: "intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>,
"Brost, Matthew" <matthew.brost@intel.com>
Cc: "Roper, Matthew D" <matthew.d.roper@intel.com>,
"lucas.demarchi@intel.com" <lucas.demarchi@intel.com>
Subject: Re: [PATCH v2 04/12] drm/xe: Add vm to exec queues association
Date: Fri, 12 Dec 2025 21:03:32 +0000
Message-ID: <0da94fd75261b3f00e54325f8063515923d10184.camel@intel.com>
In-Reply-To: <20251104195616.3339137-5-matthew.brost@intel.com>
On Tue, 2025-11-04 at 11:56 -0800, Matthew Brost wrote:
> Maintain a list of exec queues per vm which will be used by TLB
> invalidation code to do context-ID based tlb invalidations.
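For anyone following along, a rough sketch of how I'd expect the
invalidation side to consume this list; the walk and the helper name
are hypothetical, not from this series:

	/* Hypothetical consumer; send_ctx_tlb_inval() is made up. */
	static void vm_ctx_tlb_inval(struct xe_vm *vm, struct xe_gt *gt)
	{
		struct xe_exec_queue *q;

		down_read(&vm->exec_queues.lock);
		list_for_each_entry(q, &vm->exec_queues.list, vm_exec_queue_link)
			if (q->gt == gt)
				send_ctx_tlb_inval(q);
		up_read(&vm->exec_queues.lock);
	}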
>
> Signed-off-by: Nirmoy Das <nirmoy.das@intel.com>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
> drivers/gpu/drm/xe/xe_device.h | 7 ----
> drivers/gpu/drm/xe/xe_device_types.h | 7 ++++
> drivers/gpu/drm/xe/xe_exec_queue.c | 7 +++-
> drivers/gpu/drm/xe/xe_exec_queue_types.h | 3 ++
> drivers/gpu/drm/xe/xe_vm.c | 47 ++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_vm.h | 3 ++
> drivers/gpu/drm/xe/xe_vm_types.h | 13 +++++++
> 7 files changed, 79 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_device.h b/drivers/gpu/drm/xe/xe_device.h
> index 538202eebc16..764f24f4adfc 100644
> --- a/drivers/gpu/drm/xe/xe_device.h
> +++ b/drivers/gpu/drm/xe/xe_device.h
> @@ -62,13 +62,6 @@ static inline struct xe_tile *xe_device_get_root_tile(struct xe_device *xe)
> return &xe->tiles[0];
> }
>
> -/*
> - * Highest GT/tile count for any platform. Used only for memory allocation
> - * sizing. Any logic looping over GTs or mapping userspace GT IDs into GT
> - * structures should use the per-platform xe->info.max_gt_per_tile instead.
> - */
> -#define XE_MAX_GT_PER_TILE 2
> -
> static inline struct xe_gt *xe_device_get_gt(struct xe_device *xe, u8 gt_id)
> {
> struct xe_tile *tile;
> diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
> index af0ce275b032..145951dd95c9 100644
> --- a/drivers/gpu/drm/xe/xe_device_types.h
> +++ b/drivers/gpu/drm/xe/xe_device_types.h
> @@ -57,6 +57,13 @@ struct xe_vram_region;
> #define XE_GT1 1
> #define XE_MAX_TILES_PER_DEVICE (XE_GT1 + 1)
>
> +/*
> + * Highest GT/tile count for any platform. Used only for memory allocation
> + * sizing. Any logic looping over GTs or mapping userspace GT IDs into GT
> + * structures should use the per-platform xe->info.max_gt_per_tile instead.
> + */
> +#define XE_MAX_GT_PER_TILE 2
> +
> #define XE_MAX_ASID (BIT(20))
>
> #define IS_PLATFORM_STEP(_xe, _platform, min_step, max_step) \
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
> index 1b57d7c2cc94..49822baf5967 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue.c
> +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
> @@ -72,8 +72,10 @@ static void __xe_exec_queue_free(struct xe_exec_queue *q)
>
> if (xe_exec_queue_uses_pxp(q))
> xe_pxp_exec_queue_remove(gt_to_xe(q->gt)->pxp, q);
> - if (q->vm)
> + if (q->vm) {
> + xe_vm_remove_exec_queue(q->vm, q);
> xe_vm_put(q->vm);
> + }
>
> if (q->xef)
> xe_file_put(q->xef);
> @@ -143,6 +145,7 @@ static struct xe_exec_queue *__xe_exec_queue_alloc(struct xe_device *xe,
> q->ring_ops = gt->ring_ops[hwe->class];
> q->ops = gt->exec_queue_ops;
> INIT_LIST_HEAD(&q->lr.link);
> + INIT_LIST_HEAD(&q->vm_exec_queue_link);
> INIT_LIST_HEAD(&q->multi_gt_link);
> INIT_LIST_HEAD(&q->hw_engine_group_link);
> INIT_LIST_HEAD(&q->pxp.link);
> @@ -796,6 +799,8 @@ int xe_exec_queue_create_ioctl(struct drm_device *dev, void *data,
> }
>
> q->xef = xe_file_get(xef);
> + if (eci[0].engine_class != DRM_XE_ENGINE_CLASS_VM_BIND)
> + xe_vm_add_exec_queue(vm, q);
We discussed this offline, but because of potential memory corruption
in the register/deregister path, the plan is to keep the add/remove in
queue create/destroy for now. This means we'll get errors from the GuC
when register/deregister races with submission; that will be addressed
in a later cleanup. Since this feature is blocked behind a feature flag
for testing anyway, it shouldn't have any other adverse effects.
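To spell out the race we're accepting in the meantime (the
interleaving below is hypothetical, just to illustrate the failure
mode):

	/*
	 * CPU1: TLB invalidation samples vm->exec_queues, picks q
	 * CPU0: q destroyed -> xe_vm_remove_exec_queue() -> GuC
	 *       context deregister completes
	 * CPU1: context-based invalidation for q reaches the GuC
	 *       after the deregister -> GuC returns an error
	 */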
One other minor comment below...
>
> /* user id alloc must always be last in ioctl to prevent UAF */
> err = xa_alloc(&xef->exec_queue.xa, &id, q, xa_limit_32b,
> GFP_KERNEL);
> diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> index c8807268ec6c..a2281fcb55b1 100644
> --- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
> +++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> @@ -147,6 +147,9 @@ struct xe_exec_queue {
> struct xe_dep_scheduler *dep_scheduler;
> } tlb_inval[XE_EXEC_QUEUE_TLB_INVAL_COUNT];
>
> + /** @vm_exec_queue_link: Link to track exec queue within a VM's list of exec queues. */
> + struct list_head vm_exec_queue_link;
> +
> /** @pxp: PXP info tracking */
> struct {
> /** @pxp.type: PXP session type used by this queue */
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 84f4c8f1be33..cccdd931dd5e 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -1507,8 +1507,20 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags, struct xe_file *xef)
> INIT_WORK(&vm->destroy_work, vm_destroy_work_func);
>
> INIT_LIST_HEAD(&vm->preempt.exec_queues);
> + INIT_LIST_HEAD(&vm->exec_queues.list);
> vm->preempt.min_run_period_ms = 10; /* FIXME: Wire up to uAPI */
>
> + init_rwsem(&vm->exec_queues.lock);
> + if (IS_ENABLED(CONFIG_PROVE_LOCKING)) {
> + fs_reclaim_acquire(GFP_KERNEL);
> + might_lock(&vm->exec_queues.lock);
> + fs_reclaim_release(GFP_KERNEL);
> +
> + down_read(&vm->exec_queues.lock);
> + might_lock(&xe_root_mmio_gt(xe)->uc.guc.ct.lock);
> + up_read(&vm->exec_queues.lock);
> + }
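Side note for reviewers less familiar with the idiom: the
CONFIG_PROVE_LOCKING block above primes lockdep once at VM creation.
It records that exec_queues.lock can be taken under fs_reclaim (so no
GFP_KERNEL allocations while holding it) and that the GuC CT lock can
be taken under exec_queues.lock, making any inversion splat
immediately rather than only when a rare path actually runs. The
generic shape of the pattern, with placeholder lock names:

	fs_reclaim_acquire(GFP_KERNEL);	/* pretend we are inside reclaim */
	might_lock(&outer_lock);	/* outer_lock may be taken under reclaim */
	fs_reclaim_release(GFP_KERNEL);

	down_read(&outer_lock);
	might_lock(&inner_lock);	/* inner_lock may be taken under outer_lock */
	up_read(&outer_lock);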
> +
> for_each_tile(tile, xe, id)
> xe_range_fence_tree_init(&vm->rftree[id]);
>
> @@ -4387,3 +4399,38 @@ int xe_vm_alloc_cpu_addr_mirror_vma(struct xe_vm *vm, uint64_t start, uint64_t r
>
> return xe_vm_alloc_vma(vm, &map_req, false);
> }
> +
> +/**
> + * xe_vm_add_exec_queue() - Add exec queue to VM
> + * @vm: The VM.
> + * @q: The exec_queue
> + */
> +void xe_vm_add_exec_queue(struct xe_vm *vm, struct xe_exec_queue *q)
> +{
> + /* User VMs and queues only */
Why? Is the expectation that we always do a full invalidation for
kernel VMs? Do we want the potential performance impact of that,
particularly for the migration queues?
I get that in the current version we do this as an explicit result of
a user queue create, but I don't otherwise see why we need these
restrictions.
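Just to make the concern concrete, a hypothetical invalidation-side
decision (both helpers are made up):

	if (vm_has_exec_queues_on_gt(vm, gt))	/* only user VMs populate the list */
		send_ctx_based_inval(vm, gt);
	else
		send_full_asid_inval(vm, gt);	/* kernel/migrate VMs always land here */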
Thanks,
Stuart
> + xe_assert(vm->xe, !(q->flags & EXEC_QUEUE_FLAG_KERNEL));
> + xe_assert(vm->xe, !(q->flags & EXEC_QUEUE_FLAG_PERMANENT));
> + xe_assert(vm->xe, !(q->flags & EXEC_QUEUE_FLAG_VM));
> + xe_assert(vm->xe, !(q->flags & EXEC_QUEUE_FLAG_MIGRATE));
> + xe_assert(vm->xe, vm->xef);
> +
> + down_write(&vm->exec_queues.lock);
> + list_add(&q->vm_exec_queue_link, &vm->exec_queues.list);
> + ++vm->exec_queues.count[q->gt->info.id];
> + up_write(&vm->exec_queues.lock);
> +}
> +
> +/**
> + * xe_vm_remove_exec_queue() - Remove exec queue from VM
> + * @vm: The VM.
> + * @q: The exec_queue
> + */
> +void xe_vm_remove_exec_queue(struct xe_vm *vm, struct xe_exec_queue *q)
> +{
> + down_write(&vm->exec_queues.lock);
> + if (!list_empty(&q->vm_exec_queue_link)) {
> + list_del(&q->vm_exec_queue_link);
> + --vm->exec_queues.count[q->gt->info.id];
> + }
> + up_write(&vm->exec_queues.lock);
> +}
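One thing worth calling out for other reviewers: the list_empty()
check above is what makes the unconditional call from
__xe_exec_queue_free() safe for queues that were never added, e.g.
VM_BIND queues, whose link stays as initialized by INIT_LIST_HEAD():

	/* VM_BIND queue: never passed to xe_vm_add_exec_queue(), so
	 * the link is still empty and remove is a no-op. */
	xe_vm_remove_exec_queue(vm, q);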
> diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
> index ef8a5019574e..5f3341ef99d2 100644
> --- a/drivers/gpu/drm/xe/xe_vm.h
> +++ b/drivers/gpu/drm/xe/xe_vm.h
> @@ -284,6 +284,9 @@ static inline struct dma_resv *xe_vm_resv(struct xe_vm *vm)
>
> void xe_vm_kill(struct xe_vm *vm, bool unlocked);
>
> +void xe_vm_add_exec_queue(struct xe_vm *vm, struct xe_exec_queue *q);
> +void xe_vm_remove_exec_queue(struct xe_vm *vm, struct xe_exec_queue *q);
> +
> /**
> * xe_vm_assert_held(vm) - Assert that the vm's reservation object is held.
> * @vm: The vm
> diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
> index 830ed7b05c27..180b48d62480 100644
> --- a/drivers/gpu/drm/xe/xe_vm_types.h
> +++ b/drivers/gpu/drm/xe/xe_vm_types.h
> @@ -290,6 +290,19 @@ struct xe_vm {
> struct list_head pm_activate_link;
> } preempt;
>
> + /** @exec_queues: Manages list of exec queues attached to this VM, protected by lock. */
> + struct {
> + /** @exec_queues.list: list of exec queues attached to this VM */
> + struct list_head list;
> + /**
> + * @exec_queues.count: count of exec queues attached to this VM,
> + * per GT
> + */
> + int count[XE_MAX_TILES_PER_DEVICE * XE_MAX_GT_PER_TILE];
> + /** @exec_queues.lock: lock to protect exec_queues list */
> + struct rw_semaphore lock;
> + } exec_queues;
> +
> /** @um: unified memory state */
> struct {
> /** @asid: address space ID, unique to each VM */