From: Matthew Brost <matthew.brost@intel.com>
To: Matthew Auld <matthew.auld@intel.com>
Cc: <intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH 2/3] drm/xe/vm: drop vm->destroy_work
Date: Fri, 12 Apr 2024 22:32:38 +0000	[thread overview]
Message-ID: <Zhm2huZGAm+pr9Z9@DUT025-TGLU.fm.intel.com> (raw)
In-Reply-To: <20240412113144.259426-5-matthew.auld@intel.com>

On Fri, Apr 12, 2024 at 12:31:46PM +0100, Matthew Auld wrote:
> Now that we no longer grab the usm.lock mutex (which might sleep) it
> looks like it should be safe to directly perform xe_vm_free when vm
> refcount reaches zero, instead of punting that off to some worker.
> 
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>

This does look correct in the current code base / series. However, in [1]
I do suggest deferring the 'close' part of xe_vm_close_and_put to the
final put if the device is wedged. If we do that, we might need the
worker again? I guess we can figure that out if / when we decide to take
my suggestion.

With that, this looks like a good cleanup:
Reviewed-by: Matthew Brost <matthew.brost@intel.com>

[1] https://patchwork.freedesktop.org/patch/588557/?series=132232&rev=1

> ---
>  drivers/gpu/drm/xe/xe_vm.c       | 17 ++---------------
>  drivers/gpu/drm/xe/xe_vm_types.h |  7 -------
>  2 files changed, 2 insertions(+), 22 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index c5c26b3d1b76..300d166f412e 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -1279,8 +1279,6 @@ static const struct xe_pt_ops xelp_pt_ops = {
>  	.pde_encode_bo = xelp_pde_encode_bo,
>  };
>  
> -static void vm_destroy_work_func(struct work_struct *w);
> -
>  /**
>   * xe_vm_create_scratch() - Setup a scratch memory pagetable tree for the
>   * given tile and vm.
> @@ -1360,8 +1358,6 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
>  	init_rwsem(&vm->userptr.notifier_lock);
>  	spin_lock_init(&vm->userptr.invalidated_lock);
>  
> -	INIT_WORK(&vm->destroy_work, vm_destroy_work_func);
> -
>  	INIT_LIST_HEAD(&vm->preempt.exec_queues);
>  	vm->preempt.min_run_period_ms = 10;	/* FIXME: Wire up to uAPI */
>  
> @@ -1599,10 +1595,9 @@ void xe_vm_close_and_put(struct xe_vm *vm)
>  	xe_vm_put(vm);
>  }
>  
> -static void vm_destroy_work_func(struct work_struct *w)
> +static void xe_vm_free(struct drm_gpuvm *gpuvm)
>  {
> -	struct xe_vm *vm =
> -		container_of(w, struct xe_vm, destroy_work);
> +	struct xe_vm *vm = container_of(gpuvm, struct xe_vm, gpuvm);
>  	struct xe_device *xe = vm->xe;
>  	struct xe_tile *tile;
>  	u8 id;
> @@ -1622,14 +1617,6 @@ static void vm_destroy_work_func(struct work_struct *w)
>  	kfree(vm);
>  }
>  
> -static void xe_vm_free(struct drm_gpuvm *gpuvm)
> -{
> -	struct xe_vm *vm = container_of(gpuvm, struct xe_vm, gpuvm);
> -
> -	/* To destroy the VM we need to be able to sleep */
> -	queue_work(system_unbound_wq, &vm->destroy_work);
> -}
> -
>  struct xe_vm *xe_vm_lookup(struct xe_file *xef, u32 id)
>  {
>  	struct xe_vm *vm;
> diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
> index badf3945083d..7570c2c6c463 100644
> --- a/drivers/gpu/drm/xe/xe_vm_types.h
> +++ b/drivers/gpu/drm/xe/xe_vm_types.h
> @@ -177,13 +177,6 @@ struct xe_vm {
>  	 */
>  	struct list_head rebind_list;
>  
> -	/**
> -	 * @destroy_work: worker to destroy VM, needed as a dma_fence signaling
> -	 * from an irq context can be last put and the destroy needs to be able
> -	 * to sleep.
> -	 */
> -	struct work_struct destroy_work;
> -
>  	/**
>  	 * @rftree: range fence tree to track updates to page table structure.
>  	 * Used to implement conflict tracking between independent bind engines.
> -- 
> 2.44.0
> 

Thread overview: 16+ messages
2024-04-12 11:31 [PATCH 1/3] drm/xe/vm: prevent UAF with asid based lookup Matthew Auld
2024-04-12 11:31 ` [PATCH 2/3] drm/xe/vm: drop vm->destroy_work Matthew Auld
2024-04-12 22:32   ` Matthew Brost [this message]
2024-04-12 11:31 ` [PATCH 3/3] drm/xe/vm: don't include xe_gt.h Matthew Auld
2024-04-12 22:34   ` Matthew Brost
2024-04-12 14:06 ` [PATCH 1/3] drm/xe/vm: prevent UAF with asid based lookup Lucas De Marchi
2024-04-12 14:42   ` Matthew Auld
2024-04-12 22:26     ` Matthew Brost
2024-04-15  8:48 ` ✓ CI.Patch_applied: success for series starting with [1/3] " Patchwork
2024-04-15  8:48 ` ✓ CI.checkpatch: " Patchwork
2024-04-15  8:49 ` ✓ CI.KUnit: " Patchwork
2024-04-15  9:08 ` ✓ CI.Build: " Patchwork
2024-04-15  9:11 ` ✓ CI.Hooks: " Patchwork
2024-04-15  9:12 ` ✓ CI.checksparse: " Patchwork
2024-04-15  9:51 ` ✓ CI.BAT: " Patchwork
2024-04-15 13:13 ` ✗ CI.FULL: failure " Patchwork
