Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: Matthew Auld <matthew.auld@intel.com>
Cc: <intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH 1/2] Revert "drm/xe/vm: drop vm->destroy_work"
Date: Wed, 24 Apr 2024 03:44:29 +0000	[thread overview]
Message-ID: <ZiiAHU/Ra5yeL7Ii@DUT025-TGLU.fm.intel.com> (raw)
In-Reply-To: <20240423074721.119633-3-matthew.auld@intel.com>

On Tue, Apr 23, 2024 at 08:47:22AM +0100, Matthew Auld wrote:
> This reverts commit 5b259c0d1d3caa6efc66c2b856840e68993f814e.
> 
> Cleanup here is good, however we need to able to flush a worker during
> vm destruction which might involve sleeping, so bring back the worker.
> 
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> Cc: Matthew Brost <matthew.brost@intel.com>

I guess the alternative is a lock plus an enable variable around
queuing of the rebind worker? I'd prefer leaving the destroy worker
intact over adding a new lock. Also there is a good chance that at some
point in the future we will need to sleep again on VM destroy and will
need the destroy worker anyway.
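
For reference, a rough userspace sketch of that alternative (a lock plus
an enable flag guarding re-queuing). This is not the kernel API; the
names and the pthread mutex standing in for a spinlock are illustrative
only:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Illustrative model: destroy disables the flag under the lock, so any
 * racing attempt to queue the rebind worker afterwards is refused.
 * In the kernel this would wrap queue_work() on the rebind worker. */
struct vm_model {
	pthread_mutex_t lock;
	bool rebind_enabled;
	int queued;		/* stands in for queue_work() side effect */
};

static bool vm_queue_rebind(struct vm_model *vm)
{
	bool queued = false;

	pthread_mutex_lock(&vm->lock);
	if (vm->rebind_enabled) {
		vm->queued++;
		queued = true;
	}
	pthread_mutex_unlock(&vm->lock);
	return queued;
}

static void vm_destroy_begin(struct vm_model *vm)
{
	/* Disable further queuing before tearing the VM down; a queuer
	 * that took the lock first has already been counted. */
	pthread_mutex_lock(&vm->lock);
	vm->rebind_enabled = false;
	pthread_mutex_unlock(&vm->lock);
}
```

The cost is a new lock and flag in struct xe_vm just to close the race,
which is what makes keeping the existing destroy worker the simpler
option.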

With that:
Reviewed-by: Matthew Brost <matthew.brost@intel.com>

> ---
>  drivers/gpu/drm/xe/xe_vm.c       | 17 +++++++++++++++--
>  drivers/gpu/drm/xe/xe_vm_types.h |  7 +++++++
>  2 files changed, 22 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 85d6f359142d..2ba7c920a8af 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -1178,6 +1178,8 @@ static const struct xe_pt_ops xelp_pt_ops = {
>  	.pde_encode_bo = xelp_pde_encode_bo,
>  };
>  
> +static void vm_destroy_work_func(struct work_struct *w);
> +
>  /**
>   * xe_vm_create_scratch() - Setup a scratch memory pagetable tree for the
>   * given tile and vm.
> @@ -1257,6 +1259,8 @@ struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)
>  	init_rwsem(&vm->userptr.notifier_lock);
>  	spin_lock_init(&vm->userptr.invalidated_lock);
>  
> +	INIT_WORK(&vm->destroy_work, vm_destroy_work_func);
> +
>  	INIT_LIST_HEAD(&vm->preempt.exec_queues);
>  	vm->preempt.min_run_period_ms = 10;	/* FIXME: Wire up to uAPI */
>  
> @@ -1494,9 +1498,10 @@ void xe_vm_close_and_put(struct xe_vm *vm)
>  	xe_vm_put(vm);
>  }
>  
> -static void xe_vm_free(struct drm_gpuvm *gpuvm)
> +static void vm_destroy_work_func(struct work_struct *w)
>  {
> -	struct xe_vm *vm = container_of(gpuvm, struct xe_vm, gpuvm);
> +	struct xe_vm *vm =
> +		container_of(w, struct xe_vm, destroy_work);
>  	struct xe_device *xe = vm->xe;
>  	struct xe_tile *tile;
>  	u8 id;
> @@ -1516,6 +1521,14 @@ static void xe_vm_free(struct drm_gpuvm *gpuvm)
>  	kfree(vm);
>  }
>  
> +static void xe_vm_free(struct drm_gpuvm *gpuvm)
> +{
> +	struct xe_vm *vm = container_of(gpuvm, struct xe_vm, gpuvm);
> +
> +	/* To destroy the VM we need to be able to sleep */
> +	queue_work(system_unbound_wq, &vm->destroy_work);
> +}
> +
>  struct xe_vm *xe_vm_lookup(struct xe_file *xef, u32 id)
>  {
>  	struct xe_vm *vm;
> diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
> index 7570c2c6c463..badf3945083d 100644
> --- a/drivers/gpu/drm/xe/xe_vm_types.h
> +++ b/drivers/gpu/drm/xe/xe_vm_types.h
> @@ -177,6 +177,13 @@ struct xe_vm {
>  	 */
>  	struct list_head rebind_list;
>  
> +	/**
> +	 * @destroy_work: worker to destroy VM, needed as a dma_fence signaling
> +	 * from an irq context can be last put and the destroy needs to be able
> +	 * to sleep.
> +	 */
> +	struct work_struct destroy_work;
> +
>  	/**
>  	 * @rftree: range fence tree to track updates to page table structure.
>  	 * Used to implement conflict tracking between independent bind engines.
> -- 
> 2.44.0
> 
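The shape of the deferred-free pattern the patch restores, modeled in
plain userspace C (names and the one-slot "workqueue" are made up for
the sketch; the real code uses queue_work() on system_unbound_wq): the
final put can arrive from dma_fence signaling context where sleeping is
forbidden, so the free callback only queues work, and the worker does
the sleepable teardown in process context.

```c
#include <assert.h>
#include <stddef.h>

typedef void (*work_fn)(void *arg);

struct work_item { work_fn fn; void *arg; };

static struct work_item pending;	/* one-slot "workqueue" */

/* "irq context": may not sleep, so only record the work item */
static void queue_work_model(work_fn fn, void *arg)
{
	pending.fn = fn;
	pending.arg = arg;
}

/* process context: running queued work here may sleep */
static void run_worker(void)
{
	if (pending.fn)
		pending.fn(pending.arg);
	pending.fn = NULL;
}

struct vm { int freed; };

static void vm_destroy_work(void *arg)
{
	struct vm *vm = arg;
	vm->freed = 1;			/* sleepable cleanup goes here */
}

/* models xe_vm_free(): called from a context that cannot sleep */
static void vm_free_cb(struct vm *vm)
{
	queue_work_model(vm_destroy_work, vm);
}
```

The point of the split is visible in the model: nothing is freed at the
call site, only once the worker runs.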
