Intel-XE Archive on lore.kernel.org
From: Rodrigo Vivi <rodrigo.vivi@intel.com>
To: Matthew Brost <matthew.brost@intel.com>
Cc: intel-xe@lists.freedesktop.org
Subject: Re: [Intel-xe] [PATCH 4/5] drm/xe: Combine destroy_cb and destroy_work in xe_vma into union
Date: Fri, 14 Jul 2023 10:28:04 -0400	[thread overview]
Message-ID: <ZLFbdCsfJ80M7xJH@intel.com> (raw)
In-Reply-To: <ZLDGxJsRmdEKrCnZ@DUT025-TGLU>

On Fri, Jul 14, 2023 at 03:53:40AM +0000, Matthew Brost wrote:
> On Thu, Jul 13, 2023 at 04:23:43PM -0400, Rodrigo Vivi wrote:
> > On Tue, Jul 11, 2023 at 02:27:47PM -0700, Matthew Brost wrote:
> > > The callback kicks the worker, so their execution is mutually
> > > exclusive; combining them saves a bit of space in xe_vma.
> > 
> > could you please open up a bit on why they are mutually exclusive?!
> > 
> 
> Not sure how else to word this. The callback function (below) queues the
> worker; at that point the callback argument is safe to clobber.
> 
> 1047 static void vma_destroy_cb(struct dma_fence *fence,
> 1048                            struct dma_fence_cb *cb)
> 1049 {
> 1050         struct xe_vma *vma = container_of(cb, struct xe_vma, destroy_cb);
> 1051
> 1052         INIT_WORK(&vma->destroy_work, vma_destroy_work_func);
> 1053         queue_work(system_unbound_wq, &vma->destroy_work);
> 1054 }

oh, indeed... I should have checked that.
for a moment I wondered about scenarios where the callback could
be called more than once, but that is really not possible, and
we would have other errors anyway if it were.

no change needed in the original patch or commit msg.
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>

> 
> Matt
> 
> > > 
> > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > > ---
> > >  drivers/gpu/drm/xe/xe_vm_types.h | 11 ++++++-----
> > >  1 file changed, 6 insertions(+), 5 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
> > > index 2a8691a48c55..30beae541aca 100644
> > > --- a/drivers/gpu/drm/xe/xe_vm_types.h
> > > +++ b/drivers/gpu/drm/xe/xe_vm_types.h
> > > @@ -51,11 +51,12 @@ struct xe_vma {
> > >  		struct list_head destroy;
> > >  	} combined_links;
> > >  
> > > -	/** @destroy_cb: callback to destroy VMA when unbind job is done */
> > > -	struct dma_fence_cb destroy_cb;
> > > -
> > > -	/** @destroy_work: worker to destroy this BO */
> > > -	struct work_struct destroy_work;
> > > +	union {
> > > +		/** @destroy_cb: callback to destroy VMA when unbind job is done */
> > > +		struct dma_fence_cb destroy_cb;
> > > +		/** @destroy_work: worker to destroy this BO */
> > > +		struct work_struct destroy_work;
> > > +	};
> > >  
> > >  	/** @userptr: user pointer state */
> > >  	struct {
> > > -- 
> > > 2.34.1
> > > 


Thread overview: 26+ messages
2023-07-11 21:27 [Intel-xe] [PATCH 0/5] More VM cleanups Matthew Brost
2023-07-11 21:27 ` [Intel-xe] [PATCH 1/5] drm/xe: Avoid doing rebinds Matthew Brost
2023-07-13 20:10   ` Rodrigo Vivi
2023-07-13 20:11   ` Rodrigo Vivi
2023-07-11 21:27 ` [Intel-xe] [PATCH 2/5] drm/xe: Reduce the number list links in xe_vma Matthew Brost
2023-07-13 20:18   ` Rodrigo Vivi
2023-07-14  3:56     ` Matthew Brost
2023-07-14 13:42       ` Rodrigo Vivi
2023-07-15 13:11   ` Thomas Hellström
2023-07-19 22:19     ` Matthew Brost
2023-07-11 21:27 ` [Intel-xe] [PATCH 3/5] drm/xe: Change tile masks from u64 to u8 Matthew Brost
2023-07-13 20:21   ` Rodrigo Vivi
2023-07-14  3:46     ` Matthew Brost
2023-07-11 21:27 ` [Intel-xe] [PATCH 4/5] drm/xe: Combine destroy_cb and destroy_work in xe_vma into union Matthew Brost
2023-07-13 20:23   ` Rodrigo Vivi
2023-07-14  3:53     ` Matthew Brost
2023-07-14 14:28       ` Rodrigo Vivi [this message]
2023-07-11 21:27 ` [Intel-xe] [PATCH 5/5] drm/xe: Only alloc userptr part of xe_vma for userptrs Matthew Brost
2023-07-13 20:26   ` Rodrigo Vivi
2023-07-11 21:29 ` [Intel-xe] ✓ CI.Patch_applied: success for More VM cleanups Patchwork
2023-07-11 21:29 ` [Intel-xe] ✗ CI.checkpatch: warning " Patchwork
2023-07-11 21:30 ` [Intel-xe] ✓ CI.KUnit: success " Patchwork
2023-07-11 21:34 ` [Intel-xe] ✓ CI.Build: " Patchwork
2023-07-11 21:35 ` [Intel-xe] ✓ CI.Hooks: " Patchwork
2023-07-11 21:36 ` [Intel-xe] ✓ CI.checksparse: " Patchwork
2023-07-11 22:10 ` [Intel-xe] ○ CI.BAT: info " Patchwork
