Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
Cc: <intel-xe@lists.freedesktop.org>, <lucas.demarchi@intel.com>
Subject: Re: [PATCH 1/5] drm/xe: Move part of xe_file cleanup to a helper
Date: Mon, 8 Jul 2024 22:18:14 +0000	[thread overview]
Message-ID: <Zoxlpnw+xWtRGOk5@DUT025-TGLU.fm.intel.com> (raw)
In-Reply-To: <ZoxkUZMQUrzSdVGy@orsosgc001>

On Mon, Jul 08, 2024 at 03:12:33PM -0700, Umesh Nerlige Ramappa wrote:
> On Mon, Jul 08, 2024 at 09:21:01PM +0000, Matthew Brost wrote:
> > On Mon, Jul 08, 2024 at 01:20:59PM -0700, Umesh Nerlige Ramappa wrote:
> > > In order to make xe_file ref counted, move destruction of xe_file
> > > members to a helper.
> > > 
> > 
> > I don't see where xe_file_close is called in this patch, but it appears
> > to be called in .postclose later in the series. I'd move that change into
> > this patch.
> 
> I don't follow this point. The series is not calling
> xe_file_close/.postclose, so not sure what change I need to move.
> 
> Thanks,
> Umesh

Yea my bad, disregard this comment. I thought xe_file_close was a new
function but misread the diff.

Comment below about moving xe_vm_close_and_put to xe_file_close is valid
though.

Matt
 
> 
> > 
> > Also I think the VM should ref count the xef too, to be uniform, and it
> > appears this series is not doing that either. With a VM ref counting
> > xef, 'xe_vm_close_and_put' is going to need to be in xe_file_close
> > too.
> > 
> > Matt
> > 
> > > Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
> > > ---
> > >  drivers/gpu/drm/xe/xe_device.c | 36 +++++++++++++++++++++-------------
> > >  1 file changed, 22 insertions(+), 14 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
> > > index cfda7cb5df2c..babb697652d5 100644
> > > --- a/drivers/gpu/drm/xe/xe_device.c
> > > +++ b/drivers/gpu/drm/xe/xe_device.c
> > > @@ -90,24 +90,12 @@ static int xe_file_open(struct drm_device *dev, struct drm_file *file)
> > >  	return 0;
> > >  }
> > > 
> > > -static void xe_file_close(struct drm_device *dev, struct drm_file *file)
> > > +static void xe_file_destroy(struct xe_file *xef)
> > >  {
> > > -	struct xe_device *xe = to_xe_device(dev);
> > > -	struct xe_file *xef = file->driver_priv;
> > > +	struct xe_device *xe = xef->xe;
> > >  	struct xe_vm *vm;
> > > -	struct xe_exec_queue *q;
> > >  	unsigned long idx;
> > > 
> > > -	/*
> > > -	 * No need for exec_queue.lock here as there is no contention for it
> > > -	 * when FD is closing as IOCTLs presumably can't be modifying the
> > > -	 * xarray. Taking exec_queue.lock here causes undue dependency on
> > > -	 * vm->lock taken during xe_exec_queue_kill().
> > > -	 */
> > > -	xa_for_each(&xef->exec_queue.xa, idx, q) {
> > > -		xe_exec_queue_kill(q);
> > > -		xe_exec_queue_put(q);
> > > -	}
> > >  	xa_destroy(&xef->exec_queue.xa);
> > >  	mutex_destroy(&xef->exec_queue.lock);
> > >  	mutex_lock(&xef->vm.lock);
> > > @@ -125,6 +113,26 @@ static void xe_file_close(struct drm_device *dev, struct drm_file *file)
> > >  	kfree(xef);
> > >  }
> > > 
> > > +static void xe_file_close(struct drm_device *dev, struct drm_file *file)
> > > +{
> > > +	struct xe_file *xef = file->driver_priv;
> > > +	struct xe_exec_queue *q;
> > > +	unsigned long idx;
> > > +
> > > +	/*
> > > +	 * No need for exec_queue.lock here as there is no contention for it
> > > +	 * when FD is closing as IOCTLs presumably can't be modifying the
> > > +	 * xarray. Taking exec_queue.lock here causes undue dependency on
> > > +	 * vm->lock taken during xe_exec_queue_kill().
> > > +	 */
> > > +	xa_for_each(&xef->exec_queue.xa, idx, q) {
> > > +		xe_exec_queue_kill(q);
> > > +		xe_exec_queue_put(q);
> > > +	}
> > > +
> > > +	xe_file_destroy(xef);
> > > +}
> > > +
> > >  static const struct drm_ioctl_desc xe_ioctls[] = {
> > >  	DRM_IOCTL_DEF_DRV(XE_DEVICE_QUERY, xe_query_ioctl, DRM_RENDER_ALLOW),
> > >  	DRM_IOCTL_DEF_DRV(XE_GEM_CREATE, xe_gem_create_ioctl, DRM_RENDER_ALLOW),
> > > --
> > > 2.38.1
> > > 

  reply	other threads:[~2024-07-08 22:19 UTC|newest]

Thread overview: 32+ messages
2024-07-08 20:20 [PATCH 0/5] Have xe_vm and xe_exec_queue take references to xef Umesh Nerlige Ramappa
2024-07-08 20:20 ` [PATCH 1/5] drm/xe: Move part of xe_file cleanup to a helper Umesh Nerlige Ramappa
2024-07-08 21:21   ` Matthew Brost
2024-07-08 22:12     ` Umesh Nerlige Ramappa
2024-07-08 22:18       ` Matthew Brost [this message]
2024-07-08 20:21 ` [PATCH 2/5] drm/xe: Add ref counting for xe_file Umesh Nerlige Ramappa
2024-07-08 21:23   ` Matthew Brost
2024-07-08 22:15     ` Umesh Nerlige Ramappa
2024-07-08 22:19       ` Matthew Brost
2024-07-08 23:00     ` Umesh Nerlige Ramappa
2024-07-08 22:52   ` Lucas De Marchi
2024-07-08 23:26     ` Umesh Nerlige Ramappa
2024-07-09  0:28       ` Matthew Brost
2024-07-09 15:11         ` Lucas De Marchi
2024-07-09 16:37           ` Umesh Nerlige Ramappa
2024-07-09 20:37             ` Matthew Brost
2024-07-11 13:46               ` Lucas De Marchi
2024-07-08 20:21 ` [PATCH 3/5] drm/xe: Take a reference to xe file when user creates exec_queue Umesh Nerlige Ramappa
2024-07-08 21:25   ` Matthew Brost
2024-07-08 20:21 ` [PATCH 4/5] drm/xe: Take a ref to xe file when user creates a VM Umesh Nerlige Ramappa
2024-07-08 21:32   ` Matthew Brost
2024-07-08 20:21 ` [PATCH 5/5] Revert "drm/xe: Do not access xe file when updating exec queue run_ticks" Umesh Nerlige Ramappa
2024-07-08 21:33   ` Matthew Brost
2024-07-08 20:26 ` ✓ CI.Patch_applied: success for Have xe_vm and xe_exec_queue take references to xef Patchwork
2024-07-08 20:27 ` ✓ CI.checkpatch: " Patchwork
2024-07-08 20:28 ` ✓ CI.KUnit: " Patchwork
2024-07-08 20:40 ` ✓ CI.Build: " Patchwork
2024-07-08 20:42 ` ✓ CI.Hooks: " Patchwork
2024-07-08 20:43 ` ✓ CI.checksparse: " Patchwork
2024-07-08 21:16 ` ✗ CI.BAT: failure " Patchwork
2024-07-08 21:37 ` [PATCH 0/5] " Lucas De Marchi
2024-07-09  1:07 ` ✗ CI.FULL: failure for " Patchwork
