Intel-XE Archive on lore.kernel.org
From: "Vivekanandan, Balasubramani" <balasubramani.vivekanandan@intel.com>
To: Lucas De Marchi <lucas.demarchi@intel.com>,
	"Upadhyay, Tejas" <tejas.upadhyay@intel.com>
Cc: "intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>,
	"Nerlige Ramappa, Umesh" <umesh.nerlige.ramappa@intel.com>,
	"Vishwanathapura,
	 Niranjana" <niranjana.vishwanathapura@intel.com>
Subject: Re: [PATCH] drm/xe: Decrement client count immediately on file close
Date: Thu, 19 Sep 2024 13:49:04 +0530	[thread overview]
Message-ID: <ZuveeB0xRivXDe4H@bvivekan-mobl1> (raw)
In-Reply-To: <fmn7yawuj6t2o6iwlwftyrxyjefkx2sicow6ynpb3evsdzluwi@vtamjjybuavm>

On 18.09.2024 15:43, Lucas De Marchi wrote:
> On Wed, Sep 18, 2024 at 11:11:10AM GMT, Upadhyay, Tejas wrote:
> > 
> > 
> > > -----Original Message-----
> > > From: Vivekanandan, Balasubramani
> > > <balasubramani.vivekanandan@intel.com>
> > > Sent: Wednesday, September 18, 2024 3:33 PM
> > > To: Upadhyay, Tejas <tejas.upadhyay@intel.com>; intel-xe@lists.freedesktop.org
> > > Cc: Nerlige Ramappa, Umesh <umesh.nerlige.ramappa@intel.com>;
> > > Vishwanathapura, Niranjana <niranjana.vishwanathapura@intel.com>; De
> > > Marchi, Lucas <lucas.demarchi@intel.com>
> > > Subject: Re: [PATCH] drm/xe: Decrement client count immediately on file close
> > > 
> > > On 18.09.2024 15:09, Upadhyay, Tejas wrote:
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Intel-xe <intel-xe-bounces@lists.freedesktop.org> On Behalf Of
> > > > > Balasubramani Vivekanandan
> > > > > Sent: Wednesday, September 18, 2024 1:42 PM
> > > > > To: intel-xe@lists.freedesktop.org
> > > > > Cc: Nerlige Ramappa, Umesh <umesh.nerlige.ramappa@intel.com>;
> > > > > Vishwanathapura, Niranjana <niranjana.vishwanathapura@intel.com>; De
> > > > > Marchi, Lucas <lucas.demarchi@intel.com>; Vivekanandan,
> > > > > Balasubramani <balasubramani.vivekanandan@intel.com>
> > > > > Subject: [PATCH] drm/xe: Decrement client count immediately on file
> > > > > close
> > > > >
> > > > > Decrement the client count immediately on file close. It does not
> > > > > need to be deferred to the resource cleanup function. Otherwise
> > > > > there is a small time window where the client count is non-zero
> > > > > even after all open file handles have been closed.
> > > > > This affects the ccs_mode (xe_compute) IGT tests, as these tests
> > > > > try to change the ccs_mode immediately after closing all file
> > > > > handles, but the driver rejects the ccs_mode change request because
> > > > > it sees a non-zero client count.
> > > > >
> > > > > Fixes: ce8c161cbad4 ("drm/xe: Add ref counting for xe_file")
> > > > > Signed-off-by: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com>
> > > > > ---
> > > > >  drivers/gpu/drm/xe/xe_device.c | 9 ++++-----
> > > > >  1 file changed, 4 insertions(+), 5 deletions(-)
> > > > >
> > > > > diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
> > > > > index 4d3c794f134c..3bccea6212ff 100644
> > > > > --- a/drivers/gpu/drm/xe/xe_device.c
> > > > > +++ b/drivers/gpu/drm/xe/xe_device.c
> > > > > @@ -107,17 +107,12 @@ static int xe_file_open(struct drm_device *dev, struct drm_file *file)
> > > > >  static void xe_file_destroy(struct kref *ref)
> > > > >  {
> > > > >  	struct xe_file *xef = container_of(ref, struct xe_file, refcount);
> > > > > -	struct xe_device *xe = xef->xe;
> > > > >
> > > > >  	xa_destroy(&xef->exec_queue.xa);
> > > > >  	mutex_destroy(&xef->exec_queue.lock);
> > > > >  	xa_destroy(&xef->vm.xa);
> > > > >  	mutex_destroy(&xef->vm.lock);
> > > > >
> > > > > -	spin_lock(&xe->clients.lock);
> > > > > -	xe->clients.count--;
> > > > > -	spin_unlock(&xe->clients.lock);
> > > > > -
> > > > >  	xe_drm_client_put(xef->client);
> > > > >  	kfree(xef->process_name);
> > > > >  	kfree(xef);
> > > > > @@ -178,6 +173,10 @@ static void xe_file_close(struct drm_device *dev, struct drm_file *file)
> > > > >
> > > > >  	xe_file_put(xef);
> > > > >
> > > > > +	spin_lock(&xe->clients.lock);
> > > > > +	xe->clients.count--;
> > > > > +	spin_unlock(&xe->clients.lock);
> > > >
> > > > The file_close here is a synchronous and serialized call with respect
> > > > to userspace. Any settings done through sysfs after file_close should
> > > > not require this change as far as I know. Would you please explain the
> > > > scenario better?
> > > 
> > > In the current code, the client count is decremented in the function
> > > xe_file_destroy, which is not invoked synchronously from xe_file_close.
> > > It is called only when all references to xe_file are dropped.
> > > References to xe_file are held during creation of VMs and exec_queues,
> > > so somebody might still be holding a reference to xe_file when
> > > xe_file_close is called, and the invocation of xe_file_destroy may be
> > > deferred. As a result, the driver might see a non-zero client count
> > > even after all file handles are actually closed, which is incorrect.
> > > We can defer only the freeing of resources to xe_file_destroy; the
> > > client count can be adjusted immediately in xe_file_close.
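
To make the lifecycle concrete for anyone skimming: the deferred teardown
follows the usual <linux/kref.h> pattern, where the release callback runs
only on the final put. A minimal sketch, not the exact xe code:

	#include <linux/kref.h>
	#include <linux/slab.h>

	struct xe_file {
		struct kref refcount;
		/* ... per-client state: vm xa, exec_queue xa, ... */
	};

	static void xe_file_destroy(struct kref *ref)
	{
		struct xe_file *xef = container_of(ref, struct xe_file, refcount);

		/* Frees the per-client resources. Before this patch it also
		 * decremented xe->clients.count, tying the count to the last
		 * reference rather than to the file close. */
		kfree(xef);
	}

	static inline void xe_file_put(struct xe_file *xef)
	{
		/* The release callback fires only on the final put; a vm or
		 * exec_queue still holding a reference defers it well past
		 * xe_file_close(). */
		kref_put(&xef->refcount, xe_file_destroy);
	}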
> > 
> > If whoever holds the ref might still use ccs, why would we allow
> > changing the ccs mode in that case?
> 
> If the file is closed, there won't be any submission at all. The ref is
> kept alive only so the data remains available for updates on the sw side.
> 
> but I'm wondering about the value of this client tracking... this is so
> uncommon that maybe we should rely just on the list of open files being
> empty?
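
If we go that way, the check could be a simple look at the DRM core's
per-device file list. A rough sketch (assuming it is acceptable to use
dev->filelist/filelist_mutex from the DRM core for this):

	#include <drm/drm_device.h>
	#include <drm/drm_file.h>

	static bool xe_no_open_files(struct xe_device *xe)
	{
		struct drm_device *dev = &xe->drm;
		bool empty;

		/* dev->filelist holds a struct drm_file per open file */
		mutex_lock(&dev->filelist_mutex);
		empty = list_empty(&dev->filelist);
		mutex_unlock(&dev->filelist_mutex);

		return empty;
	}

The ccs_mode store handler could then reject the change while this returns
false, without any per-driver client counting.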

Separately, relying on this client count also blocks changing the ccs mode
whenever display activity increments it. During module load, fbdev
initialization increments the client count, yet a ccs_mode change should be
independent of display.

I am also wondering whether it makes sense to confirm that all ccs engines
are idle before attempting the ccs_mode change.
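
Roughly something like the sketch below, just to make the idea concrete.
This assumes xe's for_each_gt()/for_each_hw_engine() iterators, and
xe_hw_engine_is_idle() is a made-up helper; a real check would have to
look at pending jobs:

	static bool xe_ccs_engines_idle(struct xe_device *xe)
	{
		struct xe_gt *gt;
		struct xe_hw_engine *hwe;
		enum xe_hw_engine_id id;
		u8 gt_id;

		for_each_gt(gt, xe, gt_id) {
			for_each_hw_engine(hwe, gt, id) {
				if (hwe->class != XE_ENGINE_CLASS_COMPUTE)
					continue;
				/* Hypothetical helper: report false while the
				 * engine still has work in flight. */
				if (!xe_hw_engine_is_idle(hwe))
					return false;
			}
		}

		return true;
	}
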
@Niranjana your thoughts?

Regards,
Bala

> 
> Lucas De Marchi
> 
> > 
> > Tejas
> > > 
> > > Regards,
> > > Bala
> > > >
> > > > Tejas
> > > > > +
> > > > >  	xe_pm_runtime_put(xe);
> > > > >  }
> > > > >
> > > > > --
> > > > > 2.34.1
> > > >

Thread overview: 16+ messages
2024-09-18  8:11 [PATCH] drm/xe: Decrement client count immediately on file close Balasubramani Vivekanandan
2024-09-18  9:39 ` Upadhyay, Tejas
2024-09-18 10:03   ` Vivekanandan, Balasubramani
2024-09-18 11:11     ` Upadhyay, Tejas
2024-09-18 20:43       ` Lucas De Marchi
2024-09-19  8:19         ` Vivekanandan, Balasubramani [this message]
2024-09-19 11:55           ` Lucas De Marchi
2024-10-08  7:44             ` Vivekanandan, Balasubramani
2024-09-18 11:54 ` ✓ CI.Patch_applied: success for " Patchwork
2024-09-18 11:54 ` ✓ CI.checkpatch: " Patchwork
2024-09-18 11:55 ` ✓ CI.KUnit: " Patchwork
2024-09-18 12:09 ` ✓ CI.Build: " Patchwork
2024-09-18 12:14 ` ✓ CI.Hooks: " Patchwork
2024-09-18 12:33 ` ✓ CI.checksparse: " Patchwork
2024-09-18 13:16 ` ✓ CI.BAT: " Patchwork
2024-09-18 17:47 ` ✗ CI.FULL: failure " Patchwork
