Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: "Upadhyay, Tejas" <tejas.upadhyay@intel.com>
Cc: "intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH] drm/xe: Unlink client during vm close
Date: Fri, 19 Jul 2024 06:52:15 +0000	[thread overview]
Message-ID: <ZpoNHwGU6OXxmpqJ@DUT025-TGLU.fm.intel.com> (raw)
In-Reply-To: <SJ1PR11MB62048D1E37A856E52AE40C4381AD2@SJ1PR11MB6204.namprd11.prod.outlook.com>

On Thu, Jul 18, 2024 at 11:08:42PM -0600, Upadhyay, Tejas wrote:
> 
> 
> > -----Original Message-----
> > From: Brost, Matthew <matthew.brost@intel.com>
> > Sent: Thursday, July 18, 2024 9:28 PM
> > To: Upadhyay, Tejas <tejas.upadhyay@intel.com>
> > Cc: intel-xe@lists.freedesktop.org
> > Subject: Re: [PATCH] drm/xe: Unlink client during vm close
> > 
> > On Thu, Jul 18, 2024 at 06:47:52PM +0530, Tejas Upadhyay wrote:
> > > We have an async call which does not know whether the client has been
> > > unlinked from the vm by the time it is accessed. Unlink the client
> > > early during xe_vm_close() so that the async API does not touch closed
> > > client info.
> > >
> > > Also, debug info related to the job timeout is not useful when it is
> > > "no process" or the client is already unlinked.
> > >
> > 
> > If a kernel exec queue job times out, the 'Timedout job' message will
> > now not be displayed, which is not ideal.
> > 
> > > Fixes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/2273
> > 
> > Where exactly is this access coming from?
> > BUG: kernel NULL pointer dereference, address: 0000000000000058
> 
> In guc_exec_queue_timedout_job(), accessing "q->vm->xef->drm" after the client has closed its fd is causing the crash. We can't take a ref and keep the client alive until the job times out, is what I thought.
> 

Taking a ref to q->vm->xef is exactly what Umesh's series [1] is
doing. I believe this is the correct behavior and, based on your
comment above, I also believe it will fix this issue. Please test with
that series. Hopefully Umesh gets it in soon.

[1] https://patchwork.freedesktop.org/series/135865/
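
To be clear about the idea, here is only a rough sketch of the
approach, not the actual code from [1] -- the helper names
(xe_file_get/xe_file_put) and the kref field are assumptions on my
part:

	/* Make xe_file refcounted so async paths can safely hold it */
	struct xe_file {
		struct kref refcount;
		/* ... existing members (drm client, device, id, ...) ... */
	};

	static inline struct xe_file *xe_file_get(struct xe_file *xef)
	{
		kref_get(&xef->refcount);
		return xef;
	}

	static void xe_file_destroy(struct kref *ref)
	{
		struct xe_file *xef = container_of(ref, struct xe_file,
						   refcount);

		kfree(xef);
	}

	static inline void xe_file_put(struct xe_file *xef)
	{
		kref_put(&xef->refcount, xe_file_destroy);
	}

With something like that, VM creation takes a ref (vm->xef =
xe_file_get(xef)) and the VM destroy path does the matching
xe_file_put(), so guc_exec_queue_timedout_job() can keep dereferencing
q->vm->xef even after the client has closed its fd.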

> > 
> > Also, btw, the correct tag for a gitlab link is 'Closes'; 'Fixes' points
> > at the offending kernel commit, so the fix can be pulled into stable
> > kernels.
> 
> Ok
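
For reference, the trailers would look roughly like this (the SHA and
subject below are placeholders for the actual offending commit, which
I haven't dug up):

	Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/2273
	Fixes: 0123456789ab ("drm/xe: <subject of the offending commit>")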
> 
> > 
> > > Signed-off-by: Tejas Upadhyay <tejas.upadhyay@intel.com>
> > > ---
> > >  drivers/gpu/drm/xe/xe_guc_submit.c | 7 ++++---
> > >  drivers/gpu/drm/xe/xe_vm.c         | 1 +
> > >  2 files changed, 5 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
> > > index 860405527115..1de141cb84c6 100644
> > > --- a/drivers/gpu/drm/xe/xe_guc_submit.c
> > > +++ b/drivers/gpu/drm/xe/xe_guc_submit.c
> > > @@ -1166,10 +1166,11 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
> > >  			process_name = task->comm;
> > >  			pid = task->pid;
> > >  		}
> > > +		xe_gt_notice(guc_to_gt(guc), "Timedout job: seqno=%u, lrc_seqno=%u, guc_id=%d, flags=0x%lx in %s [%d]",
> > > +			     xe_sched_job_seqno(job), xe_sched_job_lrc_seqno(job),
> > > +			     q->guc->id, q->flags, process_name, pid);
> > >  	}
> > > -	xe_gt_notice(guc_to_gt(guc), "Timedout job: seqno=%u, lrc_seqno=%u, guc_id=%d, flags=0x%lx in %s [%d]",
> > > -		     xe_sched_job_seqno(job), xe_sched_job_lrc_seqno(job),
> > > -		     q->guc->id, q->flags, process_name, pid);
> > > +
> > >  	if (task)
> > >  		put_task_struct(task);
> > >
> > > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > > index cf3aea5d8cdc..660b20e0e207 100644
> > > --- a/drivers/gpu/drm/xe/xe_vm.c
> > > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > > @@ -1537,6 +1537,7 @@ static void xe_vm_close(struct xe_vm *vm)
> > >  {
> > >  	down_write(&vm->lock);
> > >  	vm->size = 0;
> > > +	vm->xef = NULL;
> > 
> > This doesn't appear to be thread safe.
> 
> Would you please elaborate!
>

Sure.

vm->xef is set to NULL under vm->lock held in write mode, while
guc_exec_queue_timedout_job() doesn't take the lock at all, so the two
can race. If you wanted this to be thread safe, the latter would at
least need to take vm->lock in read mode.
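
Roughly something like this in the timeout handler (sketch only, to
illustrate the read-lock idea -- whether it is actually safe to take
vm->lock from the timeout path at all is a separate question):

	struct xe_file *xef = NULL;

	if (q->vm) {
		down_read(&q->vm->lock);
		/* Serialized against xe_vm_close() setting xef to NULL */
		xef = q->vm->xef;
		if (xef) {
			/* ... look up process_name / pid from xef ... */
		}
		up_read(&q->vm->lock);
	}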

Anyway, this patch is likely not needed based on my feedback above.

Matt
 
> Thanks,
> Tejas
> > 
> > Matt
> > 
> > >  	up_write(&vm->lock);
> > >  }
> > >
> > > --
> > > 2.25.1
> > >

