public inbox for intel-xe@lists.freedesktop.org
From: Matthew Brost <matthew.brost@intel.com>
To: Raag Jadav <raag.jadav@intel.com>
Cc: <intel-xe@lists.freedesktop.org>, <rodrigo.vivi@intel.com>,
	<thomas.hellstrom@linux.intel.com>, <riana.tauro@intel.com>,
	<michal.wajdeczko@intel.com>, <matthew.d.roper@intel.com>,
	<michal.winiarski@intel.com>, <matthew.auld@intel.com>,
	<maarten@lankhorst.se>, <jani.nikula@intel.com>,
	<lukasz.laguna@intel.com>, <zhanjun.dong@intel.com>
Subject: Re: [PATCH v4 2/9] drm/xe/guc_submit: Introduce xe_guc_submit_reinit()
Date: Fri, 27 Mar 2026 15:12:57 -0700	[thread overview]
Message-ID: <accA6VhvpBTzh2rD@gsse-cloud1.jf.intel.com> (raw)
In-Reply-To: <accAogC8IHidoyng@gsse-cloud1.jf.intel.com>

On Fri, Mar 27, 2026 at 03:11:46PM -0700, Matthew Brost wrote:
> On Sat, Mar 28, 2026 at 02:06:13AM +0530, Raag Jadav wrote:
> > In preparation for use cases which require re-initializing GuC submission
> > after PCIe FLR, introduce the xe_guc_submit_reinit() helper. This will restore
> > exec queues which might have been killed before PCIe FLR.
> > 
> > Signed-off-by: Raag Jadav <raag.jadav@intel.com>
> > ---
> > v4: Teardown exec queues instead of mangling scheduler pending list (Matthew Brost)
> > ---
> >  drivers/gpu/drm/xe/xe_gpu_scheduler.h |  5 ++++
> >  drivers/gpu/drm/xe/xe_guc_submit.c    | 35 +++++++++++++++++++++++++++
> >  drivers/gpu/drm/xe/xe_guc_submit.h    |  1 +
> >  3 files changed, 41 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.h b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
> > index 664c2db56af3..1e079ca3891c 100644
> > --- a/drivers/gpu/drm/xe/xe_gpu_scheduler.h
> > +++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
> > @@ -51,6 +51,11 @@ static inline void xe_sched_tdr_queue_imm(struct xe_gpu_scheduler *sched)
> >  	drm_sched_tdr_queue_imm(&sched->base);
> >  }
> >  
> > +static inline void xe_sched_update_timeout(struct xe_gpu_scheduler *sched, long timeout)
> > +{
> > +	sched->base.timeout = timeout;
> > +}
> > +
> >  static inline void xe_sched_resubmit_jobs(struct xe_gpu_scheduler *sched)
> >  {
> >  	struct drm_sched_job *s_job;
> > diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
> > index a145234f662b..c92bf42b8d3d 100644
> > --- a/drivers/gpu/drm/xe/xe_guc_submit.c
> > +++ b/drivers/gpu/drm/xe/xe_guc_submit.c
> > @@ -193,6 +193,11 @@ static void set_exec_queue_killed(struct xe_exec_queue *q)
> >  	atomic_or(EXEC_QUEUE_STATE_KILLED, &q->guc->state);
> >  }
> >  
> > +static void clear_exec_queue_killed(struct xe_exec_queue *q)
> > +{
> > +	atomic_and(~EXEC_QUEUE_STATE_KILLED, &q->guc->state);
> > +}
> 
> I think you'd want to do an atomic_set(&q->guc->state, 0) as we need
> to clear everything.
> 
> > +
> >  static bool exec_queue_wedged(struct xe_exec_queue *q)
> >  {
> >  	return atomic_read(&q->guc->state) & EXEC_QUEUE_STATE_WEDGED;
> > @@ -347,6 +352,36 @@ int xe_guc_submit_init(struct xe_guc *guc, unsigned int num_ids)
> >  	return devm_add_action_or_reset(xe->drm.dev, guc_submit_fini, guc);
> >  }
> >  
> > +/**
> > + * xe_guc_submit_reinit() - Re-initialize GuC submission.
> > + * @guc: the &xe_guc to re-initialize
> > + */
> > +void xe_guc_submit_reinit(struct xe_guc *guc)
> > +{
> > +	struct xe_exec_queue *q;
> > +	unsigned long index;
> > +
> > +	xe_gt_assert(guc_to_gt(guc), xe_guc_read_stopped(guc) == 1);
> > +
> > +	mutex_lock(&guc->submission_state.lock);
> > +
> > +	xa_for_each(&guc->submission_state.exec_queue_lookup, index, q) {
> 
> This will reset user queues too, which I don't think is desired as
> those should be torn down and cannot be reused.
> 
> We really only need to reset kernel queues.
> 
> So how about a xe_exec_queue_ops reinit...
> 
> Something like:
> 
> static void guc_exec_queue_active(struct xe_exec_queue *q)
> {
> 	xe_sched_update_timeout(sched, timeout);
> 	atomic_set(&q->guc->state, 0);
> }
> 

Oops, typo... s/guc_exec_queue_active/guc_exec_queue_reinit

Matt

> Matt
> 
> > +		struct xe_gpu_scheduler *sched = &q->guc->sched;
> > +		long timeout;
> > +
> > +		/* Prevent redundant attempts to reinit parallel queues */
> > +		if (q->guc->id != index)
> > +			continue;
> > +
> > +		timeout = (q->vm && xe_vm_in_lr_mode(q->vm)) ? MAX_SCHEDULE_TIMEOUT :
> > +			   msecs_to_jiffies(q->sched_props.job_timeout_ms);
> > +		xe_sched_update_timeout(sched, timeout);
> > +		clear_exec_queue_killed(q);
> > +	}
> > +
> > +	mutex_unlock(&guc->submission_state.lock);
> > +}
> > +
> >  /*
> >   * Given that we want to guarantee enough RCS throughput to avoid missing
> >   * frames, we set the yield policy to 20% of each 80ms interval.
> > diff --git a/drivers/gpu/drm/xe/xe_guc_submit.h b/drivers/gpu/drm/xe/xe_guc_submit.h
> > index b3839a90c142..a40aef196cbf 100644
> > --- a/drivers/gpu/drm/xe/xe_guc_submit.h
> > +++ b/drivers/gpu/drm/xe/xe_guc_submit.h
> > @@ -13,6 +13,7 @@ struct xe_exec_queue;
> >  struct xe_guc;
> >  
> >  int xe_guc_submit_init(struct xe_guc *guc, unsigned int num_ids);
> > +void xe_guc_submit_reinit(struct xe_guc *guc);
> >  int xe_guc_submit_enable(struct xe_guc *guc);
> >  void xe_guc_submit_disable(struct xe_guc *guc);
> >  
> > -- 
> > 2.43.0
> > 

  reply	other threads:[~2026-03-27 22:13 UTC|newest]

Thread overview: 18+ messages
2026-03-27 20:36 [PATCH v4 0/9] Introduce Xe PCIe FLR Raag Jadav
2026-03-27 20:36 ` [PATCH v4 1/9] drm/xe/uc_fw: Allow re-initializing firmware Raag Jadav
2026-03-27 20:36 ` [PATCH v4 2/9] drm/xe/guc_submit: Introduce xe_guc_submit_reinit() Raag Jadav
2026-03-27 22:11   ` Matthew Brost
2026-03-27 22:12     ` Matthew Brost [this message]
2026-03-27 20:36 ` [PATCH v4 3/9] drm/xe/gt: Introduce FLR helpers Raag Jadav
2026-03-27 20:36 ` [PATCH v4 4/9] drm/xe/irq: Introduce xe_irq_disable() Raag Jadav
2026-03-27 20:36 ` [PATCH v4 5/9] drm/xe: Introduce xe_device_assert_lmem_ready() Raag Jadav
2026-03-27 20:36 ` [PATCH v4 6/9] drm/xe/bo_evict: Introduce xe_bo_restore_map() Raag Jadav
2026-03-27 20:36 ` [PATCH v4 7/9] drm/xe/exec_queue: Introduce xe_exec_queue_reinit() Raag Jadav
2026-03-27 20:36 ` [PATCH v4 8/9] drm/xe/migrate: Introduce xe_migrate_reinit() Raag Jadav
2026-03-27 20:36 ` [PATCH v4 9/9] drm/xe/pci: Introduce PCIe FLR Raag Jadav
2026-03-27 21:00   ` Matthew Brost
2026-03-28  4:46     ` Raag Jadav
2026-03-27 22:06 ` ✗ CI.checkpatch: warning for Introduce Xe PCIe FLR (rev4) Patchwork
2026-03-27 22:07 ` ✓ CI.KUnit: success " Patchwork
2026-03-27 22:56 ` ✓ Xe.CI.BAT: " Patchwork
2026-03-28 17:07 ` ✗ Xe.CI.FULL: failure " Patchwork