Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Cc: <intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH v2 1/2] drm/xe: Ban entire multi-queue group on any job timeout
Date: Fri, 16 Jan 2026 13:51:58 -0800	[thread overview]
Message-ID: <aWqy/o201NPK7utm@lstrano-desk.jf.intel.com> (raw)
In-Reply-To: <aWnmS2ZmTqNq5Hhd@nvishwa1-desk>

On Thu, Jan 15, 2026 at 11:18:35PM -0800, Niranjana Vishwanathapura wrote:
> On Mon, Jan 12, 2026 at 08:29:12PM -0800, Matthew Brost wrote:
> > In multi-queue mode, we only have control over the entire group, so we
> > cannot ban individual queues or signal fences until the whole group is
> > removed from hardware. Implement banning of the entire group if any job
> > within it times out.
> > 
> > v2:
> > - Fix CT lock inversion (Niranjana)
> 
> It turned out it was not the CT lock, but the work_completion lock.
> 
> > - Initialize new queues in group to stopped
> > 
> > Cc: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > ---
> > drivers/gpu/drm/xe/xe_exec_queue_types.h |   2 +
> > drivers/gpu/drm/xe/xe_guc_submit.c       | 103 +++++++++++++++++------
> > 2 files changed, 81 insertions(+), 24 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> > index 5fc516b0bb77..562ea75891ba 100644
> > --- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
> > +++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> > @@ -66,6 +66,8 @@ struct xe_exec_queue_group {
> > 	bool sync_pending;
> > 	/** @banned: Group banned */
> > 	bool banned;
> > +	/** @stopped: Group is stopped, protected by list_lock */
> > +	bool stopped;
> > };
> > 
> > /**
> > diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
> > index be8fa76baf1d..a11f3e572d25 100644
> > --- a/drivers/gpu/drm/xe/xe_guc_submit.c
> > +++ b/drivers/gpu/drm/xe/xe_guc_submit.c
> > @@ -558,6 +558,57 @@ static void xe_guc_exec_queue_trigger_cleanup(struct xe_exec_queue *q)
> > 	xe_sched_tdr_queue_imm(&q->guc->sched);
> > }
> > 
> > +static void xe_guc_exec_queue_group_stop(struct xe_exec_queue *q)
> > +{
> > +	struct xe_exec_queue *primary = xe_exec_queue_multi_queue_primary(q);
> > +	struct xe_exec_queue_group *group = q->multi_queue.group;
> > +	struct xe_exec_queue *eq, *next;
> > +	LIST_HEAD(tmp);
> > +
> > +	xe_gt_assert(guc_to_gt(exec_queue_to_guc(q)),
> > +		     xe_exec_queue_is_multi_queue(q));
> > +
> > +	mutex_lock(&group->list_lock);
> > +	group->stopped = true;
> > +	list_for_each_entry_safe(eq, next, &group->list, multi_queue.link)
> > +		if (xe_exec_queue_get_unless_zero(eq))
> > +			list_move_tail(&eq->multi_queue.link, &tmp);
> > +	mutex_unlock(&group->list_lock);
> > +
> > +	/* We cannot stop under list lock without getting inversions */
> > +	xe_sched_submission_stop(&primary->guc->sched);
> > +	list_for_each_entry(eq, &tmp, multi_queue.link)
> > +		xe_sched_submission_stop(&eq->guc->sched);
> > +
> > +	mutex_lock(&group->list_lock);
> > +	list_for_each_entry_safe(eq, next, &tmp, multi_queue.link) {
> > +		/* Corner where we got banned while stopping */
> > +		if (READ_ONCE(group->banned))
> > +			xe_guc_exec_queue_trigger_cleanup(eq);
> > +		list_move_tail(&eq->multi_queue.link, &group->list);
> > +		xe_exec_queue_put(eq);
> > +	}
> > +	mutex_unlock(&group->list_lock);
> > +}
> 
> Maybe add some documentation for this function explaining why we are
> doing this list copying (i.e., the locking requirement)?
> 

Sure.

> > +
> > +static void xe_guc_exec_queue_group_start(struct xe_exec_queue *q)
> > +{
> > +	struct xe_exec_queue *primary = xe_exec_queue_multi_queue_primary(q);
> > +	struct xe_exec_queue_group *group = q->multi_queue.group;
> > +	struct xe_exec_queue *eq;
> > +
> > +	xe_gt_assert(guc_to_gt(exec_queue_to_guc(q)),
> > +		     xe_exec_queue_is_multi_queue(q));
> > +
> > +	xe_sched_submission_start(&primary->guc->sched);
> > +
> > +	mutex_lock(&group->list_lock);
> > +	group->stopped = false;
> > +	list_for_each_entry(eq, &group->list, multi_queue.link)
> > +		xe_sched_submission_start(&eq->guc->sched);
> > +	mutex_unlock(&group->list_lock);
> > +}
> > +
> > static void xe_guc_exec_queue_group_trigger_cleanup(struct xe_exec_queue *q)
> > {
> > 	struct xe_exec_queue *primary = xe_exec_queue_multi_queue_primary(q);
> > @@ -1411,7 +1462,7 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
> > {
> > 	struct xe_sched_job *job = to_xe_sched_job(drm_job);
> > 	struct drm_sched_job *tmp_job;
> > -	struct xe_exec_queue *q = job->q;
> > +	struct xe_exec_queue *q = job->q, *primary;
> > 	struct xe_gpu_scheduler *sched = &q->guc->sched;
> > 	struct xe_guc *guc = exec_queue_to_guc(q);
> > 	const char *process_name = "no process";
> > @@ -1422,6 +1473,11 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
> > 
> > 	xe_gt_assert(guc_to_gt(guc), !exec_queue_destroyed(q));
> > 
> > +	if (xe_exec_queue_is_multi_queue_secondary(q))
> > +		primary = xe_exec_queue_multi_queue_primary(q);
> > +	else
> > +		primary = q;
> > +
> 
> The xe_exec_queue_multi_queue_primary(q) already returns 'q' if it
> is not a multi-queue. So, we can remove the if/else here.
> 

Ah yes, it does. Will fix.
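i.e., the hunk should collapse to a single unconditional call (sketch):

```c
	primary = xe_exec_queue_multi_queue_primary(q);
```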

> > 	/*
> > 	 * TDR has fired before free job worker. Common if exec queue
> > 	 * immediately closed after last fence signaled. Add back to pending
> > @@ -1433,7 +1489,10 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
> > 		return DRM_GPU_SCHED_STAT_NO_HANG;
> > 
> > 	/* Kill the run_job entry point */
> > -	xe_sched_submission_stop(sched);
> > +	if (xe_exec_queue_is_multi_queue(q))
> > +		xe_guc_exec_queue_group_stop(q);
> > +	else
> > +		xe_sched_submission_stop(sched);
> > 
> > 	/* Must check all state after stopping scheduler */
> > 	skip_timeout_check = exec_queue_reset(q) ||
> > @@ -1448,14 +1507,6 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
> > 	if (xe_exec_queue_is_lr(q))
> > 		xe_gt_assert(guc_to_gt(guc), skip_timeout_check);
> > 
> > -	/*
> > -	 * FIXME: In multi-queue scenario, the TDR must ensure that the whole
> > -	 * multi-queue group is off the HW before signaling the fences to avoid
> > -	 * possible memory corruptions. This means disabling scheduling on the
> > -	 * primary queue before or during the secondary queue's TDR. Need to
> > -	 * implement this in least obtrusive way.
> > -	 */
> > -
> > 	/*
> > 	 * If devcoredump not captured and GuC capture for the job is not ready
> > 	 * do manual capture first and decide later if we need to use it
> > @@ -1482,10 +1533,11 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
> > 	set_exec_queue_banned(q);
> > 
> > 	/* Kick job / queue off hardware */
> > -	if (!wedged && (exec_queue_enabled(q) || exec_queue_pending_disable(q))) {
> > +	if (!wedged && (exec_queue_enabled(primary) ||
> > +			exec_queue_pending_disable(primary))) {
> > 		int ret;
> > 
> > -		if (exec_queue_reset(q))
> > +		if (exec_queue_reset(primary))
> > 			err = -EIO;
> > 
> > 		if (xe_uc_fw_is_running(&guc->fw)) {
> > @@ -1494,8 +1546,8 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
> > 			 * modifying state
> > 			 */
> > 			ret = wait_event_timeout(guc->ct.wq,
> > -						 (!exec_queue_pending_enable(q) &&
> > -						  !exec_queue_pending_disable(q)) ||
> > +						 (!exec_queue_pending_enable(primary) &&
> > +						  !exec_queue_pending_disable(primary)) ||
> > 						 xe_guc_read_stopped(guc) ||
> > 						 vf_recovery(guc), HZ * 5);
> > 			if (vf_recovery(guc))
> > @@ -1503,7 +1555,7 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
> > 			if (!ret || xe_guc_read_stopped(guc))
> > 				goto trigger_reset;
> > 
> > -			disable_scheduling(q, skip_timeout_check);
> > +			disable_scheduling(primary, skip_timeout_check);
> > 		}
> > 
> > 		/*
> > @@ -1517,7 +1569,7 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
> > 		smp_rmb();
> > 		ret = wait_event_timeout(guc->ct.wq,
> > 					 !xe_uc_fw_is_running(&guc->fw) ||
> > -					 !exec_queue_pending_disable(q) ||
> > +					 !exec_queue_pending_disable(primary) ||
> > 					 xe_guc_read_stopped(guc) ||
> > 					 vf_recovery(guc), HZ * 5);
> > 		if (vf_recovery(guc))
> > @@ -1527,11 +1579,11 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
> > 			if (!ret)
> > 				xe_gt_warn(guc_to_gt(guc),
> > 					   "Schedule disable failed to respond, guc_id=%d",
> > -					   q->guc->id);
> > -			xe_devcoredump(q, job,
> > +					   primary->guc->id);
> > +			xe_devcoredump(primary, job,
> > 				       "Schedule disable failed to respond, guc_id=%d, ret=%d, guc_read=%d",
> > -				       q->guc->id, ret, xe_guc_read_stopped(guc));
> > -			xe_gt_reset_async(q->gt);
> > +				       primary->guc->id, ret, xe_guc_read_stopped(guc));
> > +			xe_gt_reset_async(primary->gt);
> > 			xe_sched_tdr_queue_imm(sched);
> > 			goto rearm;
> > 		}
> > @@ -1577,12 +1629,13 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
> > 	drm_sched_for_each_pending_job(tmp_job, &sched->base, NULL)
> > 		xe_sched_job_set_error(to_xe_sched_job(tmp_job), -ECANCELED);
> > 
> > -	xe_sched_submission_start(sched);
> > -
> > -	if (xe_exec_queue_is_multi_queue(q))
> > +	if (xe_exec_queue_is_multi_queue(q)) {
> > +		xe_guc_exec_queue_group_start(q);
> > 		xe_guc_exec_queue_group_trigger_cleanup(q);
> > -	else
> > +	} else {
> > +		xe_sched_submission_start(sched);
> > 		xe_guc_exec_queue_trigger_cleanup(q);
> > +	}
> 
> There is another place below where we call xe_sched_submission_start()
> in the TDR. It also needs to be changed.
> 

Yep, will do.
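Presumably the same guard as in the hunk above applies at that call site
as well; something like (a sketch, exact location assumed):

```c
	/* Other TDR restart path: bring the whole group back, not just q */
	if (xe_exec_queue_is_multi_queue(q))
		xe_guc_exec_queue_group_start(q);
	else
		xe_sched_submission_start(sched);
```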

Matt

> Niranjana
> 
> > 
> > 	/*
> > 	 * We want the job added back to the pending list so it gets freed; this
> > @@ -1962,6 +2015,8 @@ static int guc_exec_queue_init(struct xe_exec_queue *q)
> > 
> > 		INIT_LIST_HEAD(&q->multi_queue.link);
> > 		mutex_lock(&group->list_lock);
> > +		if (group->stopped)
> > +			WRITE_ONCE(q->guc->sched.base.pause_submit, true);
> > 		list_add_tail(&q->multi_queue.link, &group->list);
> > 		mutex_unlock(&group->list_lock);
> > 	}
> > -- 
> > 2.34.1
> > 

Thread overview: 8+ messages
2026-01-13  4:29 [PATCH v2 0/2] Enable multi_queue Matthew Brost
2026-01-13  4:29 ` [PATCH v2 1/2] drm/xe: Ban entire multi-queue group on any job timeout Matthew Brost
2026-01-16  7:18   ` Niranjana Vishwanathapura
2026-01-16 21:51     ` Matthew Brost [this message]
2026-01-13  4:29 ` [PATCH v2 2/2] drm/xe/multi_queue: Enable multi_queue on xe3p_xpc Matthew Brost
2026-01-13  5:35 ` ✓ CI.KUnit: success for Enable multi_queue (rev2) Patchwork
2026-01-13  6:12 ` ✓ Xe.CI.BAT: " Patchwork
2026-01-13 11:41 ` ✓ Xe.CI.Full: " Patchwork
