Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: Shuicheng Lin <shuicheng.lin@intel.com>
Cc: <intel-xe@lists.freedesktop.org>, <stuart.summers@intel.com>
Subject: Re: [PATCH] drm/xe: Limit number of jobs per exec queue
Date: Mon, 27 Oct 2025 12:53:27 -0700	[thread overview]
Message-ID: <aP/Nt8TKVfwhfoVn@lstrano-desk.jf.intel.com> (raw)
In-Reply-To: <aP+evyn9qN1iVJDK@lstrano-desk.jf.intel.com>

On Mon, Oct 27, 2025 at 09:33:03AM -0700, Matthew Brost wrote:
> On Wed, Oct 22, 2025 at 06:10:37PM +0000, Shuicheng Lin wrote:
> > Add a limit to the number of jobs that can be queued in a single
> > exec queue to avoid potential resource exhaustion.
> > 
> > A new field `job_cnt` is introduced in `struct xe_exec_queue` to
> > track the number of active DRM jobs, along with a maximum limit
> > `XE_MAX_JOB_COUNT_PER_EXEC_QUEUE` set to 0x1000.
> > 
> > If the job count exceeds this threshold, `xe_exec_ioctl()` now
> > returns `-EAGAIN` to signal that the caller should retry later.
> > 
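For context, the -EAGAIN return is a transient condition, so the submitter is
expected to retry. A minimal userspace sketch of that retry loop, assuming the
DRM_IOCTL_XE_EXEC ioctl and struct drm_xe_exec from the xe_drm.h uAPI header,
with queue/buffer setup and other error handling elided:

  #include <errno.h>
  #include <sys/ioctl.h>
  #include <drm/xe_drm.h>

  /* Submit an exec and retry while the exec queue is full (-EAGAIN). */
  static int submit_with_retry(int fd, struct drm_xe_exec *exec)
  {
          int ret;

          do {
                  ret = ioctl(fd, DRM_IOCTL_XE_EXEC, exec);
          } while (ret == -1 && (errno == EAGAIN || errno == EINTR));

          return ret == -1 ? -errno : 0;
  }
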
> 
> This patch looks correct. One extra suggestion - can we get an assert in
> xe_exec_queue_destroy that q->job_cnt is zero?
> 
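For reference, a rough sketch of that assert (assuming the driver's existing
xe_assert() helper and the queue's gt pointer; exact placement within
xe_exec_queue_destroy() may differ):

  void xe_exec_queue_destroy(struct kref *ref)
  {
          struct xe_exec_queue *q =
                  container_of(ref, struct xe_exec_queue, refcount);

          /* Sketch: all jobs must have been retired before the final put. */
          xe_assert(gt_to_xe(q->gt), !q->job_cnt);

          /* ... existing teardown ... */
  }
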

This was sent in reply to the wrong version of the patch.

Matt

> > Suggested-by: Matthew Brost <matthew.brost@intel.com>
> > Signed-off-by: Shuicheng Lin <shuicheng.lin@intel.com>
> > ---
> >  drivers/gpu/drm/xe/xe_exec.c             | 5 +++++
> >  drivers/gpu/drm/xe/xe_exec_queue_types.h | 5 +++++
> >  drivers/gpu/drm/xe/xe_sched_job.c        | 2 ++
> >  3 files changed, 12 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/xe/xe_exec.c b/drivers/gpu/drm/xe/xe_exec.c
> > index 0dc27476832b..722a5ac0200a 100644
> > --- a/drivers/gpu/drm/xe/xe_exec.c
> > +++ b/drivers/gpu/drm/xe/xe_exec.c
> > @@ -154,6 +154,11 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> >  		goto err_exec_queue;
> >  	}
> >  
> > +	if (q->job_cnt >= XE_MAX_JOB_COUNT_PER_EXEC_QUEUE) {
> > +		err = -EAGAIN;
> > +		goto err_exec_queue;
> > +	}
> > +
> >  	if (args->num_syncs) {
> >  		syncs = kcalloc(args->num_syncs, sizeof(*syncs), GFP_KERNEL);
> >  		if (!syncs) {
> > diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> > index 282505fa1377..5f3219acec3c 100644
> > --- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
> > +++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> > @@ -162,6 +162,11 @@ struct xe_exec_queue {
> >  	const struct xe_ring_ops *ring_ops;
> >  	/** @entity: DRM sched entity for this exec queue (1 to 1 relationship) */
> >  	struct drm_sched_entity *entity;
> > +
> > +#define XE_MAX_JOB_COUNT_PER_EXEC_QUEUE	0x1000
> > +
> 
> I'd lose newline here
> 
> > +	/** @job_cnt: number of drm jobs in this exec queue */
> > +	u32 job_cnt;
> 
> And add one here.
> 
> Matt
> 
> >  	/**
> >  	 * @tlb_flush_seqno: The seqno of the last rebind tlb flush performed
> >  	 * Protected by @vm's resv. Unused if @vm == NULL.
> > diff --git a/drivers/gpu/drm/xe/xe_sched_job.c b/drivers/gpu/drm/xe/xe_sched_job.c
> > index d21bf8f26964..37f58be7cbcc 100644
> > --- a/drivers/gpu/drm/xe/xe_sched_job.c
> > +++ b/drivers/gpu/drm/xe/xe_sched_job.c
> > @@ -146,6 +146,7 @@ struct xe_sched_job *xe_sched_job_create(struct xe_exec_queue *q,
> >  	for (i = 0; i < width; ++i)
> >  		job->ptrs[i].batch_addr = batch_addr[i];
> >  
> > +	q->job_cnt++;
> >  	xe_pm_runtime_get_noresume(job_to_xe(job));
> >  	trace_xe_sched_job_create(job);
> >  	return job;
> > @@ -177,6 +178,7 @@ void xe_sched_job_destroy(struct kref *ref)
> >  	dma_fence_put(job->fence);
> >  	drm_sched_job_cleanup(&job->drm);
> >  	job_free(job);
> > +	q->job_cnt--;
> >  	xe_exec_queue_put(q);
> >  	xe_pm_runtime_put(xe);
> >  }
> > -- 
> > 2.49.0
> > 


Thread overview: 19+ messages
2025-10-22 18:10 [PATCH] drm/xe: Limit number of jobs per exec queue Shuicheng Lin
2025-10-22 18:48 ` Matthew Brost
2025-10-23 15:51   ` Lin, Shuicheng
2025-10-23 19:05     ` Matthew Brost
2025-10-23  1:04 ` ✓ CI.KUnit: success for " Patchwork
2025-10-23  1:43 ` ✗ Xe.CI.BAT: failure " Patchwork
2025-10-23  8:26 ` ✗ Xe.CI.Full: " Patchwork
2025-10-25 18:10 ` [PATCH v2] " Shuicheng Lin
2025-10-25 18:20 ` ✓ CI.KUnit: success for drm/xe: Limit number of jobs per exec queue (rev2) Patchwork
2025-10-25 18:59 ` ✓ Xe.CI.BAT: " Patchwork
2025-10-25 20:09 ` ✗ Xe.CI.Full: failure " Patchwork
2025-10-27 16:33 ` [PATCH] drm/xe: Limit number of jobs per exec queue Matthew Brost
2025-10-27 19:53   ` Matthew Brost [this message]
2025-10-27 20:21 ` [PATCH v3] " Shuicheng Lin
2025-10-27 21:47   ` Matthew Brost
2025-10-29  1:48   ` Matthew Brost
2025-10-27 21:14 ` ✓ CI.KUnit: success for drm/xe: Limit number of jobs per exec queue (rev3) Patchwork
2025-10-27 21:52 ` ✓ Xe.CI.BAT: " Patchwork
2025-10-28  4:39 ` ✓ Xe.CI.Full: " Patchwork
