From: Matthew Brost <matthew.brost@intel.com>
To: "Lin, Shuicheng" <shuicheng.lin@intel.com>
Cc: "intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>,
"Summers, Stuart" <stuart.summers@intel.com>
Subject: Re: [PATCH] drm/xe: Limit number of jobs per exec queue
Date: Thu, 23 Oct 2025 12:05:42 -0700
Message-ID: <aPp8hq0uUu6YCpWC@lstrano-desk.jf.intel.com>
In-Reply-To: <DM4PR11MB5456A69535CDE3915CE71A3AEAF0A@DM4PR11MB5456.namprd11.prod.outlook.com>
On Thu, Oct 23, 2025 at 09:51:27AM -0600, Lin, Shuicheng wrote:
> On Wed, Oct 22, 2025 11:48 AM Matthew Brost wrote:
> > On Wed, Oct 22, 2025 at 06:10:37PM +0000, Shuicheng Lin wrote:
> > > Add a limit to the number of jobs that can be queued in a single exec
> > > queue to avoid potential resource exhaustion.
> > >
> > > A new field `job_cnt` is introduced in `struct xe_exec_queue` to track
> > > the number of active DRM jobs, along with a maximum limit
> > > `XE_MAX_JOB_COUNT_PER_EXEC_QUEUE` set to 0x1000.
> > >
> > > If the job count exceeds this threshold, `xe_exec_ioctl()` now returns
> > > `-EAGAIN` to signal that the caller should retry later.
> > >
> > > Suggested-by: Matthew Brost <matthew.brost@intel.com>
> > > Signed-off-by: Shuicheng Lin <shuicheng.lin@intel.com>
> > > ---
> > >  drivers/gpu/drm/xe/xe_exec.c             | 5 +++++
> > >  drivers/gpu/drm/xe/xe_exec_queue_types.h | 5 +++++
> > >  drivers/gpu/drm/xe/xe_sched_job.c        | 2 ++
> > >  3 files changed, 12 insertions(+)
> > >
> > > diff --git a/drivers/gpu/drm/xe/xe_exec.c b/drivers/gpu/drm/xe/xe_exec.c
> > > index 0dc27476832b..722a5ac0200a 100644
> > > --- a/drivers/gpu/drm/xe/xe_exec.c
> > > +++ b/drivers/gpu/drm/xe/xe_exec.c
> > > @@ -154,6 +154,11 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
> > >  		goto err_exec_queue;
> > >  	}
> > >
> > > +	if (q->job_cnt >= XE_MAX_JOB_COUNT_PER_EXEC_QUEUE) {
> > > +		err = -EAGAIN;
> >
> > How about an ftrace point if this occurs? Should help us detect if users
> > somehow hit this during normal operation.
>
> How about using an "xe_err_once" log to detect it? An error log in dmesg is more noticeable.
>
It is not an error. -EAGAIN is a specific return code in DRM that means
"try again." We want to avoid returning -EAGAIN to well-behaved
applications, while still being able to detect it. At the same time, we
want to prevent misbehaving applications from spamming dmesg.
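As a rough illustration, something along these lines is what I have in
mind; the event name and fields below are made up for this sketch, just
modeled on the existing xe_sched_job events in xe_trace.h:

/* Hypothetical event, sketch only -- not an existing Xe tracepoint. */
TRACE_EVENT(xe_exec_queue_job_limit_hit,
	    TP_PROTO(struct xe_exec_queue *q),
	    TP_ARGS(q),

	    TP_STRUCT__entry(
			     __field(u32, job_cnt)
			     ),

	    TP_fast_assign(
			   /* Snapshot the queue depth at the moment the limit trips */
			   __entry->job_cnt = q->job_cnt;
			   ),

	    TP_printk("job_cnt=%u", __entry->job_cnt)
);

The exec IOCTL would then call trace_xe_exec_queue_job_limit_hit(q)
right before setting err = -EAGAIN, so we can observe the condition via
ftrace without touching dmesg. A well-behaved application that does see
-EAGAIN is simply expected to retry once some of its in-flight jobs
complete.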
Matt
> >
> > > +		goto err_exec_queue;
> > > +	}
> > > +
> > >  	if (args->num_syncs) {
> > >  		syncs = kcalloc(args->num_syncs, sizeof(*syncs), GFP_KERNEL);
> > >  		if (!syncs) {
> > > diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> > > index 282505fa1377..5f3219acec3c 100644
> > > --- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
> > > +++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> > > @@ -162,6 +162,11 @@ struct xe_exec_queue {
> > >  	const struct xe_ring_ops *ring_ops;
> > >  	/** @entity: DRM sched entity for this exec queue (1 to 1 relationship) */
> > >  	struct drm_sched_entity *entity;
> > > +
> > > +#define XE_MAX_JOB_COUNT_PER_EXEC_QUEUE 0x1000
> >
> > Hmm, I'm trying to think of a reasonable limit. 4K seems high. Maybe we
> > should check with the UMDs to understand the deepest reasonable pipeline
> > of jobs they build, so we avoid blocking them here while still limiting the
> > DoS surface. My initial thought is that 512 or 1k should be a reasonable
> > limit for now. We can always adjust going forward.
>
> Yes, let me change it to 1k and adjust it later if needed.
>
> >
> > > +
> > > +	/** @job_cnt: number of drm jobs in this exec queue */
> > > +	u32 job_cnt;
> >
> > This needs to be atomic. xe_sched_job_destroy can run in parallel with the
> > exec IOCTL.
>
> Thanks. Yes, it should be atomic.
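For v2, the following is roughly what I'd expect; this is an untested
sketch against this patch, not the final code:

/* xe_exec_queue_types.h -- counter becomes an atomic_t */
	/** @job_cnt: number of drm jobs in this exec queue */
	atomic_t job_cnt;

/* xe_exec.c, in xe_exec_ioctl() -- read without holding any lock */
	if (atomic_read(&q->job_cnt) >= XE_MAX_JOB_COUNT_PER_EXEC_QUEUE) {
		err = -EAGAIN;
		goto err_exec_queue;
	}

/* xe_sched_job.c -- paired increment/decrement over the job lifetime */
	atomic_inc(&q->job_cnt);	/* in xe_sched_job_create() */
	atomic_dec(&q->job_cnt);	/* in xe_sched_job_destroy() */

Note the check and the increment still aren't a single atomic operation,
so the limit stays slightly soft under parallel submissions; that seems
fine for a DoS backstop like this.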
>
> Shuicheng
>
> >
> > Everything else looks correct.
> >
> > Matt
> >
> > >  	/**
> > >  	 * @tlb_flush_seqno: The seqno of the last rebind tlb flush performed
> > >  	 * Protected by @vm's resv. Unused if @vm == NULL.
> > > diff --git a/drivers/gpu/drm/xe/xe_sched_job.c b/drivers/gpu/drm/xe/xe_sched_job.c
> > > index d21bf8f26964..37f58be7cbcc 100644
> > > --- a/drivers/gpu/drm/xe/xe_sched_job.c
> > > +++ b/drivers/gpu/drm/xe/xe_sched_job.c
> > > @@ -146,6 +146,7 @@ struct xe_sched_job *xe_sched_job_create(struct xe_exec_queue *q,
> > >  	for (i = 0; i < width; ++i)
> > >  		job->ptrs[i].batch_addr = batch_addr[i];
> > >
> > > +	q->job_cnt++;
> > >  	xe_pm_runtime_get_noresume(job_to_xe(job));
> > >  	trace_xe_sched_job_create(job);
> > >  	return job;
> > > @@ -177,6 +178,7 @@ void xe_sched_job_destroy(struct kref *ref)
> > >  	dma_fence_put(job->fence);
> > >  	drm_sched_job_cleanup(&job->drm);
> > >  	job_free(job);
> > > +	q->job_cnt--;
> > >  	xe_exec_queue_put(q);
> > >  	xe_pm_runtime_put(xe);
> > >  }
> > > --
> > > 2.49.0
> > >