From: Boris Brezillon <boris.brezillon@collabora.com>
To: Chia-I Wu <olvaffe@gmail.com>
Cc: "ML dri-devel" <dri-devel@lists.freedesktop.org>,
intel-xe@lists.freedesktop.org,
"Steven Price" <steven.price@arm.com>,
"Liviu Dudau" <liviu.dudau@arm.com>,
"Maarten Lankhorst" <maarten.lankhorst@linux.intel.com>,
"Maxime Ripard" <mripard@kernel.org>,
"Thomas Zimmermann" <tzimmermann@suse.de>,
"David Airlie" <airlied@gmail.com>,
"Simona Vetter" <simona@ffwll.ch>,
"Matthew Brost" <matthew.brost@intel.com>,
"Danilo Krummrich" <dakr@kernel.org>,
"Philipp Stanner" <phasta@kernel.org>,
"Christian König" <ckoenig.leichtzumerken@gmail.com>,
"Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
"Rodrigo Vivi" <rodrigo.vivi@intel.com>,
"open list" <linux-kernel@vger.kernel.org>
Subject: Re: drm_sched run_job and scheduling latency
Date: Thu, 5 Mar 2026 10:23:23 +0100
Message-ID: <20260305102323.11b07502@fedora>
In-Reply-To: <CAPaKu7RbCtkz1BbX57+CebB2uepyCAi-3QzBy8BDGngCJ-Du0w@mail.gmail.com>
On Wed, 4 Mar 2026 14:51:39 -0800
Chia-I Wu <olvaffe@gmail.com> wrote:
> Hi,
>
> Our system compositor (surfaceflinger on android) submits gpu jobs
> from a SCHED_FIFO thread to an RT gpu queue. However, because
> workqueue threads are SCHED_NORMAL, the scheduling latency from submit
> to run_job can sometimes cause frame misses. We are seeing this on
> panthor and xe, but the issue should be common to all drm_sched users.
>
> Using a WQ_HIGHPRI workqueue helps, but it is still not RT (and won't
> meet future android requirements). It seems either workqueue needs to
> gain RT support, or drm_sched needs to support kthread_worker.
>
> I know drm_sched switched from kthread_worker to workqueue for better
> scaling when xe was introduced.
Actually, it went from a plain kthread with open-coded "work" support to
workqueues. The kthread_worker+kthread_work model looks a lot closer to what
workqueues provide, so transitioning drivers to it shouldn't be too hard. The
scalability issue you mentioned (one thread per GPU context doesn't scale)
doesn't apply, because we can easily share a single kthread_worker across all
drm_gpu_scheduler instances, just like we share a single workqueue across all
of them today.
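To give an idea, here's a rough, untested sketch of that shared-worker model
(all the example_* names are made up, and drm_sched has no kthread_worker
backend today, so this is only a sketch of the idea):

#include <linux/err.h>
#include <linux/kthread.h>

/* Hypothetical device/scheduler wrappers, for illustration only. */
struct example_device {
        struct kthread_worker *submit_worker;
};

struct example_sched {
        struct example_device *edev;
        struct kthread_work submit_work;
};

static void example_submit_work_fn(struct kthread_work *work)
{
        /* The run_job()-style submission path would live here. */
}

/* One kthread_worker shared by every drm_gpu_scheduler of the device, much
 * like a shared workqueue today. The worker creation helpers changed a bit
 * in recent kernels, so double-check the exact API on your target tree.
 */
static int example_init_shared_submit_worker(struct example_device *edev)
{
        struct kthread_worker *worker;

        worker = kthread_create_worker(0, "gpu-submit");
        if (IS_ERR(worker))
                return PTR_ERR(worker);

        edev->submit_worker = worker;
        return 0;
}

/* work_struct becomes kthread_work, INIT_WORK() becomes kthread_init_work(). */
static void example_sched_init(struct example_sched *sched,
                               struct example_device *edev)
{
        sched->edev = edev;
        kthread_init_work(&sched->submit_work, example_submit_work_fn);
}

/* ...and queue_work() becomes kthread_queue_work() on the shared worker. */
static void example_sched_kick(struct example_sched *sched)
{
        kthread_queue_work(sched->edev->submit_worker, &sched->submit_work);
}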
Luckily, it seems no one has been using WQ_PERCPU workqueues so far, so
that's one less thing we need to worry about.
The last remaining drawback of a kthread_work[er]-based solution is that
workqueues can adjust the number of worker threads on demand, based on load.
If we really need that flexibility (a non-static number of threads per
priority level, per driver), it's something we'll have to add support for.
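For comparison, that flexibility is what you get for free from a shared
unbound workqueue today, e.g. (illustration only):

#include <linux/workqueue.h>

/* The workqueue core sizes the kworker pool on demand. WQ_HIGHPRI runs the
 * kworkers at a high nice level, but there's no flag to make them SCHED_FIFO,
 * which is precisely the gap being discussed in this thread.
 */
static struct workqueue_struct *example_alloc_submit_wq(void)
{
        return alloc_workqueue("gpu-submit", WQ_UNBOUND | WQ_HIGHPRI, 0);
}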
For Panthor, the way I see it, we could start with one thread per group
priority and pick the worker thread to use at drm_sched_init() time based on
the group priority. If we ever need a thread pool, drm_sched will have to
know about those threads and do some load balancing when queueing work
items...
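Something like this, roughly (hypothetical names again, untested, and with
error unwinding omitted):

#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/sched.h>

/* One shared kthread_worker per group priority, created at probe time; only
 * the RT one is promoted to SCHED_FIFO. A group's drm_gpu_scheduler would
 * then simply use ptdev->submit_workers[group_prio]. The enum, struct and
 * field names below are made up.
 */
enum example_group_prio {
        EXAMPLE_PRIO_LOW,
        EXAMPLE_PRIO_MEDIUM,
        EXAMPLE_PRIO_HIGH,
        EXAMPLE_PRIO_RT,
        EXAMPLE_PRIO_COUNT,
};

struct example_ptdev {
        struct kthread_worker *submit_workers[EXAMPLE_PRIO_COUNT];
};

static int example_create_submit_workers(struct example_ptdev *ptdev)
{
        unsigned int i;

        for (i = 0; i < EXAMPLE_PRIO_COUNT; i++) {
                struct kthread_worker *worker;

                worker = kthread_create_worker(0, "gpu-submit-%u", i);
                if (IS_ERR(worker))
                        return PTR_ERR(worker);

                /* Only the RT level gets an RT worker thread. */
                if (i == EXAMPLE_PRIO_RT)
                        sched_set_fifo(worker->task);

                ptdev->submit_workers[i] = worker;
        }

        return 0;
}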
Note that someone at Collabora is working on dynamic context priority
support, meaning we'll have to be able to change the drm_gpu_scheduler
kthread_worker at runtime.
TL;DR: all of this is doable, but it's more work (for us, DRM devs) than
asking for RT priority support to be added to workqueues.
> But if drm_sched can support either
> workqueue or kthread_worker during drm_sched_init, drivers can
> selectively use kthread_worker only for RT gpu queues. And because
> drivers require CAP_SYS_NICE for RT gpu queues, this should not cause
> scaling issues.
I think, whatever we choose to go for, we probably don't want to keep
both models around, because that's going to be a pain to maintain.