From: Danilo Krummrich <dakr@kernel.org>
To: "Philipp Stanner" <phasta@kernel.org>,
"Matthew Brost" <matthew.brost@intel.com>,
"Christian König" <ckoenig.leichtzumerken@gmail.com>,
"David Airlie" <airlied@gmail.com>,
"Simona Vetter" <simona@ffwll.ch>,
"Sumit Semwal" <sumit.semwal@linaro.org>,
dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
linux-media@vger.kernel.org
Subject: Re: [PATCH v2] drm/sched: Clarify scenarios for separate workqueues
Date: Tue, 17 Jun 2025 16:10:40 +0200
Message-ID: <aFF3YIAFkgsAKvQV@pollux>
In-Reply-To: <aFFy5aG1eOeMU44S@phenom.ffwll.local>
On Tue, Jun 17, 2025 at 03:51:33PM +0200, Simona Vetter wrote:
> On Thu, Jun 12, 2025 at 04:49:54PM +0200, Philipp Stanner wrote:
> > + * NOTE that sharing &struct drm_sched_init_args.submit_wq with the driver
> > + * theoretically can deadlock. It must be guaranteed that submit_wq never has
> > + * more than max_active - 1 active tasks, or if max_active tasks are reached at
> > + * least one of them does not execute operations that may block on dma_fences
> > + * that potentially make progress through this scheduler instance. Otherwise,
> > + * it is possible that all max_active tasks end up waiting on a dma_fence (that
> > + * can only make progress through this scheduler instance), while the
> > + * scheduler's queued work waits for at least one of the max_active tasks to
> > + * finish. Thus, this can result in a deadlock.
>
> Uh, if you have an ordered wq you deadlock with just one misuse. I'd just
> explain that the wq must provide sufficient forward-progress guarantees
> for the scheduler, specifically that it's on the dma_fence signalling
> critical path, and leave the concrete examples for people to figure out
> when they design a specific locking scheme.
This isn't a concrete example, is it? It's exactly what you say, just in
slightly different words, plus highlighting the impact of the workqueue's
max_active configuration.
I think that's relevant, because N - 1 active tasks can be on the dma_fence
signalling critical path without issues.
We could change
"if max_active tasks are reached at least one of them must not execute
operations that may block on dma_fences that potentially make progress
through this scheduler instance"
to
"if max_active tasks are reached at least one of them must not be on the
dma_fence signalling critical path"
which is a bit more to the point, I think.
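
For illustration, here is a minimal sketch of the ordered-workqueue case
mentioned above. It is entirely hypothetical -- my_device, my_job,
my_sched_ops and my_driver_*() are placeholders, not real driver code --
and just shows a driver sharing a single ordered workqueue (max_active == 1)
between its own work and the scheduler's submit_wq, while one of its work
items waits on a fence of that very scheduler:

#include <drm/gpu_scheduler.h>
#include <linux/dma-fence.h>
#include <linux/sched.h>
#include <linux/workqueue.h>

struct my_device {
        struct device *dev;
        struct drm_gpu_scheduler sched;
        struct workqueue_struct *shared_wq;
};

struct my_job {
        struct work_struct work;
        struct dma_fence *sched_fence;  /* fence of a job on mdev->sched */
};

/* Backend ops elided; run_job(), timedout_job(), etc. would live here. */
static const struct drm_sched_backend_ops my_sched_ops;

/* Driver work item queued on the same workqueue as the scheduler's work. */
static void my_driver_work_fn(struct work_struct *work)
{
        struct my_job *job = container_of(work, struct my_job, work);

        /*
         * Blocks until a job of this scheduler instance completes. On an
         * ordered workqueue (max_active == 1) the scheduler's run_job work
         * is queued behind this item and can never run -> deadlock.
         */
        dma_fence_wait(job->sched_fence, false);
}

/* Submit path: queues driver work on the shared workqueue. */
static void my_driver_submit(struct my_device *mdev, struct my_job *job)
{
        INIT_WORK(&job->work, my_driver_work_fn);
        queue_work(mdev->shared_wq, &job->work);
}

static int my_driver_init(struct my_device *mdev)
{
        struct drm_sched_init_args args = { };

        /* Ordered workqueue: at most one active work item at a time. */
        mdev->shared_wq = alloc_ordered_workqueue("my-shared-wq", 0);
        if (!mdev->shared_wq)
                return -ENOMEM;

        args.ops = &my_sched_ops;
        args.submit_wq = mdev->shared_wq;       /* shared with driver work */
        args.num_rqs = DRM_SCHED_PRIORITY_COUNT;
        args.credit_limit = 1;
        args.timeout = MAX_SCHEDULE_TIMEOUT;
        args.name = "my_sched";
        args.dev = mdev->dev;

        return drm_sched_init(&mdev->sched, &args);
}

With max_active == 1, the dma_fence_wait() in my_driver_work_fn() occupies
the only active slot, so the scheduler's queued run_job work never executes
and the fence never signals -- exactly the scenario the NOTE describes.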