From: Philipp Stanner <phasta@mailbox.org>
To: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>,
dri-devel@lists.freedesktop.org
Cc: intel-xe@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
kernel-dev@igalia.com,
"Christian König" <christian.koenig@amd.com>,
"Danilo Krummrich" <dakr@kernel.org>,
"Matthew Brost" <matthew.brost@intel.com>,
"Philipp Stanner" <phasta@kernel.org>
Subject: Re: [PATCH v6 03/15] drm/sched: Avoid double re-lock on the job free path
Date: Tue, 08 Jul 2025 13:22:48 +0200
Message-ID: <1ac53305b99569707a828e8d972f23c40722dd56.camel@mailbox.org>
In-Reply-To: <20250708095147.73366-4-tvrtko.ursulin@igalia.com>
On Tue, 2025-07-08 at 10:51 +0100, Tvrtko Ursulin wrote:
> Currently the job free work item will lock sched->job_list_lock first time
> to see if there are any jobs, free a single job, and then lock again to
> decide whether to re-queue itself if there are more finished jobs.
>
> Since drm_sched_get_finished_job() already looks at the second job in the
> queue we can simply add the signaled check and have it return the presence
> of more jobs to free to the caller. That way the work item does not have
> to lock the list again and repeat the signaled check.
>
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Danilo Krummrich <dakr@kernel.org>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Philipp Stanner <phasta@kernel.org>
This one can be sent separately, like the one for drm_sched_init()
recently, can't it?
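
Unrelated to the above, for anyone skimming the thread: the control-flow
change boils down to roughly this (a simplified sketch based on the commit
message and the hunks below; locking details and the timestamp handling
are elided):

        /* Before: drm_sched_get_finished_job() takes job_list_lock once,
         * then drm_sched_run_free_queue() takes it a second time just to
         * peek whether another finished job is already pending.
         */
        job = drm_sched_get_finished_job(sched);
        if (job)
                sched->ops->free_job(job);
        drm_sched_run_free_queue(sched);

        /* After: the helper reports, under the same lock acquisition,
         * whether the next job on the pending list has signaled, so the
         * worker can re-queue itself without taking the lock again.
         */
        job = drm_sched_get_finished_job(sched, &have_more);
        if (job) {
                sched->ops->free_job(job);
                if (have_more)
                        __drm_sched_run_free_queue(sched);
        }

Note that in both variants free_job() itself runs outside job_list_lock;
only the re-queue check changes.
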
P.
> ---
>  drivers/gpu/drm/scheduler/sched_main.c | 37 ++++++++++----------------
>  1 file changed, 14 insertions(+), 23 deletions(-)
> 
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index 1f077782ec12..1bce0b66f89c 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -366,22 +366,6 @@ static void __drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
>  		queue_work(sched->submit_wq, &sched->work_free_job);
>  }
>  
> -/**
> - * drm_sched_run_free_queue - enqueue free-job work if ready
> - * @sched: scheduler instance
> - */
> -static void drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
> -{
> -	struct drm_sched_job *job;
> -
> -	spin_lock(&sched->job_list_lock);
> -	job = list_first_entry_or_null(&sched->pending_list,
> -				       struct drm_sched_job, list);
> -	if (job && dma_fence_is_signaled(&job->s_fence->finished))
> -		__drm_sched_run_free_queue(sched);
> -	spin_unlock(&sched->job_list_lock);
> -}
> -
>  /**
>   * drm_sched_job_done - complete a job
>   * @s_job: pointer to the job which is done
> @@ -1102,12 +1086,13 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
>   * drm_sched_get_finished_job - fetch the next finished job to be destroyed
>   *
>   * @sched: scheduler instance
> + * @have_more: are there more finished jobs on the list
>   *
>   * Returns the next finished job from the pending list (if there is one)
>   * ready for it to be destroyed.
>   */
>  static struct drm_sched_job *
> -drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
> +drm_sched_get_finished_job(struct drm_gpu_scheduler *sched, bool *have_more)
>  {
>  	struct drm_sched_job *job, *next;
>  
> @@ -1115,22 +1100,25 @@ drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
>  
>  	job = list_first_entry_or_null(&sched->pending_list,
>  				       struct drm_sched_job, list);
> -
>  	if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
>  		/* remove job from pending_list */
>  		list_del_init(&job->list);
>  
>  		/* cancel this job's TO timer */
>  		cancel_delayed_work(&sched->work_tdr);
> -		/* make the scheduled timestamp more accurate */
> +
> +		*have_more = false;
>  		next = list_first_entry_or_null(&sched->pending_list,
>  						typeof(*next), list);
> -
>  		if (next) {
> +			/* make the scheduled timestamp more accurate */
>  			if (test_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT,
>  				     &next->s_fence->scheduled.flags))
>  				next->s_fence->scheduled.timestamp =
>  					dma_fence_timestamp(&job->s_fence->finished);
> +
> +			*have_more = dma_fence_is_signaled(&next->s_fence->finished);
> +
>  			/* start TO timer for next job */
>  			drm_sched_start_timeout(sched);
>  		}
> @@ -1189,12 +1177,15 @@ static void drm_sched_free_job_work(struct work_struct *w)
>  	struct drm_gpu_scheduler *sched =
>  		container_of(w, struct drm_gpu_scheduler, work_free_job);
>  	struct drm_sched_job *job;
> +	bool have_more;
>  
> -	job = drm_sched_get_finished_job(sched);
> -	if (job)
> +	job = drm_sched_get_finished_job(sched, &have_more);
> +	if (job) {
>  		sched->ops->free_job(job);
> +		if (have_more)
> +			__drm_sched_run_free_queue(sched);
> +	}
>  
> -	drm_sched_run_free_queue(sched);
>  	drm_sched_run_job_queue(sched);
>  }
> 