From: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
To: "Maíra Canal" <mcanal@igalia.com>, dri-devel@lists.freedesktop.org
Cc: kernel-dev@igalia.com, intel-xe@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	"Christian König" <christian.koenig@amd.com>,
	"Danilo Krummrich" <dakr@kernel.org>,
	"Matthew Brost" <matthew.brost@intel.com>,
	"Philipp Stanner" <phasta@kernel.org>
Subject: Re: [PATCH] drm/sched: Avoid double re-lock on the job free path
Date: Wed, 16 Jul 2025 15:46:44 +0100
Message-ID: <b5ff1fba-0e2c-4d02-8b9d-49c3c313e65d@igalia.com>
In-Reply-To: <f535c0bf-225a-40c9-b6a1-5bfbb5ebec0d@igalia.com>


On 16/07/2025 15:30, Maíra Canal wrote:
> Hi Tvrtko,
> 
> On 16/07/25 10:49, Tvrtko Ursulin wrote:
>>
>> On 16/07/2025 14:31, Maíra Canal wrote:
>>> Hi Tvrtko,
>>>
>>> On 16/07/25 05:51, Tvrtko Ursulin wrote:
>>>> Currently the job free work item locks sched->job_list_lock a first
>>>> time to see if there are any jobs, frees a single job, and then locks
>>>> again to decide whether to re-queue itself if there are more finished
>>>> jobs.
>>>>
>>>> Since drm_sched_get_finished_job() already looks at the second job in
>>>> the queue, we can simply add the signaled check and have it return the
>>>> presence of more jobs to be freed to the caller. That way the work item
>>>> does not have to lock the list again and repeat the signaled check.
>>>>
>>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
>>>> Cc: Christian König <christian.koenig@amd.com>
>>>> Cc: Danilo Krummrich <dakr@kernel.org>
>>>> Cc: Maíra Canal <mcanal@igalia.com>
>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>> Cc: Philipp Stanner <phasta@kernel.org>
>>>> ---
>>>> v2:
>>>>   * Improve commit text and kerneldoc. (Philipp)
>>>>   * Rename run free work helper. (Philipp)
>>>>
>>>> v3:
>>>>   * Rebase on top of Maira's changes.
>>>> ---
>>>>   drivers/gpu/drm/scheduler/sched_main.c | 53 ++++++++++----------------
>>>>   1 file changed, 21 insertions(+), 32 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
>>>> index e2cda28a1af4..5a550fd76bf0 100644
>>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>>> @@ -349,34 +349,13 @@ static void drm_sched_run_job_queue(struct drm_gpu_scheduler *sched)
>>>>   }
>>>>   /**
>>>> - * __drm_sched_run_free_queue - enqueue free-job work
>>>> - * @sched: scheduler instance
>>>> - */
>>>> -static void __drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
>>>> -{
>>>> -    if (!READ_ONCE(sched->pause_submit))
>>>> -        queue_work(sched->submit_wq, &sched->work_free_job);
>>>> -}
>>>> -
>>>> -/**
>>>> - * drm_sched_run_free_queue - enqueue free-job work if ready
>>>> + * drm_sched_run_free_queue - enqueue free-job work
>>>>    * @sched: scheduler instance
>>>>    */
>>>>   static void drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
>>>>   {
>>>> -    struct drm_sched_job *job;
>>>> -
>>>> -    job = list_first_entry_or_null(&sched->pending_list,
>>>> -                       struct drm_sched_job, list);
>>>> -    if (job && dma_fence_is_signaled(&job->s_fence->finished))
>>>> -        __drm_sched_run_free_queue(sched);
>>>
>>> I believe we'd still need this chunk for DRM_GPU_SCHED_STAT_NO_HANG
>>> (check the comment in drm_sched_job_reinsert_on_false_timeout()). How
>>
>> You mean the "is there a signaled job in the list" check is needed for
>> drm_sched_job_reinsert_on_false_timeout()? Hmm, why? Worst case is a
>> false positive wakeup on the free worker, no?
> 
> Correct me if I'm mistaken, but wouldn't we also have a false positive
> wake-up on the run_job worker? I believe that could be problematic in
> cases where we skipped the reset because the job is still running.

The run_job worker exits when it sees no free credits, so I don't think
there is a problem. What am I missing?
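
To illustrate with a condensed sketch (not the literal sched_main.c code;
drm_sched_select_entity() is the real helper from the diff below, the
wrapper name and flow are simplified for illustration):

	static void run_job_work_sketch(struct drm_gpu_scheduler *sched)
	{
		struct drm_sched_entity *entity;

		/*
		 * drm_sched_select_entity() returns NULL when nothing is
		 * runnable, for example when there are no free credits.
		 */
		entity = drm_sched_select_entity(sched);
		if (!entity)
			return; /* spurious wakeup exits with no side effects */

		/* ... otherwise pop a job from the entity and run it ... */
	}

So the worst case for a false positive wakeup is one wasted work item
execution.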

Regards,

Tvrtko

>>> about only deleting drm_sched_run_free_queue_unlocked() and keeping
>>> __drm_sched_run_free_queue()?
>>
>> You mean use __drm_sched_run_free_queue() from 
>> drm_sched_job_reinsert_on_false_timeout()? That is the same as 
>> drm_sched_run_free_queue() with this patch.
> 
> Sorry, I believe I didn't express myself clearly. I meant using
> __drm_sched_run_free_queue() in drm_sched_free_job_work() and keeping
> drm_sched_run_free_queue() in drm_sched_job_reinsert_on_false_timeout().
> 
> Best Regards,
> - Maíra
> 
>>
>> Regards,
>>
>> Tvrtko
>>
>>>> -}
>>>> -
>>>> -static void drm_sched_run_free_queue_unlocked(struct drm_gpu_scheduler *sched)
>>>> -{
>>>> -    spin_lock(&sched->job_list_lock);
>>>> -    drm_sched_run_free_queue(sched);
>>>> -    spin_unlock(&sched->job_list_lock);
>>>> +    if (!READ_ONCE(sched->pause_submit))
>>>> +        queue_work(sched->submit_wq, &sched->work_free_job);
>>>>   }
>>>>   /**
>>>> @@ -398,7 +377,7 @@ static void drm_sched_job_done(struct drm_sched_job *s_job, int result)
>>>>       dma_fence_get(&s_fence->finished);
>>>>       drm_sched_fence_finished(s_fence, result);
>>>>       dma_fence_put(&s_fence->finished);
>>>> -    __drm_sched_run_free_queue(sched);
>>>> +    drm_sched_run_free_queue(sched);
>>>>   }
>>>>   /**
>>>> @@ -1134,12 +1113,16 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
>>>>    * drm_sched_get_finished_job - fetch the next finished job to be destroyed
>>>>    *
>>>>    * @sched: scheduler instance
>>>> + * @have_more: are there more finished jobs on the list
>>>> + *
>>>> + * Informs the caller through @have_more whether there are more finished jobs
>>>> + * besides the returned one.
>>>>    *
>>>>    * Returns the next finished job from the pending list (if there is one)
>>>>    * ready for it to be destroyed.
>>>>    */
>>>>   static struct drm_sched_job *
>>>> -drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
>>>> +drm_sched_get_finished_job(struct drm_gpu_scheduler *sched, bool *have_more)
>>>>   {
>>>>       struct drm_sched_job *job, *next;
>>>> @@ -1147,22 +1130,25 @@ drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
>>>>       job = list_first_entry_or_null(&sched->pending_list,
>>>>                          struct drm_sched_job, list);
>>>> -
>>>>       if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
>>>>           /* remove job from pending_list */
>>>>           list_del_init(&job->list);
>>>>           /* cancel this job's TO timer */
>>>>           cancel_delayed_work(&sched->work_tdr);
>>>> -        /* make the scheduled timestamp more accurate */
>>>> +
>>>> +        *have_more = false;
>>>>           next = list_first_entry_or_null(&sched->pending_list,
>>>>                           typeof(*next), list);
>>>> -
>>>>           if (next) {
>>>> +            /* make the scheduled timestamp more accurate */
>>>>               if (test_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT,
>>>>                        &next->s_fence->scheduled.flags))
>>>>                   next->s_fence->scheduled.timestamp =
>>>>                       dma_fence_timestamp(&job->s_fence->finished);
>>>> +
>>>> +            *have_more = dma_fence_is_signaled(&next->s_fence->finished);
>>>> +
>>>>               /* start TO timer for next job */
>>>>               drm_sched_start_timeout(sched);
>>>>           }
>>>> @@ -1221,12 +1207,15 @@ static void drm_sched_free_job_work(struct work_struct *w)
>>>>       struct drm_gpu_scheduler *sched =
>>>>           container_of(w, struct drm_gpu_scheduler, work_free_job);
>>>>       struct drm_sched_job *job;
>>>> +    bool have_more;
>>>> -    job = drm_sched_get_finished_job(sched);
>>>> -    if (job)
>>>> +    job = drm_sched_get_finished_job(sched, &have_more);
>>>> +    if (job) {
>>>>           sched->ops->free_job(job);
>>>> +        if (have_more)
>>>> +            drm_sched_run_free_queue(sched);
>>>> +    }
>>>> -    drm_sched_run_free_queue_unlocked(sched);
>>>>       drm_sched_run_job_queue(sched);
>>>>   }
>>>
>>
> 
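
In summary, the patch changes the free worker's flow roughly as follows
(a condensed before/after sketch assembled from the diff above, not the
verbatim upstream code):

	/* Before: job_list_lock is taken twice per freed job. */
	job = drm_sched_get_finished_job(sched);	/* locks job_list_lock */
	if (job)
		sched->ops->free_job(job);
	drm_sched_run_free_queue_unlocked(sched);	/* locks it again */

	/* After: the lock is taken once; the existing peek at the next job
	 * also answers whether the worker needs to re-queue itself. */
	job = drm_sched_get_finished_job(sched, &have_more);
	if (job) {
		sched->ops->free_job(job);
		if (have_more)
			drm_sched_run_free_queue(sched);	/* plain queue_work() */
	}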

