* [PATCH] drm/sched: Avoid double re-lock on the job free path
@ 2025-01-14 10:59 Tvrtko Ursulin
0 siblings, 0 replies; 19+ messages in thread
From: Tvrtko Ursulin @ 2025-01-14 10:59 UTC (permalink / raw)
To: dri-devel
Cc: kernel-dev, Tvrtko Ursulin, Christian König,
Danilo Krummrich, Matthew Brost, Philipp Stanner
Currently the job free work item will lock sched->job_list_lock first time
to see if there are any jobs, free a single job, and then lock again to
decide whether to re-queue itself if there are more finished jobs.
Since drm_sched_get_finished_job() already looks at the second job in
the queue we can simply make it return its presence to the caller.
That way the caller does not need to lock the list again only to peek into
the same job.
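For illustration only, here is a minimal standalone sketch of the locking
pattern in question, using plain pthreads and a made-up toy job list rather
than the actual scheduler code (struct toy_sched, pop_finished_job() and
friends are invented names; the mutex stands in for sched->job_list_lock and
the signaled flag for the finished-fence check):

/* Standalone model of the free-job locking pattern; not the DRM scheduler code. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in for a pending job; "signaled" plays the role of the finished fence. */
struct toy_job {
	struct toy_job *next;
	bool signaled;
};

struct toy_sched {
	pthread_mutex_t lock;		/* stands in for sched->job_list_lock */
	struct toy_job *pending;	/* stands in for sched->pending_list */
};

/* Old shape, step 1: pop one finished job under the lock. */
static struct toy_job *pop_finished_job(struct toy_sched *s)
{
	struct toy_job *job = NULL;

	pthread_mutex_lock(&s->lock);
	if (s->pending && s->pending->signaled) {
		job = s->pending;
		s->pending = job->next;
	}
	pthread_mutex_unlock(&s->lock);

	return job;
}

/* Old shape, step 2: take the lock a second time just to peek at the new head. */
static bool more_finished_jobs(struct toy_sched *s)
{
	bool more;

	pthread_mutex_lock(&s->lock);
	more = s->pending && s->pending->signaled;
	pthread_mutex_unlock(&s->lock);

	return more;
}

/* New shape: do the peek while still holding the lock taken for the pop. */
static struct toy_job *pop_finished_job_peek(struct toy_sched *s, bool *have_more)
{
	struct toy_job *job = NULL;

	*have_more = false;

	pthread_mutex_lock(&s->lock);
	if (s->pending && s->pending->signaled) {
		job = s->pending;
		s->pending = job->next;
		*have_more = s->pending && s->pending->signaled;
	}
	pthread_mutex_unlock(&s->lock);

	return job;
}

int main(void)
{
	struct toy_job jobs[3] = {
		{ .next = &jobs[1], .signaled = true  },
		{ .next = &jobs[2], .signaled = true  },
		{ .next = NULL,     .signaled = false },
	};
	struct toy_sched s = { .pending = &jobs[0] };
	bool have_more;

	pthread_mutex_init(&s.lock, NULL);

	/* Old flow: two lock round trips per "freed" job. */
	while (pop_finished_job(&s) && more_finished_jobs(&s))
		; /* keep going while the second lock says there is more */

	/* Reset the toy list (pops only advance the head pointer). */
	s.pending = &jobs[0];

	/* New flow: one lock round trip, the re-queue answer comes back with the job. */
	while (pop_finished_job_peek(&s, &have_more)) {
		printf("freed one job, more ready to free: %s\n",
		       have_more ? "yes" : "no");
		if (!have_more)
			break;
	}

	return 0;
}

It builds with cc -pthread; the point is only the lock traffic. In the real
patch the equivalent peek happens inside drm_sched_get_finished_job() while
sched->job_list_lock is already held, and the answer is handed back through
the new have_more argument.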
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Danilo Krummrich <dakr@redhat.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Philipp Stanner <pstanner@redhat.com>
---
drivers/gpu/drm/scheduler/sched_main.c | 32 +++++++++-----------------
1 file changed, 11 insertions(+), 21 deletions(-)
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 2d3d71e053a6..363e9f272a1b 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -383,22 +383,6 @@ static void __drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
queue_work(sched->submit_wq, &sched->work_free_job);
}
-/**
- * drm_sched_run_free_queue - enqueue free-job work if ready
- * @sched: scheduler instance
- */
-static void drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
-{
- struct drm_sched_job *job;
-
- spin_lock(&sched->job_list_lock);
- job = list_first_entry_or_null(&sched->pending_list,
- struct drm_sched_job, list);
- if (job && dma_fence_is_signaled(&job->s_fence->finished))
- __drm_sched_run_free_queue(sched);
- spin_unlock(&sched->job_list_lock);
-}
-
/**
* drm_sched_job_done - complete a job
* @s_job: pointer to the job which is done
@@ -1078,12 +1062,13 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
* drm_sched_get_finished_job - fetch the next finished job to be destroyed
*
* @sched: scheduler instance
+ * @have_more: are there more finished jobs on the list
*
* Returns the next finished job from the pending list (if there is one)
* ready for it to be destroyed.
*/
static struct drm_sched_job *
-drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
+drm_sched_get_finished_job(struct drm_gpu_scheduler *sched, bool *have_more)
{
struct drm_sched_job *job, *next;
@@ -1101,14 +1086,16 @@ drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
/* make the scheduled timestamp more accurate */
next = list_first_entry_or_null(&sched->pending_list,
typeof(*next), list);
-
if (next) {
+ *have_more = dma_fence_is_signaled(&next->s_fence->finished);
if (test_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT,
&next->s_fence->scheduled.flags))
next->s_fence->scheduled.timestamp =
dma_fence_timestamp(&job->s_fence->finished);
/* start TO timer for next job */
drm_sched_start_timeout(sched);
+ } else {
+ *have_more = false;
}
} else {
job = NULL;
@@ -1165,12 +1152,15 @@ static void drm_sched_free_job_work(struct work_struct *w)
struct drm_gpu_scheduler *sched =
container_of(w, struct drm_gpu_scheduler, work_free_job);
struct drm_sched_job *job;
+ bool have_more;
- job = drm_sched_get_finished_job(sched);
- if (job)
+ job = drm_sched_get_finished_job(sched, &have_more);
+ if (job) {
sched->ops->free_job(job);
+ if (have_more)
+ __drm_sched_run_free_queue(sched);
+ }
- drm_sched_run_free_queue(sched);
drm_sched_run_job_queue(sched);
}
--
2.47.1
* [PATCH] drm/sched: Avoid double re-lock on the job free path
@ 2025-07-08 12:20 Tvrtko Ursulin
2025-07-09 4:45 ` Matthew Brost
2025-07-11 13:04 ` Philipp Stanner
0 siblings, 2 replies; 19+ messages in thread
From: Tvrtko Ursulin @ 2025-07-08 12:20 UTC (permalink / raw)
To: dri-devel
Cc: intel-xe, amd-gfx, kernel-dev, Tvrtko Ursulin,
Christian König, Danilo Krummrich, Matthew Brost,
Philipp Stanner
Currently the job free work item will lock sched->job_list_lock first time
to see if there are any jobs, free a single job, and then lock again to
decide whether to re-queue itself if there are more finished jobs.
Since drm_sched_get_finished_job() already looks at the second job in the
queue we can simply add the signaled check and have it return the presence
of more jobs to free to the caller. That way the work item does not have
to lock the list again and repeat the signaled check.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Danilo Krummrich <dakr@kernel.org>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Philipp Stanner <phasta@kernel.org>
---
drivers/gpu/drm/scheduler/sched_main.c | 37 ++++++++++----------------
1 file changed, 14 insertions(+), 23 deletions(-)
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 1f077782ec12..1bce0b66f89c 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -366,22 +366,6 @@ static void __drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
queue_work(sched->submit_wq, &sched->work_free_job);
}
-/**
- * drm_sched_run_free_queue - enqueue free-job work if ready
- * @sched: scheduler instance
- */
-static void drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
-{
- struct drm_sched_job *job;
-
- spin_lock(&sched->job_list_lock);
- job = list_first_entry_or_null(&sched->pending_list,
- struct drm_sched_job, list);
- if (job && dma_fence_is_signaled(&job->s_fence->finished))
- __drm_sched_run_free_queue(sched);
- spin_unlock(&sched->job_list_lock);
-}
-
/**
* drm_sched_job_done - complete a job
* @s_job: pointer to the job which is done
@@ -1102,12 +1086,13 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
* drm_sched_get_finished_job - fetch the next finished job to be destroyed
*
* @sched: scheduler instance
+ * @have_more: are there more finished jobs on the list
*
* Returns the next finished job from the pending list (if there is one)
* ready for it to be destroyed.
*/
static struct drm_sched_job *
-drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
+drm_sched_get_finished_job(struct drm_gpu_scheduler *sched, bool *have_more)
{
struct drm_sched_job *job, *next;
@@ -1115,22 +1100,25 @@ drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
job = list_first_entry_or_null(&sched->pending_list,
struct drm_sched_job, list);
-
if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
/* remove job from pending_list */
list_del_init(&job->list);
/* cancel this job's TO timer */
cancel_delayed_work(&sched->work_tdr);
- /* make the scheduled timestamp more accurate */
+
+ *have_more = false;
next = list_first_entry_or_null(&sched->pending_list,
typeof(*next), list);
-
if (next) {
+ /* make the scheduled timestamp more accurate */
if (test_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT,
&next->s_fence->scheduled.flags))
next->s_fence->scheduled.timestamp =
dma_fence_timestamp(&job->s_fence->finished);
+
+ *have_more = dma_fence_is_signaled(&next->s_fence->finished);
+
/* start TO timer for next job */
drm_sched_start_timeout(sched);
}
@@ -1189,12 +1177,15 @@ static void drm_sched_free_job_work(struct work_struct *w)
struct drm_gpu_scheduler *sched =
container_of(w, struct drm_gpu_scheduler, work_free_job);
struct drm_sched_job *job;
+ bool have_more;
- job = drm_sched_get_finished_job(sched);
- if (job)
+ job = drm_sched_get_finished_job(sched, &have_more);
+ if (job) {
sched->ops->free_job(job);
+ if (have_more)
+ __drm_sched_run_free_queue(sched);
+ }
- drm_sched_run_free_queue(sched);
drm_sched_run_job_queue(sched);
}
--
2.48.0
* Re: [PATCH] drm/sched: Avoid double re-lock on the job free path
2025-07-08 12:20 [PATCH] drm/sched: Avoid double re-lock on the job free path Tvrtko Ursulin
@ 2025-07-09 4:45 ` Matthew Brost
2025-07-09 10:49 ` Tvrtko Ursulin
2025-07-11 13:04 ` Philipp Stanner
1 sibling, 1 reply; 19+ messages in thread
From: Matthew Brost @ 2025-07-09 4:45 UTC (permalink / raw)
To: Tvrtko Ursulin
Cc: dri-devel, intel-xe, amd-gfx, kernel-dev, Christian König,
Danilo Krummrich, Philipp Stanner
On Tue, Jul 08, 2025 at 01:20:32PM +0100, Tvrtko Ursulin wrote:
> Currently the job free work item will lock sched->job_list_lock first time
> to see if there are any jobs, free a single job, and then lock again to
> decide whether to re-queue itself if there are more finished jobs.
>
> Since drm_sched_get_finished_job() already looks at the second job in the
> queue we can simply add the signaled check and have it return the presence
> of more jobs to free to the caller. That way the work item does not have
> to lock the list again and repeat the signaled check.
>
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Danilo Krummrich <dakr@kernel.org>
> Cc: Matthew Brost <matthew.brost@intel.com>
The patch looks correct, but we do have a CI failure in a section which
stresses scheduling. It's probably just noise, though. Do you have Intel
hardware? I suggest running xe_exec_threads in a loop on either LNL or
BMG and seeing if the CI failure pops up. If you don't have hardware, let me
know and I can do this.
With a bit of investigation in the CI failure:
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> Cc: Philipp Stanner <phasta@kernel.org>
> ---
> drivers/gpu/drm/scheduler/sched_main.c | 37 ++++++++++----------------
> 1 file changed, 14 insertions(+), 23 deletions(-)
>
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index 1f077782ec12..1bce0b66f89c 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -366,22 +366,6 @@ static void __drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
> queue_work(sched->submit_wq, &sched->work_free_job);
> }
>
> -/**
> - * drm_sched_run_free_queue - enqueue free-job work if ready
> - * @sched: scheduler instance
> - */
> -static void drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
> -{
> - struct drm_sched_job *job;
> -
> - spin_lock(&sched->job_list_lock);
> - job = list_first_entry_or_null(&sched->pending_list,
> - struct drm_sched_job, list);
> - if (job && dma_fence_is_signaled(&job->s_fence->finished))
> - __drm_sched_run_free_queue(sched);
> - spin_unlock(&sched->job_list_lock);
> -}
> -
> /**
> * drm_sched_job_done - complete a job
> * @s_job: pointer to the job which is done
> @@ -1102,12 +1086,13 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
> * drm_sched_get_finished_job - fetch the next finished job to be destroyed
> *
> * @sched: scheduler instance
> + * @have_more: are there more finished jobs on the list
> *
> * Returns the next finished job from the pending list (if there is one)
> * ready for it to be destroyed.
> */
> static struct drm_sched_job *
> -drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
> +drm_sched_get_finished_job(struct drm_gpu_scheduler *sched, bool *have_more)
> {
> struct drm_sched_job *job, *next;
>
> @@ -1115,22 +1100,25 @@ drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
>
> job = list_first_entry_or_null(&sched->pending_list,
> struct drm_sched_job, list);
> -
> if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
> /* remove job from pending_list */
> list_del_init(&job->list);
>
> /* cancel this job's TO timer */
> cancel_delayed_work(&sched->work_tdr);
> - /* make the scheduled timestamp more accurate */
> +
> + *have_more = false;
> next = list_first_entry_or_null(&sched->pending_list,
> typeof(*next), list);
> -
> if (next) {
> + /* make the scheduled timestamp more accurate */
> if (test_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT,
> &next->s_fence->scheduled.flags))
> next->s_fence->scheduled.timestamp =
> dma_fence_timestamp(&job->s_fence->finished);
> +
> + *have_more = dma_fence_is_signaled(&next->s_fence->finished);
> +
> /* start TO timer for next job */
> drm_sched_start_timeout(sched);
> }
> @@ -1189,12 +1177,15 @@ static void drm_sched_free_job_work(struct work_struct *w)
> struct drm_gpu_scheduler *sched =
> container_of(w, struct drm_gpu_scheduler, work_free_job);
> struct drm_sched_job *job;
> + bool have_more;
>
> - job = drm_sched_get_finished_job(sched);
> - if (job)
> + job = drm_sched_get_finished_job(sched, &have_more);
> + if (job) {
> sched->ops->free_job(job);
> + if (have_more)
> + __drm_sched_run_free_queue(sched);
> + }
>
> - drm_sched_run_free_queue(sched);
> drm_sched_run_job_queue(sched);
> }
>
> --
> 2.48.0
>
* Re: [PATCH] drm/sched: Avoid double re-lock on the job free path
2025-07-09 4:45 ` Matthew Brost
@ 2025-07-09 10:49 ` Tvrtko Ursulin
2025-07-09 17:22 ` Matthew Brost
0 siblings, 1 reply; 19+ messages in thread
From: Tvrtko Ursulin @ 2025-07-09 10:49 UTC (permalink / raw)
To: Matthew Brost
Cc: dri-devel, intel-xe, amd-gfx, kernel-dev, Christian König,
Danilo Krummrich, Philipp Stanner
On 09/07/2025 05:45, Matthew Brost wrote:
> On Tue, Jul 08, 2025 at 01:20:32PM +0100, Tvrtko Ursulin wrote:
>> Currently the job free work item will lock sched->job_list_lock first time
>> to see if there are any jobs, free a single job, and then lock again to
>> decide whether to re-queue itself if there are more finished jobs.
>>
>> Since drm_sched_get_finished_job() already looks at the second job in the
>> queue we can simply add the signaled check and have it return the presence
>> of more jobs to free to the caller. That way the work item does not have
>> to lock the list again and repeat the signaled check.
>>
>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
>> Cc: Christian König <christian.koenig@amd.com>
>> Cc: Danilo Krummrich <dakr@kernel.org>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>
> The patch looks correct, we do have CI failure in a section which
> stresses scheduling though. Probably noise though. Do you have Intel
> hardware? I suggest running xe_exec_threads in a loop on either LNL or
> BMG and see if the CI failure pops. If you don't have hardware, let me
> know and I can do this.
>
> With a bit of investigation in the CI failure:
> Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Thanks! I don't have the HW but I was able to press re-test in the CI so
let's see. Although at the moment I can't imagine a failure mode like
that being caused by this patch.
Regards,
Tvrtko
>
>> Cc: Philipp Stanner <phasta@kernel.org>
>> ---
>> drivers/gpu/drm/scheduler/sched_main.c | 37 ++++++++++----------------
>> 1 file changed, 14 insertions(+), 23 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
>> index 1f077782ec12..1bce0b66f89c 100644
>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>> @@ -366,22 +366,6 @@ static void __drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
>> queue_work(sched->submit_wq, &sched->work_free_job);
>> }
>>
>> -/**
>> - * drm_sched_run_free_queue - enqueue free-job work if ready
>> - * @sched: scheduler instance
>> - */
>> -static void drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
>> -{
>> - struct drm_sched_job *job;
>> -
>> - spin_lock(&sched->job_list_lock);
>> - job = list_first_entry_or_null(&sched->pending_list,
>> - struct drm_sched_job, list);
>> - if (job && dma_fence_is_signaled(&job->s_fence->finished))
>> - __drm_sched_run_free_queue(sched);
>> - spin_unlock(&sched->job_list_lock);
>> -}
>> -
>> /**
>> * drm_sched_job_done - complete a job
>> * @s_job: pointer to the job which is done
>> @@ -1102,12 +1086,13 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
>> * drm_sched_get_finished_job - fetch the next finished job to be destroyed
>> *
>> * @sched: scheduler instance
>> + * @have_more: are there more finished jobs on the list
>> *
>> * Returns the next finished job from the pending list (if there is one)
>> * ready for it to be destroyed.
>> */
>> static struct drm_sched_job *
>> -drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
>> +drm_sched_get_finished_job(struct drm_gpu_scheduler *sched, bool *have_more)
>> {
>> struct drm_sched_job *job, *next;
>>
>> @@ -1115,22 +1100,25 @@ drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
>>
>> job = list_first_entry_or_null(&sched->pending_list,
>> struct drm_sched_job, list);
>> -
>> if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
>> /* remove job from pending_list */
>> list_del_init(&job->list);
>>
>> /* cancel this job's TO timer */
>> cancel_delayed_work(&sched->work_tdr);
>> - /* make the scheduled timestamp more accurate */
>> +
>> + *have_more = false;
>> next = list_first_entry_or_null(&sched->pending_list,
>> typeof(*next), list);
>> -
>> if (next) {
>> + /* make the scheduled timestamp more accurate */
>> if (test_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT,
>> &next->s_fence->scheduled.flags))
>> next->s_fence->scheduled.timestamp =
>> dma_fence_timestamp(&job->s_fence->finished);
>> +
>> + *have_more = dma_fence_is_signaled(&next->s_fence->finished);
>> +
>> /* start TO timer for next job */
>> drm_sched_start_timeout(sched);
>> }
>> @@ -1189,12 +1177,15 @@ static void drm_sched_free_job_work(struct work_struct *w)
>> struct drm_gpu_scheduler *sched =
>> container_of(w, struct drm_gpu_scheduler, work_free_job);
>> struct drm_sched_job *job;
>> + bool have_more;
>>
>> - job = drm_sched_get_finished_job(sched);
>> - if (job)
>> + job = drm_sched_get_finished_job(sched, &have_more);
>> + if (job) {
>> sched->ops->free_job(job);
>> + if (have_more)
>> + __drm_sched_run_free_queue(sched);
>> + }
>>
>> - drm_sched_run_free_queue(sched);
>> drm_sched_run_job_queue(sched);
>> }
>>
>> --
>> 2.48.0
>>
* Re: [PATCH] drm/sched: Avoid double re-lock on the job free path
2025-07-09 10:49 ` Tvrtko Ursulin
@ 2025-07-09 17:22 ` Matthew Brost
2025-07-11 12:39 ` Tvrtko Ursulin
0 siblings, 1 reply; 19+ messages in thread
From: Matthew Brost @ 2025-07-09 17:22 UTC (permalink / raw)
To: Tvrtko Ursulin
Cc: dri-devel, intel-xe, amd-gfx, kernel-dev, Christian König,
Danilo Krummrich, Philipp Stanner
On Wed, Jul 09, 2025 at 11:49:44AM +0100, Tvrtko Ursulin wrote:
>
> On 09/07/2025 05:45, Matthew Brost wrote:
> > On Tue, Jul 08, 2025 at 01:20:32PM +0100, Tvrtko Ursulin wrote:
> > > Currently the job free work item will lock sched->job_list_lock first time
> > > to see if there are any jobs, free a single job, and then lock again to
> > > decide whether to re-queue itself if there are more finished jobs.
> > >
> > > Since drm_sched_get_finished_job() already looks at the second job in the
> > > queue we can simply add the signaled check and have it return the presence
> > > of more jobs to free to the caller. That way the work item does not have
> > > to lock the list again and repeat the signaled check.
> > >
> > > Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
> > > Cc: Christian König <christian.koenig@amd.com>
> > > Cc: Danilo Krummrich <dakr@kernel.org>
> > > Cc: Matthew Brost <matthew.brost@intel.com>
> >
> > The patch looks correct, we do have CI failure in a section which
> > stresses scheduling though. Probably noise though. Do you have Intel
> > hardware? I suggest running xe_exec_threads in a loop on either LNL or
> > BMG and see if the CI failure pops. If you don't have hardware, let me
> > know and I can do this.
> >
> > With a bit of investigation in the CI failure:
> > Reviewed-by: Matthew Brost <matthew.brost@intel.com>
>
> Thanks! I don't have the HW but I was able to press re-test in the CI so
> lets see. Although at the moment I can't imagine a failure mode like that
> could be caused by this patch.
>
I ran xe_exec_threads in a loop 20x on BMG without a failure, so the CI
issue seems unrelated.
Matt
> Regards,
>
> Tvrtko
>
> >
> > > Cc: Philipp Stanner <phasta@kernel.org>
> > > ---
> > > drivers/gpu/drm/scheduler/sched_main.c | 37 ++++++++++----------------
> > > 1 file changed, 14 insertions(+), 23 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> > > index 1f077782ec12..1bce0b66f89c 100644
> > > --- a/drivers/gpu/drm/scheduler/sched_main.c
> > > +++ b/drivers/gpu/drm/scheduler/sched_main.c
> > > @@ -366,22 +366,6 @@ static void __drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
> > > queue_work(sched->submit_wq, &sched->work_free_job);
> > > }
> > > -/**
> > > - * drm_sched_run_free_queue - enqueue free-job work if ready
> > > - * @sched: scheduler instance
> > > - */
> > > -static void drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
> > > -{
> > > - struct drm_sched_job *job;
> > > -
> > > - spin_lock(&sched->job_list_lock);
> > > - job = list_first_entry_or_null(&sched->pending_list,
> > > - struct drm_sched_job, list);
> > > - if (job && dma_fence_is_signaled(&job->s_fence->finished))
> > > - __drm_sched_run_free_queue(sched);
> > > - spin_unlock(&sched->job_list_lock);
> > > -}
> > > -
> > > /**
> > > * drm_sched_job_done - complete a job
> > > * @s_job: pointer to the job which is done
> > > @@ -1102,12 +1086,13 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
> > > * drm_sched_get_finished_job - fetch the next finished job to be destroyed
> > > *
> > > * @sched: scheduler instance
> > > + * @have_more: are there more finished jobs on the list
> > > *
> > > * Returns the next finished job from the pending list (if there is one)
> > > * ready for it to be destroyed.
> > > */
> > > static struct drm_sched_job *
> > > -drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
> > > +drm_sched_get_finished_job(struct drm_gpu_scheduler *sched, bool *have_more)
> > > {
> > > struct drm_sched_job *job, *next;
> > > @@ -1115,22 +1100,25 @@ drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
> > > job = list_first_entry_or_null(&sched->pending_list,
> > > struct drm_sched_job, list);
> > > -
> > > if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
> > > /* remove job from pending_list */
> > > list_del_init(&job->list);
> > > /* cancel this job's TO timer */
> > > cancel_delayed_work(&sched->work_tdr);
> > > - /* make the scheduled timestamp more accurate */
> > > +
> > > + *have_more = false;
> > > next = list_first_entry_or_null(&sched->pending_list,
> > > typeof(*next), list);
> > > -
> > > if (next) {
> > > + /* make the scheduled timestamp more accurate */
> > > if (test_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT,
> > > &next->s_fence->scheduled.flags))
> > > next->s_fence->scheduled.timestamp =
> > > dma_fence_timestamp(&job->s_fence->finished);
> > > +
> > > + *have_more = dma_fence_is_signaled(&next->s_fence->finished);
> > > +
> > > /* start TO timer for next job */
> > > drm_sched_start_timeout(sched);
> > > }
> > > @@ -1189,12 +1177,15 @@ static void drm_sched_free_job_work(struct work_struct *w)
> > > struct drm_gpu_scheduler *sched =
> > > container_of(w, struct drm_gpu_scheduler, work_free_job);
> > > struct drm_sched_job *job;
> > > + bool have_more;
> > > - job = drm_sched_get_finished_job(sched);
> > > - if (job)
> > > + job = drm_sched_get_finished_job(sched, &have_more);
> > > + if (job) {
> > > sched->ops->free_job(job);
> > > + if (have_more)
> > > + __drm_sched_run_free_queue(sched);
> > > + }
> > > - drm_sched_run_free_queue(sched);
> > > drm_sched_run_job_queue(sched);
> > > }
> > > --
> > > 2.48.0
> > >
>
* Re: [PATCH] drm/sched: Avoid double re-lock on the job free path
2025-07-09 17:22 ` Matthew Brost
@ 2025-07-11 12:39 ` Tvrtko Ursulin
0 siblings, 0 replies; 19+ messages in thread
From: Tvrtko Ursulin @ 2025-07-11 12:39 UTC (permalink / raw)
To: Matthew Brost
Cc: dri-devel, intel-xe, amd-gfx, kernel-dev, Christian König,
Danilo Krummrich, Philipp Stanner
On 09/07/2025 18:22, Matthew Brost wrote:
> On Wed, Jul 09, 2025 at 11:49:44AM +0100, Tvrtko Ursulin wrote:
>>
>> On 09/07/2025 05:45, Matthew Brost wrote:
>>> On Tue, Jul 08, 2025 at 01:20:32PM +0100, Tvrtko Ursulin wrote:
>>>> Currently the job free work item will lock sched->job_list_lock first time
>>>> to see if there are any jobs, free a single job, and then lock again to
>>>> decide whether to re-queue itself if there are more finished jobs.
>>>>
>>>> Since drm_sched_get_finished_job() already looks at the second job in the
>>>> queue we can simply add the signaled check and have it return the presence
>>>> of more jobs to free to the caller. That way the work item does not have
>>>> to lock the list again and repeat the signaled check.
>>>>
>>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
>>>> Cc: Christian König <christian.koenig@amd.com>
>>>> Cc: Danilo Krummrich <dakr@kernel.org>
>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>
>>> The patch looks correct, we do have CI failure in a section which
>>> stresses scheduling though. Probably noise though. Do you have Intel
>>> hardware? I suggest running xe_exec_threads in a loop on either LNL or
>>> BMG and see if the CI failure pops. If you don't have hardware, let me
>>> know and I can do this.
>>>
>>> With a bit of investigation in the CI failure:
>>> Reviewed-by: Matthew Brost <matthew.brost@intel.com>
>>
>> Thanks! I don't have the HW but I was able to press re-test in the CI so
>> lets see. Although at the moment I can't imagine a failure mode like that
>> could be caused by this patch.
>>
>
> I ran xe_exec_threads in a loop 20x on BMG without a failure so CI issue
> seem unrelated.
Thanks!
Philipp - okay to merge this one?
Regards,
Tvrtko
>
> Matt
>
>> Regards,
>>
>> Tvrtko
>>
>>>
>>>> Cc: Philipp Stanner <phasta@kernel.org>
>>>> ---
>>>> drivers/gpu/drm/scheduler/sched_main.c | 37 ++++++++++----------------
>>>> 1 file changed, 14 insertions(+), 23 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
>>>> index 1f077782ec12..1bce0b66f89c 100644
>>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>>> @@ -366,22 +366,6 @@ static void __drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
>>>> queue_work(sched->submit_wq, &sched->work_free_job);
>>>> }
>>>> -/**
>>>> - * drm_sched_run_free_queue - enqueue free-job work if ready
>>>> - * @sched: scheduler instance
>>>> - */
>>>> -static void drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
>>>> -{
>>>> - struct drm_sched_job *job;
>>>> -
>>>> - spin_lock(&sched->job_list_lock);
>>>> - job = list_first_entry_or_null(&sched->pending_list,
>>>> - struct drm_sched_job, list);
>>>> - if (job && dma_fence_is_signaled(&job->s_fence->finished))
>>>> - __drm_sched_run_free_queue(sched);
>>>> - spin_unlock(&sched->job_list_lock);
>>>> -}
>>>> -
>>>> /**
>>>> * drm_sched_job_done - complete a job
>>>> * @s_job: pointer to the job which is done
>>>> @@ -1102,12 +1086,13 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
>>>> * drm_sched_get_finished_job - fetch the next finished job to be destroyed
>>>> *
>>>> * @sched: scheduler instance
>>>> + * @have_more: are there more finished jobs on the list
>>>> *
>>>> * Returns the next finished job from the pending list (if there is one)
>>>> * ready for it to be destroyed.
>>>> */
>>>> static struct drm_sched_job *
>>>> -drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
>>>> +drm_sched_get_finished_job(struct drm_gpu_scheduler *sched, bool *have_more)
>>>> {
>>>> struct drm_sched_job *job, *next;
>>>> @@ -1115,22 +1100,25 @@ drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
>>>> job = list_first_entry_or_null(&sched->pending_list,
>>>> struct drm_sched_job, list);
>>>> -
>>>> if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
>>>> /* remove job from pending_list */
>>>> list_del_init(&job->list);
>>>> /* cancel this job's TO timer */
>>>> cancel_delayed_work(&sched->work_tdr);
>>>> - /* make the scheduled timestamp more accurate */
>>>> +
>>>> + *have_more = false;
>>>> next = list_first_entry_or_null(&sched->pending_list,
>>>> typeof(*next), list);
>>>> -
>>>> if (next) {
>>>> + /* make the scheduled timestamp more accurate */
>>>> if (test_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT,
>>>> &next->s_fence->scheduled.flags))
>>>> next->s_fence->scheduled.timestamp =
>>>> dma_fence_timestamp(&job->s_fence->finished);
>>>> +
>>>> + *have_more = dma_fence_is_signaled(&next->s_fence->finished);
>>>> +
>>>> /* start TO timer for next job */
>>>> drm_sched_start_timeout(sched);
>>>> }
>>>> @@ -1189,12 +1177,15 @@ static void drm_sched_free_job_work(struct work_struct *w)
>>>> struct drm_gpu_scheduler *sched =
>>>> container_of(w, struct drm_gpu_scheduler, work_free_job);
>>>> struct drm_sched_job *job;
>>>> + bool have_more;
>>>> - job = drm_sched_get_finished_job(sched);
>>>> - if (job)
>>>> + job = drm_sched_get_finished_job(sched, &have_more);
>>>> + if (job) {
>>>> sched->ops->free_job(job);
>>>> + if (have_more)
>>>> + __drm_sched_run_free_queue(sched);
>>>> + }
>>>> - drm_sched_run_free_queue(sched);
>>>> drm_sched_run_job_queue(sched);
>>>> }
>>>> --
>>>> 2.48.0
>>>>
>>
* Re: [PATCH] drm/sched: Avoid double re-lock on the job free path
2025-07-08 12:20 [PATCH] drm/sched: Avoid double re-lock on the job free path Tvrtko Ursulin
2025-07-09 4:45 ` Matthew Brost
@ 2025-07-11 13:04 ` Philipp Stanner
2025-07-11 15:11 ` Tvrtko Ursulin
1 sibling, 1 reply; 19+ messages in thread
From: Philipp Stanner @ 2025-07-11 13:04 UTC (permalink / raw)
To: Tvrtko Ursulin, dri-devel
Cc: intel-xe, amd-gfx, kernel-dev, Christian König,
Danilo Krummrich, Matthew Brost, Philipp Stanner
Late to the party; I had overlooked that the discussion with Matt was
resolved. Some comments below.
On Tue, 2025-07-08 at 13:20 +0100, Tvrtko Ursulin wrote:
> Currently the job free work item will lock sched->job_list_lock first time
> to see if there are any jobs, free a single job, and then lock again to
> decide whether to re-queue itself if there are more finished jobs.
>
> Since drm_sched_get_finished_job() already looks at the second job in the
> queue we can simply add the signaled check and have it return the presence
> of more jobs to free to the caller. That way the work item does not
optional nit:
s/to free/to be freed
Reads a bit more cleanly.
> have
> to lock the list again and repeat the signaled check.
>
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Danilo Krummrich <dakr@kernel.org>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Philipp Stanner <phasta@kernel.org>
> ---
> drivers/gpu/drm/scheduler/sched_main.c | 37 ++++++++++----------------
> 1 file changed, 14 insertions(+), 23 deletions(-)
>
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index 1f077782ec12..1bce0b66f89c 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -366,22 +366,6 @@ static void __drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
> queue_work(sched->submit_wq, &sched->work_free_job);
> }
>
> -/**
> - * drm_sched_run_free_queue - enqueue free-job work if ready
> - * @sched: scheduler instance
> - */
> -static void drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
The function name is now free. See my comment at the bottom.
> -{
> - struct drm_sched_job *job;
> -
> - spin_lock(&sched->job_list_lock);
> - job = list_first_entry_or_null(&sched->pending_list,
> - struct drm_sched_job, list);
> - if (job && dma_fence_is_signaled(&job->s_fence->finished))
> - __drm_sched_run_free_queue(sched);
> - spin_unlock(&sched->job_list_lock);
> -}
> -
> /**
> * drm_sched_job_done - complete a job
> * @s_job: pointer to the job which is done
> @@ -1102,12 +1086,13 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
> * drm_sched_get_finished_job - fetch the next finished job to be destroyed
> *
> * @sched: scheduler instance
> + * @have_more: are there more finished jobs on the list
I'd like a very brief sentence below here like:
"Informs the caller through @have_more whether there are more finished
jobs besides the returned one."
Reason being that it's relatively rare in the kernel that status is not
transmitted through a return value, so we want that to be very obvious.
> *
> * Returns the next finished job from the pending list (if there is one)
> * ready for it to be destroyed.
> */
> static struct drm_sched_job *
> -drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
> +drm_sched_get_finished_job(struct drm_gpu_scheduler *sched, bool *have_more)
> {
> struct drm_sched_job *job, *next;
>
> @@ -1115,22 +1100,25 @@ drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
>
> job = list_first_entry_or_null(&sched->pending_list,
> struct drm_sched_job, list);
> -
> if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
> /* remove job from pending_list */
> list_del_init(&job->list);
>
> /* cancel this job's TO timer */
> cancel_delayed_work(&sched->work_tdr);
> - /* make the scheduled timestamp more accurate */
> +
> + *have_more = false;
Don't we want that bool initialized to false at the very beginning of
the function? That way it can never be forgotten if the code gets
reworked.
> next = list_first_entry_or_null(&sched->pending_list,
> typeof(*next), list);
> -
> if (next) {
> + /* make the scheduled timestamp more accurate */
> if (test_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT,
> &next->s_fence->scheduled.flags))
> next->s_fence->scheduled.timestamp =
> dma_fence_timestamp(&job->s_fence->finished);
> +
> + *have_more = dma_fence_is_signaled(&next->s_fence->finished);
> +
> /* start TO timer for next job */
> drm_sched_start_timeout(sched);
> }
> @@ -1189,12 +1177,15 @@ static void drm_sched_free_job_work(struct work_struct *w)
> struct drm_gpu_scheduler *sched =
> container_of(w, struct drm_gpu_scheduler, work_free_job);
> struct drm_sched_job *job;
> + bool have_more;
>
> - job = drm_sched_get_finished_job(sched);
> - if (job)
> + job = drm_sched_get_finished_job(sched, &have_more);
> + if (job) {
> sched->ops->free_job(job);
> + if (have_more)
> + __drm_sched_run_free_queue(sched);
Now that drm_sched_run_free_queue() is dead, it's an excellent
opportunity to give its name to __drm_sched_run_free_queue() \o/
Cleaner namespace, and reads better with the below
drm_sched_run_job_queue().
Besides, cool patch!
P.
> + }
>
> - drm_sched_run_free_queue(sched);
> drm_sched_run_job_queue(sched);
> }
>
* Re: [PATCH] drm/sched: Avoid double re-lock on the job free path
2025-07-11 13:04 ` Philipp Stanner
@ 2025-07-11 15:11 ` Tvrtko Ursulin
0 siblings, 0 replies; 19+ messages in thread
From: Tvrtko Ursulin @ 2025-07-11 15:11 UTC (permalink / raw)
To: phasta, dri-devel
Cc: intel-xe, amd-gfx, kernel-dev, Christian König,
Danilo Krummrich, Matthew Brost
On 11/07/2025 14:04, Philipp Stanner wrote:
> Late to the party; had overlooked that the discussion with Matt is
> resolved. Some comments below
>
> On Tue, 2025-07-08 at 13:20 +0100, Tvrtko Ursulin wrote:
>> Currently the job free work item will lock sched->job_list_lock first time
>> to see if there are any jobs, free a single job, and then lock again to
>> decide whether to re-queue itself if there are more finished jobs.
>>
>> Since drm_sched_get_finished_job() already looks at the second job in the
>> queue we can simply add the signaled check and have it return the presence
>> of more jobs to free to the caller. That way the work item does not
>
> optional nit:
> s/to free/to be freed
>
> Reads a bit more cleanly.
Done.
>
>> have
>> to lock the list again and repeat the signaled check.
>>
>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
>> Cc: Christian König <christian.koenig@amd.com>
>> Cc: Danilo Krummrich <dakr@kernel.org>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Philipp Stanner <phasta@kernel.org>
>> ---
>> drivers/gpu/drm/scheduler/sched_main.c | 37 ++++++++++----------------
>> 1 file changed, 14 insertions(+), 23 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
>> index 1f077782ec12..1bce0b66f89c 100644
>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>> @@ -366,22 +366,6 @@ static void __drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
>> queue_work(sched->submit_wq, &sched->work_free_job);
>> }
>>
>> -/**
>> - * drm_sched_run_free_queue - enqueue free-job work if ready
>> - * @sched: scheduler instance
>> - */
>> -static void drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
>
> The function name is now free. See my comment at the bottom.
>
>> -{
>> - struct drm_sched_job *job;
>> -
>> - spin_lock(&sched->job_list_lock);
>> - job = list_first_entry_or_null(&sched->pending_list,
>> - struct drm_sched_job, list);
>> - if (job && dma_fence_is_signaled(&job->s_fence->finished))
>> - __drm_sched_run_free_queue(sched);
>> - spin_unlock(&sched->job_list_lock);
>> -}
>> -
>> /**
>> * drm_sched_job_done - complete a job
>> * @s_job: pointer to the job which is done
>> @@ -1102,12 +1086,13 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
>> * drm_sched_get_finished_job - fetch the next finished job to be destroyed
>> *
>> * @sched: scheduler instance
>> + * @have_more: are there more finished jobs on the list
>
> I'd like a very brief sentence below here like:
>
> "Informs the caller through @have_more whether there are more finished
> jobs besides the returned one."
>
> Reason being that it's relatively rare in the kernel that status is not
> transmitted through a return value, so we want that to be very obvious.
Done.
>
>> *
>> * Returns the next finished job from the pending list (if there is one)
>> * ready for it to be destroyed.
>> */
>> static struct drm_sched_job *
>> -drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
>> +drm_sched_get_finished_job(struct drm_gpu_scheduler *sched, bool *have_more)
>> {
>> struct drm_sched_job *job, *next;
>>
>> @@ -1115,22 +1100,25 @@ drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
>>
>> job = list_first_entry_or_null(&sched->pending_list,
>> struct drm_sched_job, list);
>> -
>> if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
>> /* remove job from pending_list */
>> list_del_init(&job->list);
>>
>> /* cancel this job's TO timer */
>> cancel_delayed_work(&sched->work_tdr);
>> - /* make the scheduled timestamp more accurate */
>> +
>> + *have_more = false;
>
> Don't we want that bool initialized to false at the very beginning of
> the function? That way it can never be forgotten if the code gets
> reworked.
I opted to leave this as is, given that the kerneldoc is clear this is only
valid if a job was returned.
>
>> next = list_first_entry_or_null(&sched->pending_list,
>> typeof(*next), list);
>> -
>> if (next) {
>> + /* make the scheduled timestamp more accurate */
>> if (test_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT,
>> &next->s_fence->scheduled.flags))
>> next->s_fence->scheduled.timestamp =
>> dma_fence_timestamp(&job->s_fence->finished);
>> +
>> + *have_more = dma_fence_is_signaled(&next->s_fence->finished);
>> +
>> /* start TO timer for next job */
>> drm_sched_start_timeout(sched);
>> }
>> @@ -1189,12 +1177,15 @@ static void drm_sched_free_job_work(struct work_struct *w)
>> struct drm_gpu_scheduler *sched =
>> container_of(w, struct drm_gpu_scheduler, work_free_job);
>> struct drm_sched_job *job;
>> + bool have_more;
>>
>> - job = drm_sched_get_finished_job(sched);
>> - if (job)
>> + job = drm_sched_get_finished_job(sched, &have_more);
>> + if (job) {
>> sched->ops->free_job(job);
>> + if (have_more)
>> + __drm_sched_run_free_queue(sched);
>
> Now that drm_sched_run_free_queue() is dead, it's an excellent
> opportunity to give its name to __drm_sched_run_free_queue() \o/
>
> Cleaner namespace, and reads better with the below
> drm_sched_run_job_queue().
Well spotted - done.
> Besides, cool patch!
Thanks!
Regards,
Tvrtko
>> + }
>>
>> - drm_sched_run_free_queue(sched);
>> drm_sched_run_job_queue(sched);
>> }
>>
>
* [PATCH] drm/sched: Avoid double re-lock on the job free path
@ 2025-07-16 8:51 Tvrtko Ursulin
2025-07-16 13:31 ` Maíra Canal
0 siblings, 1 reply; 19+ messages in thread
From: Tvrtko Ursulin @ 2025-07-16 8:51 UTC (permalink / raw)
To: dri-devel
Cc: kernel-dev, intel-xe, amd-gfx, Tvrtko Ursulin,
Christian König, Danilo Krummrich, Maíra Canal,
Matthew Brost, Philipp Stanner
Currently the job free work item will lock sched->job_list_lock first time
to see if there are any jobs, free a single job, and then lock again to
decide whether to re-queue itself if there are more finished jobs.
Since drm_sched_get_finished_job() already looks at the second job in the
queue we can simply add the signaled check and have it return the presence
of more jobs to be freed to the caller. That way the work item does not
have to lock the list again and repeat the signaled check.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
Cc: Christian König <christian.koenig@amd.com>
Cc: Danilo Krummrich <dakr@kernel.org>
Cc: Maíra Canal <mcanal@igalia.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Philipp Stanner <phasta@kernel.org>
---
v2:
* Improve commit text and kerneldoc. (Philipp)
* Rename run free work helper. (Philipp)
v3:
* Rebase on top of Maira's changes.
---
drivers/gpu/drm/scheduler/sched_main.c | 53 ++++++++++----------------
1 file changed, 21 insertions(+), 32 deletions(-)
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index e2cda28a1af4..5a550fd76bf0 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -349,34 +349,13 @@ static void drm_sched_run_job_queue(struct drm_gpu_scheduler *sched)
}
/**
- * __drm_sched_run_free_queue - enqueue free-job work
- * @sched: scheduler instance
- */
-static void __drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
-{
- if (!READ_ONCE(sched->pause_submit))
- queue_work(sched->submit_wq, &sched->work_free_job);
-}
-
-/**
- * drm_sched_run_free_queue - enqueue free-job work if ready
+ * drm_sched_run_free_queue - enqueue free-job work
* @sched: scheduler instance
*/
static void drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
{
- struct drm_sched_job *job;
-
- job = list_first_entry_or_null(&sched->pending_list,
- struct drm_sched_job, list);
- if (job && dma_fence_is_signaled(&job->s_fence->finished))
- __drm_sched_run_free_queue(sched);
-}
-
-static void drm_sched_run_free_queue_unlocked(struct drm_gpu_scheduler *sched)
-{
- spin_lock(&sched->job_list_lock);
- drm_sched_run_free_queue(sched);
- spin_unlock(&sched->job_list_lock);
+ if (!READ_ONCE(sched->pause_submit))
+ queue_work(sched->submit_wq, &sched->work_free_job);
}
/**
@@ -398,7 +377,7 @@ static void drm_sched_job_done(struct drm_sched_job *s_job, int result)
dma_fence_get(&s_fence->finished);
drm_sched_fence_finished(s_fence, result);
dma_fence_put(&s_fence->finished);
- __drm_sched_run_free_queue(sched);
+ drm_sched_run_free_queue(sched);
}
/**
@@ -1134,12 +1113,16 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
* drm_sched_get_finished_job - fetch the next finished job to be destroyed
*
* @sched: scheduler instance
+ * @have_more: are there more finished jobs on the list
+ *
+ * Informs the caller through @have_more whether there are more finished jobs
+ * besides the returned one.
*
* Returns the next finished job from the pending list (if there is one)
* ready for it to be destroyed.
*/
static struct drm_sched_job *
-drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
+drm_sched_get_finished_job(struct drm_gpu_scheduler *sched, bool *have_more)
{
struct drm_sched_job *job, *next;
@@ -1147,22 +1130,25 @@ drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
job = list_first_entry_or_null(&sched->pending_list,
struct drm_sched_job, list);
-
if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
/* remove job from pending_list */
list_del_init(&job->list);
/* cancel this job's TO timer */
cancel_delayed_work(&sched->work_tdr);
- /* make the scheduled timestamp more accurate */
+
+ *have_more = false;
next = list_first_entry_or_null(&sched->pending_list,
typeof(*next), list);
-
if (next) {
+ /* make the scheduled timestamp more accurate */
if (test_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT,
&next->s_fence->scheduled.flags))
next->s_fence->scheduled.timestamp =
dma_fence_timestamp(&job->s_fence->finished);
+
+ *have_more = dma_fence_is_signaled(&next->s_fence->finished);
+
/* start TO timer for next job */
drm_sched_start_timeout(sched);
}
@@ -1221,12 +1207,15 @@ static void drm_sched_free_job_work(struct work_struct *w)
struct drm_gpu_scheduler *sched =
container_of(w, struct drm_gpu_scheduler, work_free_job);
struct drm_sched_job *job;
+ bool have_more;
- job = drm_sched_get_finished_job(sched);
- if (job)
+ job = drm_sched_get_finished_job(sched, &have_more);
+ if (job) {
sched->ops->free_job(job);
+ if (have_more)
+ drm_sched_run_free_queue(sched);
+ }
- drm_sched_run_free_queue_unlocked(sched);
drm_sched_run_job_queue(sched);
}
--
2.48.0
* Re: [PATCH] drm/sched: Avoid double re-lock on the job free path
2025-07-16 8:51 Tvrtko Ursulin
@ 2025-07-16 13:31 ` Maíra Canal
2025-07-16 13:49 ` Tvrtko Ursulin
0 siblings, 1 reply; 19+ messages in thread
From: Maíra Canal @ 2025-07-16 13:31 UTC (permalink / raw)
To: Tvrtko Ursulin, dri-devel
Cc: kernel-dev, intel-xe, amd-gfx, Christian König,
Danilo Krummrich, Matthew Brost, Philipp Stanner
Hi Tvrtko,
On 16/07/25 05:51, Tvrtko Ursulin wrote:
> Currently the job free work item will lock sched->job_list_lock first time
> to see if there are any jobs, free a single job, and then lock again to
> decide whether to re-queue itself if there are more finished jobs.
>
> Since drm_sched_get_finished_job() already looks at the second job in the
> queue we can simply add the signaled check and have it return the presence
> of more jobs to be freed to the caller. That way the work item does not
> have to lock the list again and repeat the signaled check.
>
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
> Cc: Christian König <christian.koenig@amd.com>
> Cc: Danilo Krummrich <dakr@kernel.org>
> Cc: Maíra Canal <mcanal@igalia.com>
> Cc: Matthew Brost <matthew.brost@intel.com>
> Cc: Philipp Stanner <phasta@kernel.org>
> ---
> v2:
> * Improve commit text and kerneldoc. (Philipp)
> * Rename run free work helper. (Philipp)
>
> v3:
> * Rebase on top of Maira's changes.
> ---
> drivers/gpu/drm/scheduler/sched_main.c | 53 ++++++++++----------------
> 1 file changed, 21 insertions(+), 32 deletions(-)
>
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index e2cda28a1af4..5a550fd76bf0 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -349,34 +349,13 @@ static void drm_sched_run_job_queue(struct drm_gpu_scheduler *sched)
> }
>
> /**
> - * __drm_sched_run_free_queue - enqueue free-job work
> - * @sched: scheduler instance
> - */
> -static void __drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
> -{
> - if (!READ_ONCE(sched->pause_submit))
> - queue_work(sched->submit_wq, &sched->work_free_job);
> -}
> -
> -/**
> - * drm_sched_run_free_queue - enqueue free-job work if ready
> + * drm_sched_run_free_queue - enqueue free-job work
> * @sched: scheduler instance
> */
> static void drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
> {
> - struct drm_sched_job *job;
> -
> - job = list_first_entry_or_null(&sched->pending_list,
> - struct drm_sched_job, list);
> - if (job && dma_fence_is_signaled(&job->s_fence->finished))
> - __drm_sched_run_free_queue(sched);
I believe we'd still need this chunk for DRM_GPU_SCHED_STAT_NO_HANG
(check the comment in drm_sched_job_reinsert_on_false_timeout()). How
about only deleting drm_sched_run_free_queue_unlocked() and keep using
__drm_sched_run_free_queue()?
Best Regards,
- Maíra
> -}
> -
> -static void drm_sched_run_free_queue_unlocked(struct drm_gpu_scheduler *sched)
> -{
> - spin_lock(&sched->job_list_lock);
> - drm_sched_run_free_queue(sched);
> - spin_unlock(&sched->job_list_lock);
> + if (!READ_ONCE(sched->pause_submit))
> + queue_work(sched->submit_wq, &sched->work_free_job);
> }
>
> /**
> @@ -398,7 +377,7 @@ static void drm_sched_job_done(struct drm_sched_job *s_job, int result)
> dma_fence_get(&s_fence->finished);
> drm_sched_fence_finished(s_fence, result);
> dma_fence_put(&s_fence->finished);
> - __drm_sched_run_free_queue(sched);
> + drm_sched_run_free_queue(sched);
> }
>
> /**
> @@ -1134,12 +1113,16 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
> * drm_sched_get_finished_job - fetch the next finished job to be destroyed
> *
> * @sched: scheduler instance
> + * @have_more: are there more finished jobs on the list
> + *
> + * Informs the caller through @have_more whether there are more finished jobs
> + * besides the returned one.
> *
> * Returns the next finished job from the pending list (if there is one)
> * ready for it to be destroyed.
> */
> static struct drm_sched_job *
> -drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
> +drm_sched_get_finished_job(struct drm_gpu_scheduler *sched, bool *have_more)
> {
> struct drm_sched_job *job, *next;
>
> @@ -1147,22 +1130,25 @@ drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
>
> job = list_first_entry_or_null(&sched->pending_list,
> struct drm_sched_job, list);
> -
> if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
> /* remove job from pending_list */
> list_del_init(&job->list);
>
> /* cancel this job's TO timer */
> cancel_delayed_work(&sched->work_tdr);
> - /* make the scheduled timestamp more accurate */
> +
> + *have_more = false;
> next = list_first_entry_or_null(&sched->pending_list,
> typeof(*next), list);
> -
> if (next) {
> + /* make the scheduled timestamp more accurate */
> if (test_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT,
> &next->s_fence->scheduled.flags))
> next->s_fence->scheduled.timestamp =
> dma_fence_timestamp(&job->s_fence->finished);
> +
> + *have_more = dma_fence_is_signaled(&next->s_fence->finished);
> +
> /* start TO timer for next job */
> drm_sched_start_timeout(sched);
> }
> @@ -1221,12 +1207,15 @@ static void drm_sched_free_job_work(struct work_struct *w)
> struct drm_gpu_scheduler *sched =
> container_of(w, struct drm_gpu_scheduler, work_free_job);
> struct drm_sched_job *job;
> + bool have_more;
>
> - job = drm_sched_get_finished_job(sched);
> - if (job)
> + job = drm_sched_get_finished_job(sched, &have_more);
> + if (job) {
> sched->ops->free_job(job);
> + if (have_more)
> + drm_sched_run_free_queue(sched);
> + }
>
> - drm_sched_run_free_queue_unlocked(sched);
> drm_sched_run_job_queue(sched);
> }
>
* Re: [PATCH] drm/sched: Avoid double re-lock on the job free path
2025-07-16 13:31 ` Maíra Canal
@ 2025-07-16 13:49 ` Tvrtko Ursulin
2025-07-16 14:30 ` Maíra Canal
0 siblings, 1 reply; 19+ messages in thread
From: Tvrtko Ursulin @ 2025-07-16 13:49 UTC (permalink / raw)
To: Maíra Canal, dri-devel
Cc: kernel-dev, intel-xe, amd-gfx, Christian König,
Danilo Krummrich, Matthew Brost, Philipp Stanner
On 16/07/2025 14:31, Maíra Canal wrote:
> Hi Tvrtko,
>
> On 16/07/25 05:51, Tvrtko Ursulin wrote:
>> Currently the job free work item will lock sched->job_list_lock first
>> time
>> to see if there are any jobs, free a single job, and then lock again to
>> decide whether to re-queue itself if there are more finished jobs.
>>
>> Since drm_sched_get_finished_job() already looks at the second job in the
>> queue we can simply add the signaled check and have it return the
>> presence
>> of more jobs to be freed to the caller. That way the work item does not
>> have to lock the list again and repeat the signaled check.
>>
>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
>> Cc: Christian König <christian.koenig@amd.com>
>> Cc: Danilo Krummrich <dakr@kernel.org>
>> Cc: Maíra Canal <mcanal@igalia.com>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Philipp Stanner <phasta@kernel.org>
>> ---
>> v2:
>> * Improve commit text and kerneldoc. (Philipp)
>> * Rename run free work helper. (Philipp)
>>
>> v3:
>> * Rebase on top of Maira's changes.
>> ---
>> drivers/gpu/drm/scheduler/sched_main.c | 53 ++++++++++----------------
>> 1 file changed, 21 insertions(+), 32 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/
>> scheduler/sched_main.c
>> index e2cda28a1af4..5a550fd76bf0 100644
>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>> @@ -349,34 +349,13 @@ static void drm_sched_run_job_queue(struct
>> drm_gpu_scheduler *sched)
>> }
>> /**
>> - * __drm_sched_run_free_queue - enqueue free-job work
>> - * @sched: scheduler instance
>> - */
>> -static void __drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
>> -{
>> - if (!READ_ONCE(sched->pause_submit))
>> - queue_work(sched->submit_wq, &sched->work_free_job);
>> -}
>> -
>> -/**
>> - * drm_sched_run_free_queue - enqueue free-job work if ready
>> + * drm_sched_run_free_queue - enqueue free-job work
>> * @sched: scheduler instance
>> */
>> static void drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
>> {
>> - struct drm_sched_job *job;
>> -
>> - job = list_first_entry_or_null(&sched->pending_list,
>> - struct drm_sched_job, list);
>> - if (job && dma_fence_is_signaled(&job->s_fence->finished))
>> - __drm_sched_run_free_queue(sched);
>
> I believe we'd still need this chunk for DRM_GPU_SCHED_STAT_NO_HANG
> (check the comment in drm_sched_job_reinsert_on_false_timeout()). How
You mean the "is there a signaled job in the list check" is needed for
drm_sched_job_reinsert_on_false_timeout()? Hmm why? Worst case is a
false positive wakeup on the free worker, no?
> about only deleting drm_sched_run_free_queue_unlocked() and keep using
> __drm_sched_run_free_queue()?
You mean use __drm_sched_run_free_queue() from
drm_sched_job_reinsert_on_false_timeout()? That is the same as
drm_sched_run_free_queue() with this patch.
Regards,
Tvrtko
>> -}
>> -
>> -static void drm_sched_run_free_queue_unlocked(struct
>> drm_gpu_scheduler *sched)
>> -{
>> - spin_lock(&sched->job_list_lock);
>> - drm_sched_run_free_queue(sched);
>> - spin_unlock(&sched->job_list_lock);
>> + if (!READ_ONCE(sched->pause_submit))
>> + queue_work(sched->submit_wq, &sched->work_free_job);
>> }
>> /**
>> @@ -398,7 +377,7 @@ static void drm_sched_job_done(struct
>> drm_sched_job *s_job, int result)
>> dma_fence_get(&s_fence->finished);
>> drm_sched_fence_finished(s_fence, result);
>> dma_fence_put(&s_fence->finished);
>> - __drm_sched_run_free_queue(sched);
>> + drm_sched_run_free_queue(sched);
>> }
>> /**
>> @@ -1134,12 +1113,16 @@ drm_sched_select_entity(struct
>> drm_gpu_scheduler *sched)
>> * drm_sched_get_finished_job - fetch the next finished job to be
>> destroyed
>> *
>> * @sched: scheduler instance
>> + * @have_more: are there more finished jobs on the list
>> + *
>> + * Informs the caller through @have_more whether there are more
>> finished jobs
>> + * besides the returned one.
>> *
>> * Returns the next finished job from the pending list (if there is
>> one)
>> * ready for it to be destroyed.
>> */
>> static struct drm_sched_job *
>> -drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
>> +drm_sched_get_finished_job(struct drm_gpu_scheduler *sched, bool
>> *have_more)
>> {
>> struct drm_sched_job *job, *next;
>> @@ -1147,22 +1130,25 @@ drm_sched_get_finished_job(struct
>> drm_gpu_scheduler *sched)
>> job = list_first_entry_or_null(&sched->pending_list,
>> struct drm_sched_job, list);
>> -
>> if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
>> /* remove job from pending_list */
>> list_del_init(&job->list);
>> /* cancel this job's TO timer */
>> cancel_delayed_work(&sched->work_tdr);
>> - /* make the scheduled timestamp more accurate */
>> +
>> + *have_more = false;
>> next = list_first_entry_or_null(&sched->pending_list,
>> typeof(*next), list);
>> -
>> if (next) {
>> + /* make the scheduled timestamp more accurate */
>> if (test_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT,
>> &next->s_fence->scheduled.flags))
>> next->s_fence->scheduled.timestamp =
>> dma_fence_timestamp(&job->s_fence->finished);
>> +
>> + *have_more = dma_fence_is_signaled(&next->s_fence->finished);
>> +
>> /* start TO timer for next job */
>> drm_sched_start_timeout(sched);
>> }
>> @@ -1221,12 +1207,15 @@ static void drm_sched_free_job_work(struct work_struct *w)
>> struct drm_gpu_scheduler *sched =
>> container_of(w, struct drm_gpu_scheduler, work_free_job);
>> struct drm_sched_job *job;
>> + bool have_more;
>> - job = drm_sched_get_finished_job(sched);
>> - if (job)
>> + job = drm_sched_get_finished_job(sched, &have_more);
>> + if (job) {
>> sched->ops->free_job(job);
>> + if (have_more)
>> + drm_sched_run_free_queue(sched);
>> + }
>> - drm_sched_run_free_queue_unlocked(sched);
>> drm_sched_run_job_queue(sched);
>> }
>
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH] drm/sched: Avoid double re-lock on the job free path
2025-07-16 13:49 ` Tvrtko Ursulin
@ 2025-07-16 14:30 ` Maíra Canal
2025-07-16 14:46 ` Tvrtko Ursulin
0 siblings, 1 reply; 19+ messages in thread
From: Maíra Canal @ 2025-07-16 14:30 UTC (permalink / raw)
To: Tvrtko Ursulin, dri-devel
Cc: kernel-dev, intel-xe, amd-gfx, Christian König,
Danilo Krummrich, Matthew Brost, Philipp Stanner
Hi Tvrtko,
On 16/07/25 10:49, Tvrtko Ursulin wrote:
>
> On 16/07/2025 14:31, Maíra Canal wrote:
>> Hi Tvrtko,
>>
>> On 16/07/25 05:51, Tvrtko Ursulin wrote:
>>> Currently the job free work item will lock sched->job_list_lock first
>>> time
>>> to see if there are any jobs, free a single job, and then lock again to
>>> decide whether to re-queue itself if there are more finished jobs.
>>>
>>> Since drm_sched_get_finished_job() already looks at the second job in
>>> the
>>> queue we can simply add the signaled check and have it return the
>>> presence
>>> of more jobs to be freed to the caller. That way the work item does not
>>> have to lock the list again and repeat the signaled check.
>>>
>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
>>> Cc: Christian König <christian.koenig@amd.com>
>>> Cc: Danilo Krummrich <dakr@kernel.org>
>>> Cc: Maíra Canal <mcanal@igalia.com>
>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>> Cc: Philipp Stanner <phasta@kernel.org>
>>> ---
>>> v2:
>>> * Improve commit text and kerneldoc. (Philipp)
>>> * Rename run free work helper. (Philipp)
>>>
>>> v3:
>>> * Rebase on top of Maira's changes.
>>> ---
>>> drivers/gpu/drm/scheduler/sched_main.c | 53 ++++++++++----------------
>>> 1 file changed, 21 insertions(+), 32 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/
>>> drm/ scheduler/sched_main.c
>>> index e2cda28a1af4..5a550fd76bf0 100644
>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>> @@ -349,34 +349,13 @@ static void drm_sched_run_job_queue(struct
>>> drm_gpu_scheduler *sched)
>>> }
>>> /**
>>> - * __drm_sched_run_free_queue - enqueue free-job work
>>> - * @sched: scheduler instance
>>> - */
>>> -static void __drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
>>> -{
>>> - if (!READ_ONCE(sched->pause_submit))
>>> - queue_work(sched->submit_wq, &sched->work_free_job);
>>> -}
>>> -
>>> -/**
>>> - * drm_sched_run_free_queue - enqueue free-job work if ready
>>> + * drm_sched_run_free_queue - enqueue free-job work
>>> * @sched: scheduler instance
>>> */
>>> static void drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
>>> {
>>> - struct drm_sched_job *job;
>>> -
>>> - job = list_first_entry_or_null(&sched->pending_list,
>>> - struct drm_sched_job, list);
>>> - if (job && dma_fence_is_signaled(&job->s_fence->finished))
>>> - __drm_sched_run_free_queue(sched);
>>
>> I believe we'd still need this chunk for DRM_GPU_SCHED_STAT_NO_HANG
>> (check the comment in drm_sched_job_reinsert_on_false_timeout()). How
>
> You mean the "is there a signaled job in the list check" is needed for
> drm_sched_job_reinsert_on_false_timeout()? Hmm why? Worst case is a
> false positive wakeup on the free worker, no?
Correct me if I'm mistaken, but we would also have a false positive
wake-up on the run_job worker, which I believe could be problematic in
the cases where we skipped the reset because the job is still running.
>
>> about only deleting drm_sched_run_free_queue_unlocked() and keep using
>> __drm_sched_run_free_queue()?
>
> You mean use __drm_sched_run_free_queue() from
> drm_sched_job_reinsert_on_false_timeout()? That is the same as
> drm_sched_run_free_queue() with this patch.
Sorry, I believe I didn't express myself clearly. I mean using
__drm_sched_run_free_queue() in drm_sched_free_job_work() and keeping
drm_sched_run_free_queue() in drm_sched_job_reinsert_on_false_timeout().
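Something like this, as a rough and untested sketch on top of your
patch:

static void drm_sched_free_job_work(struct work_struct *w)
{
        struct drm_gpu_scheduler *sched =
                container_of(w, struct drm_gpu_scheduler, work_free_job);
        struct drm_sched_job *job;
        bool have_more;

        job = drm_sched_get_finished_job(sched, &have_more);
        if (job) {
                sched->ops->free_job(job);
                if (have_more)
                        /* we already know another signaled job is queued */
                        __drm_sched_run_free_queue(sched);
        }

        drm_sched_run_job_queue(sched);
}

while drm_sched_job_reinsert_on_false_timeout() keeps calling
drm_sched_run_free_queue(), i.e. the variant that still checks
pending_list for a signaled job before queueing the work.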
Best Regards,
- Maíra
>
> Regards,
>
> Tvrtko
>
>>> -}
>>> -
>>> -static void drm_sched_run_free_queue_unlocked(struct
>>> drm_gpu_scheduler *sched)
>>> -{
>>> - spin_lock(&sched->job_list_lock);
>>> - drm_sched_run_free_queue(sched);
>>> - spin_unlock(&sched->job_list_lock);
>>> + if (!READ_ONCE(sched->pause_submit))
>>> + queue_work(sched->submit_wq, &sched->work_free_job);
>>> }
>>> /**
>>> @@ -398,7 +377,7 @@ static void drm_sched_job_done(struct
>>> drm_sched_job *s_job, int result)
>>> dma_fence_get(&s_fence->finished);
>>> drm_sched_fence_finished(s_fence, result);
>>> dma_fence_put(&s_fence->finished);
>>> - __drm_sched_run_free_queue(sched);
>>> + drm_sched_run_free_queue(sched);
>>> }
>>> /**
>>> @@ -1134,12 +1113,16 @@ drm_sched_select_entity(struct
>>> drm_gpu_scheduler *sched)
>>> * drm_sched_get_finished_job - fetch the next finished job to be
>>> destroyed
>>> *
>>> * @sched: scheduler instance
>>> + * @have_more: are there more finished jobs on the list
>>> + *
>>> + * Informs the caller through @have_more whether there are more
>>> finished jobs
>>> + * besides the returned one.
>>> *
>>> * Returns the next finished job from the pending list (if there is
>>> one)
>>> * ready for it to be destroyed.
>>> */
>>> static struct drm_sched_job *
>>> -drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
>>> +drm_sched_get_finished_job(struct drm_gpu_scheduler *sched, bool
>>> *have_more)
>>> {
>>> struct drm_sched_job *job, *next;
>>> @@ -1147,22 +1130,25 @@ drm_sched_get_finished_job(struct
>>> drm_gpu_scheduler *sched)
>>> job = list_first_entry_or_null(&sched->pending_list,
>>> struct drm_sched_job, list);
>>> -
>>> if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
>>> /* remove job from pending_list */
>>> list_del_init(&job->list);
>>> /* cancel this job's TO timer */
>>> cancel_delayed_work(&sched->work_tdr);
>>> - /* make the scheduled timestamp more accurate */
>>> +
>>> + *have_more = false;
>>> next = list_first_entry_or_null(&sched->pending_list,
>>> typeof(*next), list);
>>> -
>>> if (next) {
>>> + /* make the scheduled timestamp more accurate */
>>> if (test_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT,
>>> &next->s_fence->scheduled.flags))
>>> next->s_fence->scheduled.timestamp =
>>> dma_fence_timestamp(&job->s_fence->finished);
>>> +
>>> + *have_more = dma_fence_is_signaled(&next->s_fence->finished);
>>> +
>>> /* start TO timer for next job */
>>> drm_sched_start_timeout(sched);
>>> }
>>> @@ -1221,12 +1207,15 @@ static void drm_sched_free_job_work(struct
>>> work_struct *w)
>>> struct drm_gpu_scheduler *sched =
>>> container_of(w, struct drm_gpu_scheduler, work_free_job);
>>> struct drm_sched_job *job;
>>> + bool have_more;
>>> - job = drm_sched_get_finished_job(sched);
>>> - if (job)
>>> + job = drm_sched_get_finished_job(sched, &have_more);
>>> + if (job) {
>>> sched->ops->free_job(job);
>>> + if (have_more)
>>> + drm_sched_run_free_queue(sched);
>>> + }
>>> - drm_sched_run_free_queue_unlocked(sched);
>>> drm_sched_run_job_queue(sched);
>>> }
>>
>
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH] drm/sched: Avoid double re-lock on the job free path
2025-07-16 14:30 ` Maíra Canal
@ 2025-07-16 14:46 ` Tvrtko Ursulin
2025-07-16 20:44 ` Maíra Canal
0 siblings, 1 reply; 19+ messages in thread
From: Tvrtko Ursulin @ 2025-07-16 14:46 UTC (permalink / raw)
To: Maíra Canal, dri-devel
Cc: kernel-dev, intel-xe, amd-gfx, Christian König,
Danilo Krummrich, Matthew Brost, Philipp Stanner
On 16/07/2025 15:30, Maíra Canal wrote:
> Hi Tvrtko,
>
> On 16/07/25 10:49, Tvrtko Ursulin wrote:
>>
>> On 16/07/2025 14:31, Maíra Canal wrote:
>>> Hi Tvrtko,
>>>
>>> On 16/07/25 05:51, Tvrtko Ursulin wrote:
>>>> Currently the job free work item will lock sched->job_list_lock
>>>> first time
>>>> to see if there are any jobs, free a single job, and then lock again to
>>>> decide whether to re-queue itself if there are more finished jobs.
>>>>
>>>> Since drm_sched_get_finished_job() already looks at the second job
>>>> in the
>>>> queue we can simply add the signaled check and have it return the
>>>> presence
>>>> of more jobs to be freed to the caller. That way the work item does not
>>>> have to lock the list again and repeat the signaled check.
>>>>
>>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
>>>> Cc: Christian König <christian.koenig@amd.com>
>>>> Cc: Danilo Krummrich <dakr@kernel.org>
>>>> Cc: Maíra Canal <mcanal@igalia.com>
>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>> Cc: Philipp Stanner <phasta@kernel.org>
>>>> ---
>>>> v2:
>>>> * Improve commit text and kerneldoc. (Philipp)
>>>> * Rename run free work helper. (Philipp)
>>>>
>>>> v3:
>>>> * Rebase on top of Maira's changes.
>>>> ---
>>>> drivers/gpu/drm/scheduler/sched_main.c | 53 +++++++++
>>>> +----------------
>>>> 1 file changed, 21 insertions(+), 32 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/
>>>> drm/ scheduler/sched_main.c
>>>> index e2cda28a1af4..5a550fd76bf0 100644
>>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>>> @@ -349,34 +349,13 @@ static void drm_sched_run_job_queue(struct
>>>> drm_gpu_scheduler *sched)
>>>> }
>>>> /**
>>>> - * __drm_sched_run_free_queue - enqueue free-job work
>>>> - * @sched: scheduler instance
>>>> - */
>>>> -static void __drm_sched_run_free_queue(struct drm_gpu_scheduler
>>>> *sched)
>>>> -{
>>>> - if (!READ_ONCE(sched->pause_submit))
>>>> - queue_work(sched->submit_wq, &sched->work_free_job);
>>>> -}
>>>> -
>>>> -/**
>>>> - * drm_sched_run_free_queue - enqueue free-job work if ready
>>>> + * drm_sched_run_free_queue - enqueue free-job work
>>>> * @sched: scheduler instance
>>>> */
>>>> static void drm_sched_run_free_queue(struct drm_gpu_scheduler *sched)
>>>> {
>>>> - struct drm_sched_job *job;
>>>> -
>>>> - job = list_first_entry_or_null(&sched->pending_list,
>>>> - struct drm_sched_job, list);
>>>> - if (job && dma_fence_is_signaled(&job->s_fence->finished))
>>>> - __drm_sched_run_free_queue(sched);
>>>
>>> I believe we'd still need this chunk for DRM_GPU_SCHED_STAT_NO_HANG
>>> (check the comment in drm_sched_job_reinsert_on_false_timeout()). How
>>
>> You mean the "is there a signaled job in the list check" is needed for
>> drm_sched_job_reinsert_on_false_timeout()? Hmm why? Worst case is a
>> false positive wakeup on the free worker, no?
>
> Correct me if I'm mistaken, we would also have a false positive wake-up
> on the run_job worker, which I believe it could be problematic in the
> cases that we skipped the reset because the job is still running.
The run-job worker exits when it sees no free credits, so I don't think
there is a problem. What am I missing?
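Roughly, the gate I am thinking of is this (a simplified sketch from
memory, not the exact drm_sched_can_queue() body; credit_count,
credit_limit and job->credits are the credit accounting fields, if I
remember the names right):

static bool can_queue_sketch(struct drm_gpu_scheduler *sched,
                             struct drm_sched_job *head)
{
        /* Only submit if the head job's credits fit under the limit. */
        return atomic_read(&sched->credit_count) + head->credits <=
               sched->credit_limit;
}

With that gate a spurious run-job wakeup either finds no runnable entity
or no free credits and the worker returns without touching the hardware.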
Regards,
Tvrtko
>>> about only deleting drm_sched_run_free_queue_unlocked() and keep using
>>> __drm_sched_run_free_queue()?
>>
>> You mean use __drm_sched_run_free_queue() from
>> drm_sched_job_reinsert_on_false_timeout()? That is the same as
>> drm_sched_run_free_queue() with this patch.
>
> Sorry, I believe I didn't express myself clearly. I mean, using
> __drm_sched_run_free_queue() in drm_sched_free_job_work() and keep using
> drm_sched_run_free_queue() in drm_sched_job_reinsert_on_false_timeout().
>
> Best Regards,
> - Maíra
>
>>
>> Regards,
>>
>> Tvrtko
>>
>>>> -}
>>>> -
>>>> -static void drm_sched_run_free_queue_unlocked(struct
>>>> drm_gpu_scheduler *sched)
>>>> -{
>>>> - spin_lock(&sched->job_list_lock);
>>>> - drm_sched_run_free_queue(sched);
>>>> - spin_unlock(&sched->job_list_lock);
>>>> + if (!READ_ONCE(sched->pause_submit))
>>>> + queue_work(sched->submit_wq, &sched->work_free_job);
>>>> }
>>>> /**
>>>> @@ -398,7 +377,7 @@ static void drm_sched_job_done(struct
>>>> drm_sched_job *s_job, int result)
>>>> dma_fence_get(&s_fence->finished);
>>>> drm_sched_fence_finished(s_fence, result);
>>>> dma_fence_put(&s_fence->finished);
>>>> - __drm_sched_run_free_queue(sched);
>>>> + drm_sched_run_free_queue(sched);
>>>> }
>>>> /**
>>>> @@ -1134,12 +1113,16 @@ drm_sched_select_entity(struct
>>>> drm_gpu_scheduler *sched)
>>>> * drm_sched_get_finished_job - fetch the next finished job to be
>>>> destroyed
>>>> *
>>>> * @sched: scheduler instance
>>>> + * @have_more: are there more finished jobs on the list
>>>> + *
>>>> + * Informs the caller through @have_more whether there are more
>>>> finished jobs
>>>> + * besides the returned one.
>>>> *
>>>> * Returns the next finished job from the pending list (if there
>>>> is one)
>>>> * ready for it to be destroyed.
>>>> */
>>>> static struct drm_sched_job *
>>>> -drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
>>>> +drm_sched_get_finished_job(struct drm_gpu_scheduler *sched, bool
>>>> *have_more)
>>>> {
>>>> struct drm_sched_job *job, *next;
>>>> @@ -1147,22 +1130,25 @@ drm_sched_get_finished_job(struct
>>>> drm_gpu_scheduler *sched)
>>>> job = list_first_entry_or_null(&sched->pending_list,
>>>> struct drm_sched_job, list);
>>>> -
>>>> if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
>>>> /* remove job from pending_list */
>>>> list_del_init(&job->list);
>>>> /* cancel this job's TO timer */
>>>> cancel_delayed_work(&sched->work_tdr);
>>>> - /* make the scheduled timestamp more accurate */
>>>> +
>>>> + *have_more = false;
>>>> next = list_first_entry_or_null(&sched->pending_list,
>>>> typeof(*next), list);
>>>> -
>>>> if (next) {
>>>> + /* make the scheduled timestamp more accurate */
>>>> if (test_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT,
>>>> &next->s_fence->scheduled.flags))
>>>> next->s_fence->scheduled.timestamp =
>>>> dma_fence_timestamp(&job->s_fence->finished);
>>>> +
>>>> + *have_more = dma_fence_is_signaled(&next->s_fence->finished);
>>>> +
>>>> /* start TO timer for next job */
>>>> drm_sched_start_timeout(sched);
>>>> }
>>>> @@ -1221,12 +1207,15 @@ static void drm_sched_free_job_work(struct
>>>> work_struct *w)
>>>> struct drm_gpu_scheduler *sched =
>>>> container_of(w, struct drm_gpu_scheduler, work_free_job);
>>>> struct drm_sched_job *job;
>>>> + bool have_more;
>>>> - job = drm_sched_get_finished_job(sched);
>>>> - if (job)
>>>> + job = drm_sched_get_finished_job(sched, &have_more);
>>>> + if (job) {
>>>> sched->ops->free_job(job);
>>>> + if (have_more)
>>>> + drm_sched_run_free_queue(sched);
>>>> + }
>>>> - drm_sched_run_free_queue_unlocked(sched);
>>>> drm_sched_run_job_queue(sched);
>>>> }
>>>
>>
>
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH] drm/sched: Avoid double re-lock on the job free path
2025-07-16 14:46 ` Tvrtko Ursulin
@ 2025-07-16 20:44 ` Maíra Canal
2025-07-18 7:13 ` Tvrtko Ursulin
0 siblings, 1 reply; 19+ messages in thread
From: Maíra Canal @ 2025-07-16 20:44 UTC (permalink / raw)
To: Tvrtko Ursulin, dri-devel
Cc: kernel-dev, intel-xe, amd-gfx, Christian König,
Danilo Krummrich, Matthew Brost, Philipp Stanner
Hi Tvrtko,
On 16/07/25 11:46, Tvrtko Ursulin wrote:
>
> On 16/07/2025 15:30, Maíra Canal wrote:
>> Hi Tvrtko,
>>
>> On 16/07/25 10:49, Tvrtko Ursulin wrote:
>>>
>>> On 16/07/2025 14:31, Maíra Canal wrote:
>>>> Hi Tvrtko,
>>>>
>>>> On 16/07/25 05:51, Tvrtko Ursulin wrote:
>>>>> Currently the job free work item will lock sched->job_list_lock
>>>>> first time
>>>>> to see if there are any jobs, free a single job, and then lock
>>>>> again to
>>>>> decide whether to re-queue itself if there are more finished jobs.
>>>>>
>>>>> Since drm_sched_get_finished_job() already looks at the second job
>>>>> in the
>>>>> queue we can simply add the signaled check and have it return the
>>>>> presence
>>>>> of more jobs to be freed to the caller. That way the work item does
>>>>> not
>>>>> have to lock the list again and repeat the signaled check.
>>>>>
>>>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
>>>>> Cc: Christian König <christian.koenig@amd.com>
>>>>> Cc: Danilo Krummrich <dakr@kernel.org>
>>>>> Cc: Maíra Canal <mcanal@igalia.com>
>>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>>> Cc: Philipp Stanner <phasta@kernel.org>
>>>>> ---
>>>>> v2:
>>>>> * Improve commit text and kerneldoc. (Philipp)
>>>>> * Rename run free work helper. (Philipp)
>>>>>
>>>>> v3:
>>>>> * Rebase on top of Maira's changes.
>>>>> ---
>>>>> drivers/gpu/drm/scheduler/sched_main.c | 53 +++++++++
>>>>> +----------------
>>>>> 1 file changed, 21 insertions(+), 32 deletions(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/
>>>>> drm/ scheduler/sched_main.c
>>>>> index e2cda28a1af4..5a550fd76bf0 100644
>>>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>>>> @@ -349,34 +349,13 @@ static void drm_sched_run_job_queue(struct
>>>>> drm_gpu_scheduler *sched)
>>>>> }
>>>>> /**
>>>>> - * __drm_sched_run_free_queue - enqueue free-job work
>>>>> - * @sched: scheduler instance
>>>>> - */
>>>>> -static void __drm_sched_run_free_queue(struct drm_gpu_scheduler
>>>>> *sched)
>>>>> -{
>>>>> - if (!READ_ONCE(sched->pause_submit))
>>>>> - queue_work(sched->submit_wq, &sched->work_free_job);
>>>>> -}
>>>>> -
>>>>> -/**
>>>>> - * drm_sched_run_free_queue - enqueue free-job work if ready
>>>>> + * drm_sched_run_free_queue - enqueue free-job work
>>>>> * @sched: scheduler instance
>>>>> */
>>>>> static void drm_sched_run_free_queue(struct drm_gpu_scheduler
>>>>> *sched)
>>>>> {
>>>>> - struct drm_sched_job *job;
>>>>> -
>>>>> - job = list_first_entry_or_null(&sched->pending_list,
>>>>> - struct drm_sched_job, list);
>>>>> - if (job && dma_fence_is_signaled(&job->s_fence->finished))
>>>>> - __drm_sched_run_free_queue(sched);
>>>>
>>>> I believe we'd still need this chunk for DRM_GPU_SCHED_STAT_NO_HANG
>>>> (check the comment in drm_sched_job_reinsert_on_false_timeout()). How
>>>
>>> You mean the "is there a signaled job in the list check" is needed
>>> for drm_sched_job_reinsert_on_false_timeout()? Hmm why? Worst case is
>>> a false positive wakeup on the free worker, no?
>>
>> Correct me if I'm mistaken, we would also have a false positive wake-up
>> on the run_job worker, which I believe it could be problematic in the
>> cases that we skipped the reset because the job is still running.
>
> Run job worker exits when it sees no free credits so I don't think there
> is a problem. What am I missing?
>
I was the one missing the code in `drm_sched_can_queue()`. Sorry for the
misleading comments. This is:
Reviewed-by: Maíra Canal <mcanal@igalia.com>
Best Regards,
- Maíra
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH] drm/sched: Avoid double re-lock on the job free path
2025-07-16 20:44 ` Maíra Canal
@ 2025-07-18 7:13 ` Tvrtko Ursulin
2025-07-18 9:31 ` Philipp Stanner
0 siblings, 1 reply; 19+ messages in thread
From: Tvrtko Ursulin @ 2025-07-18 7:13 UTC (permalink / raw)
To: dri-devel, Philipp Stanner
Cc: kernel-dev, intel-xe, amd-gfx, Christian König,
Danilo Krummrich, Matthew Brost, Maíra Canal
On 16/07/2025 21:44, Maíra Canal wrote:
> Hi Tvrtko,
>
> On 16/07/25 11:46, Tvrtko Ursulin wrote:
>>
>> On 16/07/2025 15:30, Maíra Canal wrote:
>>> Hi Tvrtko,
>>>
>>> On 16/07/25 10:49, Tvrtko Ursulin wrote:
>>>>
>>>> On 16/07/2025 14:31, Maíra Canal wrote:
>>>>> Hi Tvrtko,
>>>>>
>>>>> On 16/07/25 05:51, Tvrtko Ursulin wrote:
>>>>>> Currently the job free work item will lock sched->job_list_lock
>>>>>> first time
>>>>>> to see if there are any jobs, free a single job, and then lock
>>>>>> again to
>>>>>> decide whether to re-queue itself if there are more finished jobs.
>>>>>>
>>>>>> Since drm_sched_get_finished_job() already looks at the second job
>>>>>> in the
>>>>>> queue we can simply add the signaled check and have it return the
>>>>>> presence
>>>>>> of more jobs to be freed to the caller. That way the work item
>>>>>> does not
>>>>>> have to lock the list again and repeat the signaled check.
>>>>>>
>>>>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
>>>>>> Cc: Christian König <christian.koenig@amd.com>
>>>>>> Cc: Danilo Krummrich <dakr@kernel.org>
>>>>>> Cc: Maíra Canal <mcanal@igalia.com>
>>>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>>>> Cc: Philipp Stanner <phasta@kernel.org>
>>>>>> ---
>>>>>> v2:
>>>>>> * Improve commit text and kerneldoc. (Philipp)
>>>>>> * Rename run free work helper. (Philipp)
>>>>>>
>>>>>> v3:
>>>>>> * Rebase on top of Maira's changes.
>>>>>> ---
>>>>>> drivers/gpu/drm/scheduler/sched_main.c | 53 +++++++++
>>>>>> +----------------
>>>>>> 1 file changed, 21 insertions(+), 32 deletions(-)
>>>>>>
>>>>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/
>>>>>> drm/ scheduler/sched_main.c
>>>>>> index e2cda28a1af4..5a550fd76bf0 100644
>>>>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>>>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>>>>> @@ -349,34 +349,13 @@ static void drm_sched_run_job_queue(struct
>>>>>> drm_gpu_scheduler *sched)
>>>>>> }
>>>>>> /**
>>>>>> - * __drm_sched_run_free_queue - enqueue free-job work
>>>>>> - * @sched: scheduler instance
>>>>>> - */
>>>>>> -static void __drm_sched_run_free_queue(struct drm_gpu_scheduler
>>>>>> *sched)
>>>>>> -{
>>>>>> - if (!READ_ONCE(sched->pause_submit))
>>>>>> - queue_work(sched->submit_wq, &sched->work_free_job);
>>>>>> -}
>>>>>> -
>>>>>> -/**
>>>>>> - * drm_sched_run_free_queue - enqueue free-job work if ready
>>>>>> + * drm_sched_run_free_queue - enqueue free-job work
>>>>>> * @sched: scheduler instance
>>>>>> */
>>>>>> static void drm_sched_run_free_queue(struct drm_gpu_scheduler
>>>>>> *sched)
>>>>>> {
>>>>>> - struct drm_sched_job *job;
>>>>>> -
>>>>>> - job = list_first_entry_or_null(&sched->pending_list,
>>>>>> - struct drm_sched_job, list);
>>>>>> - if (job && dma_fence_is_signaled(&job->s_fence->finished))
>>>>>> - __drm_sched_run_free_queue(sched);
>>>>>
>>>>> I believe we'd still need this chunk for DRM_GPU_SCHED_STAT_NO_HANG
>>>>> (check the comment in drm_sched_job_reinsert_on_false_timeout()). How
>>>>
>>>> You mean the "is there a signaled job in the list check" is needed
>>>> for drm_sched_job_reinsert_on_false_timeout()? Hmm why? Worst case
>>>> is a false positive wakeup on the free worker, no?
>>>
>>> Correct me if I'm mistaken, we would also have a false positive wake-up
>>> on the run_job worker, which I believe it could be problematic in the
>>> cases that we skipped the reset because the job is still running.
>>
>> Run job worker exits when it sees no free credits so I don't think
>> there is a problem. What am I missing?
>>
>
> I was the one missing the code in `drm_sched_can_queue()`. Sorry for the
> misleading comments. This is:
>
> Reviewed-by: Maíra Canal <mcanal@igalia.com>
No worries, and thanks!
Philipp - are you okay with this version? V2 was done to address your
feedback so that should be good now.
Regards,
Tvrtko
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH] drm/sched: Avoid double re-lock on the job free path
2025-07-18 7:13 ` Tvrtko Ursulin
@ 2025-07-18 9:31 ` Philipp Stanner
2025-07-18 9:35 ` Tvrtko Ursulin
0 siblings, 1 reply; 19+ messages in thread
From: Philipp Stanner @ 2025-07-18 9:31 UTC (permalink / raw)
To: Tvrtko Ursulin, dri-devel, Philipp Stanner
Cc: kernel-dev, intel-xe, amd-gfx, Christian König,
Danilo Krummrich, Matthew Brost, Maíra Canal
On Fri, 2025-07-18 at 08:13 +0100, Tvrtko Ursulin wrote:
>
> On 16/07/2025 21:44, Maíra Canal wrote:
> > Hi Tvrtko,
> >
> > On 16/07/25 11:46, Tvrtko Ursulin wrote:
> > >
> > > On 16/07/2025 15:30, Maíra Canal wrote:
> > > > Hi Tvrtko,
> > > >
> > > > On 16/07/25 10:49, Tvrtko Ursulin wrote:
> > > > >
> > > > > On 16/07/2025 14:31, Maíra Canal wrote:
> > > > > > Hi Tvrtko,
> > > > > >
> > > > > > On 16/07/25 05:51, Tvrtko Ursulin wrote:
> > > > > > > Currently the job free work item will lock sched->job_list_lock
> > > > > > > first time
> > > > > > > to see if there are any jobs, free a single job, and then lock
> > > > > > > again to
> > > > > > > decide whether to re-queue itself if there are more finished jobs.
> > > > > > >
> > > > > > > Since drm_sched_get_finished_job() already looks at the second job
> > > > > > > in the
> > > > > > > queue we can simply add the signaled check and have it return the
> > > > > > > presence
> > > > > > > of more jobs to be freed to the caller. That way the work item
> > > > > > > does not
> > > > > > > have to lock the list again and repeat the signaled check.
> > > > > > >
> > > > > > > Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
> > > > > > > Cc: Christian König <christian.koenig@amd.com>
> > > > > > > Cc: Danilo Krummrich <dakr@kernel.org>
> > > > > > > Cc: Maíra Canal <mcanal@igalia.com>
> > > > > > > Cc: Matthew Brost <matthew.brost@intel.com>
> > > > > > > Cc: Philipp Stanner <phasta@kernel.org>
> > > > > > > ---
> > > > > > > v2:
> > > > > > > * Improve commit text and kerneldoc. (Philipp)
> > > > > > > * Rename run free work helper. (Philipp)
> > > > > > >
> > > > > > > v3:
> > > > > > > * Rebase on top of Maira's changes.
> > > > > > > ---
> > > > > > > drivers/gpu/drm/scheduler/sched_main.c | 53 +++++++++
> > > > > > > +----------------
> > > > > > > 1 file changed, 21 insertions(+), 32 deletions(-)
> > > > > > >
> > > > > > > diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/
> > > > > > > drm/ scheduler/sched_main.c
> > > > > > > index e2cda28a1af4..5a550fd76bf0 100644
> > > > > > > --- a/drivers/gpu/drm/scheduler/sched_main.c
> > > > > > > +++ b/drivers/gpu/drm/scheduler/sched_main.c
> > > > > > > @@ -349,34 +349,13 @@ static void drm_sched_run_job_queue(struct
> > > > > > > drm_gpu_scheduler *sched)
> > > > > > > }
> > > > > > > /**
> > > > > > > - * __drm_sched_run_free_queue - enqueue free-job work
> > > > > > > - * @sched: scheduler instance
> > > > > > > - */
> > > > > > > -static void __drm_sched_run_free_queue(struct drm_gpu_scheduler
> > > > > > > *sched)
> > > > > > > -{
> > > > > > > - if (!READ_ONCE(sched->pause_submit))
> > > > > > > - queue_work(sched->submit_wq, &sched->work_free_job);
> > > > > > > -}
> > > > > > > -
> > > > > > > -/**
> > > > > > > - * drm_sched_run_free_queue - enqueue free-job work if ready
> > > > > > > + * drm_sched_run_free_queue - enqueue free-job work
> > > > > > > * @sched: scheduler instance
> > > > > > > */
> > > > > > > static void drm_sched_run_free_queue(struct drm_gpu_scheduler
> > > > > > > *sched)
> > > > > > > {
> > > > > > > - struct drm_sched_job *job;
> > > > > > > -
> > > > > > > - job = list_first_entry_or_null(&sched->pending_list,
> > > > > > > - struct drm_sched_job, list);
> > > > > > > - if (job && dma_fence_is_signaled(&job->s_fence->finished))
> > > > > > > - __drm_sched_run_free_queue(sched);
> > > > > >
> > > > > > I believe we'd still need this chunk for DRM_GPU_SCHED_STAT_NO_HANG
> > > > > > (check the comment in drm_sched_job_reinsert_on_false_timeout()). How
> > > > >
> > > > > You mean the "is there a signaled job in the list check" is needed
> > > > > for drm_sched_job_reinsert_on_false_timeout()? Hmm why? Worst case
> > > > > is a false positive wakeup on the free worker, no?
> > > >
> > > > Correct me if I'm mistaken, we would also have a false positive wake-up
> > > > on the run_job worker, which I believe it could be problematic in the
> > > > cases that we skipped the reset because the job is still running.
> > >
> > > Run job worker exits when it sees no free credits so I don't think
> > > there is a problem. What am I missing?
> > >
> >
> > I was the one missing the code in `drm_sched_can_queue()`. Sorry for the
> > misleading comments. This is:
> >
> > Reviewed-by: Maíra Canal <mcanal@igalia.com>
>
> No worries, and thanks!
>
> Philipp - are you okay with this version? V2 was done to address your
> feedback so that should be good now.
I was just giving it another spin when you wrote. (A [PATCH v3] would've
been neat for identification, though; I almost pulled the wrong patch
from the archive *wink*)
LGTM, improves things, can be merged.
However, we had to merge Lin Cao's bug fix [1] recently. That one is
now in drm-misc-fixes, and your patch should go to drm-misc-next. This
would cause a conflict once the two branches meet.
So I suggest we hold this non-urgent patch back until drm-misc-fixes /
Linus's -rc gets merged into drm-misc-next, and then apply it. That
should be next week or the week after, AFAIK.
Unless somebody has a better idea, of course?
Remind me in case I forget.
P.
[1] https://gitlab.freedesktop.org/drm/misc/kernel/-/commit/15f77764e90a713ee3916ca424757688e4f565b9
>
> Regards,
>
> Tvrtko
>
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH] drm/sched: Avoid double re-lock on the job free path
2025-07-18 9:31 ` Philipp Stanner
@ 2025-07-18 9:35 ` Tvrtko Ursulin
2025-07-18 9:41 ` Philipp Stanner
0 siblings, 1 reply; 19+ messages in thread
From: Tvrtko Ursulin @ 2025-07-18 9:35 UTC (permalink / raw)
To: phasta, dri-devel
Cc: kernel-dev, intel-xe, amd-gfx, Christian König,
Danilo Krummrich, Matthew Brost, Maíra Canal
On 18/07/2025 10:31, Philipp Stanner wrote:
> On Fri, 2025-07-18 at 08:13 +0100, Tvrtko Ursulin wrote:
>>
>> On 16/07/2025 21:44, Maíra Canal wrote:
>>> Hi Tvrtko,
>>>
>>> On 16/07/25 11:46, Tvrtko Ursulin wrote:
>>>>
>>>> On 16/07/2025 15:30, Maíra Canal wrote:
>>>>> Hi Tvrtko,
>>>>>
>>>>> On 16/07/25 10:49, Tvrtko Ursulin wrote:
>>>>>>
>>>>>> On 16/07/2025 14:31, Maíra Canal wrote:
>>>>>>> Hi Tvrtko,
>>>>>>>
>>>>>>> On 16/07/25 05:51, Tvrtko Ursulin wrote:
>>>>>>>> Currently the job free work item will lock sched->job_list_lock
>>>>>>>> first time
>>>>>>>> to see if there are any jobs, free a single job, and then lock
>>>>>>>> again to
>>>>>>>> decide whether to re-queue itself if there are more finished jobs.
>>>>>>>>
>>>>>>>> Since drm_sched_get_finished_job() already looks at the second job
>>>>>>>> in the
>>>>>>>> queue we can simply add the signaled check and have it return the
>>>>>>>> presence
>>>>>>>> of more jobs to be freed to the caller. That way the work item
>>>>>>>> does not
>>>>>>>> have to lock the list again and repeat the signaled check.
>>>>>>>>
>>>>>>>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
>>>>>>>> Cc: Christian König <christian.koenig@amd.com>
>>>>>>>> Cc: Danilo Krummrich <dakr@kernel.org>
>>>>>>>> Cc: Maíra Canal <mcanal@igalia.com>
>>>>>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>>>>>> Cc: Philipp Stanner <phasta@kernel.org>
>>>>>>>> ---
>>>>>>>> v2:
>>>>>>>> * Improve commit text and kerneldoc. (Philipp)
>>>>>>>> * Rename run free work helper. (Philipp)
>>>>>>>>
>>>>>>>> v3:
>>>>>>>> * Rebase on top of Maira's changes.
>>>>>>>> ---
>>>>>>>> drivers/gpu/drm/scheduler/sched_main.c | 53 +++++++++
>>>>>>>> +----------------
>>>>>>>> 1 file changed, 21 insertions(+), 32 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/
>>>>>>>> drm/ scheduler/sched_main.c
>>>>>>>> index e2cda28a1af4..5a550fd76bf0 100644
>>>>>>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>>>>>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>>>>>>> @@ -349,34 +349,13 @@ static void drm_sched_run_job_queue(struct
>>>>>>>> drm_gpu_scheduler *sched)
>>>>>>>> }
>>>>>>>> /**
>>>>>>>> - * __drm_sched_run_free_queue - enqueue free-job work
>>>>>>>> - * @sched: scheduler instance
>>>>>>>> - */
>>>>>>>> -static void __drm_sched_run_free_queue(struct drm_gpu_scheduler
>>>>>>>> *sched)
>>>>>>>> -{
>>>>>>>> - if (!READ_ONCE(sched->pause_submit))
>>>>>>>> - queue_work(sched->submit_wq, &sched->work_free_job);
>>>>>>>> -}
>>>>>>>> -
>>>>>>>> -/**
>>>>>>>> - * drm_sched_run_free_queue - enqueue free-job work if ready
>>>>>>>> + * drm_sched_run_free_queue - enqueue free-job work
>>>>>>>> * @sched: scheduler instance
>>>>>>>> */
>>>>>>>> static void drm_sched_run_free_queue(struct drm_gpu_scheduler
>>>>>>>> *sched)
>>>>>>>> {
>>>>>>>> - struct drm_sched_job *job;
>>>>>>>> -
>>>>>>>> - job = list_first_entry_or_null(&sched->pending_list,
>>>>>>>> - struct drm_sched_job, list);
>>>>>>>> - if (job && dma_fence_is_signaled(&job->s_fence->finished))
>>>>>>>> - __drm_sched_run_free_queue(sched);
>>>>>>>
>>>>>>> I believe we'd still need this chunk for DRM_GPU_SCHED_STAT_NO_HANG
>>>>>>> (check the comment in drm_sched_job_reinsert_on_false_timeout()). How
>>>>>>
>>>>>> You mean the "is there a signaled job in the list check" is needed
>>>>>> for drm_sched_job_reinsert_on_false_timeout()? Hmm why? Worst case
>>>>>> is a false positive wakeup on the free worker, no?
>>>>>
>>>>> Correct me if I'm mistaken, we would also have a false positive wake-up
>>>>> on the run_job worker, which I believe it could be problematic in the
>>>>> cases that we skipped the reset because the job is still running.
>>>>
>>>> Run job worker exits when it sees no free credits so I don't think
>>>> there is a problem. What am I missing?
>>>>
>>>
>>> I was the one missing the code in `drm_sched_can_queue()`. Sorry for the
>>> misleading comments. This is:
>>>
>>> Reviewed-by: Maíra Canal <mcanal@igalia.com>
>>
>> No worries, and thanks!
>>
>> Philipp - are you okay with this version? V2 was done to address your
>> feedback so that should be good now.
>
> Was just giving it another spin when you wrote. (a [PATCH v3] would've
> been neat for identification, though – I almost pulled the wrong patch
> from the archive *wink*)
Oops, my bad.
> LGTM, improves things, can be merged.
>
> However, we had to merge Lin Cao's bug fix [1] recently. That one is
> now in drm-misc-fixes, and your patch should go to drm-misc-next. This
> would cause a conflict once the two branches meet.
>
> So I suggest that we wait with this non-urgent patch until drm-misc-
> fixes / Linus's -rc gets merged into drm-misc-next, and then we apply
> it. Should be next week or the week after AFAIK.
>
> Unless somebody has a better idea, of course?
Lin's patch touches only sched_entity.c and mine only sched_main.c,
i.e. no conflict AFAICT?
Regards,
Tvrtko
>
> Remind me in case I forget.
>
>
> P.
>
> [1] https://gitlab.freedesktop.org/drm/misc/kernel/-/commit/15f77764e90a713ee3916ca424757688e4f565b9
>
>
>>
>> Regards,
>>
>> Tvrtko
>>
>
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH] drm/sched: Avoid double re-lock on the job free path
2025-07-18 9:35 ` Tvrtko Ursulin
@ 2025-07-18 9:41 ` Philipp Stanner
2025-07-18 10:18 ` Tvrtko Ursulin
0 siblings, 1 reply; 19+ messages in thread
From: Philipp Stanner @ 2025-07-18 9:41 UTC (permalink / raw)
To: Tvrtko Ursulin, phasta, dri-devel
Cc: kernel-dev, intel-xe, amd-gfx, Christian König,
Danilo Krummrich, Matthew Brost, Maíra Canal
On Fri, 2025-07-18 at 10:35 +0100, Tvrtko Ursulin wrote:
>
> On 18/07/2025 10:31, Philipp Stanner wrote:
> > On Fri, 2025-07-18 at 08:13 +0100, Tvrtko Ursulin wrote:
> > >
> > > On 16/07/2025 21:44, Maíra Canal wrote:
> > > > Hi Tvrtko,
> > > >
> > > > On 16/07/25 11:46, Tvrtko Ursulin wrote:
> > > > >
> > > > > On 16/07/2025 15:30, Maíra Canal wrote:
> > > > > > Hi Tvrtko,
> > > > > >
> > > > > > On 16/07/25 10:49, Tvrtko Ursulin wrote:
> > > > > > >
> > > > > > > On 16/07/2025 14:31, Maíra Canal wrote:
> > > > > > > > Hi Tvrtko,
> > > > > > > >
> > > > > > > > On 16/07/25 05:51, Tvrtko Ursulin wrote:
> > > > > > > > > Currently the job free work item will lock sched-
> > > > > > > > > >job_list_lock
> > > > > > > > > first time
> > > > > > > > > to see if there are any jobs, free a single job, and
> > > > > > > > > then lock
> > > > > > > > > again to
> > > > > > > > > decide whether to re-queue itself if there are more
> > > > > > > > > finished jobs.
> > > > > > > > >
> > > > > > > > > Since drm_sched_get_finished_job() already looks at
> > > > > > > > > the second job
> > > > > > > > > in the
> > > > > > > > > queue we can simply add the signaled check and have
> > > > > > > > > it return the
> > > > > > > > > presence
> > > > > > > > > of more jobs to be freed to the caller. That way the
> > > > > > > > > work item
> > > > > > > > > does not
> > > > > > > > > have to lock the list again and repeat the signaled
> > > > > > > > > check.
> > > > > > > > >
> > > > > > > > > Signed-off-by: Tvrtko Ursulin
> > > > > > > > > <tvrtko.ursulin@igalia.com>
> > > > > > > > > Cc: Christian König <christian.koenig@amd.com>
> > > > > > > > > Cc: Danilo Krummrich <dakr@kernel.org>
> > > > > > > > > Cc: Maíra Canal <mcanal@igalia.com>
> > > > > > > > > Cc: Matthew Brost <matthew.brost@intel.com>
> > > > > > > > > Cc: Philipp Stanner <phasta@kernel.org>
> > > > > > > > > ---
> > > > > > > > > v2:
> > > > > > > > > * Improve commit text and kerneldoc. (Philipp)
> > > > > > > > > * Rename run free work helper. (Philipp)
> > > > > > > > >
> > > > > > > > > v3:
> > > > > > > > > * Rebase on top of Maira's changes.
> > > > > > > > > ---
> > > > > > > > > drivers/gpu/drm/scheduler/sched_main.c | 53
> > > > > > > > > +++++++++
> > > > > > > > > +----------------
> > > > > > > > > 1 file changed, 21 insertions(+), 32 deletions(-)
> > > > > > > > >
> > > > > > > > > diff --git a/drivers/gpu/drm/scheduler/sched_main.c
> > > > > > > > > b/drivers/gpu/
> > > > > > > > > drm/ scheduler/sched_main.c
> > > > > > > > > index e2cda28a1af4..5a550fd76bf0 100644
> > > > > > > > > --- a/drivers/gpu/drm/scheduler/sched_main.c
> > > > > > > > > +++ b/drivers/gpu/drm/scheduler/sched_main.c
> > > > > > > > > @@ -349,34 +349,13 @@ static void
> > > > > > > > > drm_sched_run_job_queue(struct
> > > > > > > > > drm_gpu_scheduler *sched)
> > > > > > > > > }
> > > > > > > > > /**
> > > > > > > > > - * __drm_sched_run_free_queue - enqueue free-job
> > > > > > > > > work
> > > > > > > > > - * @sched: scheduler instance
> > > > > > > > > - */
> > > > > > > > > -static void __drm_sched_run_free_queue(struct
> > > > > > > > > drm_gpu_scheduler
> > > > > > > > > *sched)
> > > > > > > > > -{
> > > > > > > > > - if (!READ_ONCE(sched->pause_submit))
> > > > > > > > > - queue_work(sched->submit_wq, &sched-
> > > > > > > > > >work_free_job);
> > > > > > > > > -}
> > > > > > > > > -
> > > > > > > > > -/**
> > > > > > > > > - * drm_sched_run_free_queue - enqueue free-job work
> > > > > > > > > if ready
> > > > > > > > > + * drm_sched_run_free_queue - enqueue free-job work
> > > > > > > > > * @sched: scheduler instance
> > > > > > > > > */
> > > > > > > > > static void drm_sched_run_free_queue(struct
> > > > > > > > > drm_gpu_scheduler
> > > > > > > > > *sched)
> > > > > > > > > {
> > > > > > > > > - struct drm_sched_job *job;
> > > > > > > > > -
> > > > > > > > > - job = list_first_entry_or_null(&sched-
> > > > > > > > > >pending_list,
> > > > > > > > > - struct drm_sched_job, list);
> > > > > > > > > - if (job && dma_fence_is_signaled(&job->s_fence-
> > > > > > > > > >finished))
> > > > > > > > > - __drm_sched_run_free_queue(sched);
> > > > > > > >
> > > > > > > > I believe we'd still need this chunk for
> > > > > > > > DRM_GPU_SCHED_STAT_NO_HANG
> > > > > > > > (check the comment in
> > > > > > > > drm_sched_job_reinsert_on_false_timeout()). How
> > > > > > >
> > > > > > > You mean the "is there a signaled job in the list check"
> > > > > > > is needed
> > > > > > > for drm_sched_job_reinsert_on_false_timeout()? Hmm why?
> > > > > > > Worst case
> > > > > > > is a false positive wakeup on the free worker, no?
> > > > > >
> > > > > > Correct me if I'm mistaken, we would also have a false
> > > > > > positive wake-up
> > > > > > on the run_job worker, which I believe it could be
> > > > > > problematic in the
> > > > > > cases that we skipped the reset because the job is still
> > > > > > running.
> > > > >
> > > > > Run job worker exits when it sees no free credits so I don't
> > > > > think
> > > > > there is a problem. What am I missing?
> > > > >
> > > >
> > > > I was the one missing the code in `drm_sched_can_queue()`.
> > > > Sorry for the
> > > > misleading comments. This is:
> > > >
> > > > Reviewed-by: Maíra Canal <mcanal@igalia.com>
> > >
> > > No worries, and thanks!
> > >
> > > Philipp - are you okay with this version? V2 was done to address
> > > your
> > > feedback so that should be good now.
> >
> > Was just giving it another spin when you wrote. (a [PATCH v3]
> > would've
> > been neat for identification, though – I almost pulled the wrong
> > patch
> > from the archive *wink*)
>
> Oops, my bad.
>
> > LGTM, improves things, can be merged.
> >
> > However, we had to merge Lin Cao's bug fix [1] recently. That one
> > is
> > now in drm-misc-fixes, and your patch should go to drm-misc-next.
> > This
> > would cause a conflict once the two branches meet.
> >
> > So I suggest that we wait with this non-urgent patch until drm-
> > misc-
> > fixes / Linus's -rc gets merged into drm-misc-next, and then we
> > apply
> > it. Should be next week or the week after AFAIK.
> >
> > Unless somebody has a better idea, of course?
>
> Lin's patch touches sched_entity.c only and mine only sched_main.c -
> ie.
> no conflict AFAICT?
Aaahhh, I had a hallucination ^^'
It doesn't apply to drm-misc-fixes, but that is because that branch is
missing changes that yours is based on. Because Lin's patch was the last
thing I touched on that branch, I seem to have jumped to that conclusion.
Should be fine, then. My bad.
Will apply.
P.
>
> Regards,
>
> Tvrtko
>
> >
> > Remind me in case I forget.
> >
> >
> > P.
> >
> > [1]
> > https://gitlab.freedesktop.org/drm/misc/kernel/-/commit/15f77764e90a713ee3916ca424757688e4f565b9
> >
> >
> > >
> > > Regards,
> > >
> > > Tvrtko
> > >
> >
>
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH] drm/sched: Avoid double re-lock on the job free path
2025-07-18 9:41 ` Philipp Stanner
@ 2025-07-18 10:18 ` Tvrtko Ursulin
0 siblings, 0 replies; 19+ messages in thread
From: Tvrtko Ursulin @ 2025-07-18 10:18 UTC (permalink / raw)
To: phasta, dri-devel
Cc: kernel-dev, intel-xe, amd-gfx, Christian König,
Danilo Krummrich, Matthew Brost, Maíra Canal
On 18/07/2025 10:41, Philipp Stanner wrote:
> On Fri, 2025-07-18 at 10:35 +0100, Tvrtko Ursulin wrote:
>>
>> On 18/07/2025 10:31, Philipp Stanner wrote:
>>> On Fri, 2025-07-18 at 08:13 +0100, Tvrtko Ursulin wrote:
>>>>
>>>> On 16/07/2025 21:44, Maíra Canal wrote:
>>>>> Hi Tvrtko,
>>>>>
>>>>> On 16/07/25 11:46, Tvrtko Ursulin wrote:
>>>>>>
>>>>>> On 16/07/2025 15:30, Maíra Canal wrote:
>>>>>>> Hi Tvrtko,
>>>>>>>
>>>>>>> On 16/07/25 10:49, Tvrtko Ursulin wrote:
>>>>>>>>
>>>>>>>> On 16/07/2025 14:31, Maíra Canal wrote:
>>>>>>>>> Hi Tvrtko,
>>>>>>>>>
>>>>>>>>> On 16/07/25 05:51, Tvrtko Ursulin wrote:
>>>>>>>>>> Currently the job free work item will lock sched-
>>>>>>>>>>> job_list_lock
>>>>>>>>>> first time
>>>>>>>>>> to see if there are any jobs, free a single job, and
>>>>>>>>>> then lock
>>>>>>>>>> again to
>>>>>>>>>> decide whether to re-queue itself if there are more
>>>>>>>>>> finished jobs.
>>>>>>>>>>
>>>>>>>>>> Since drm_sched_get_finished_job() already looks at
>>>>>>>>>> the second job
>>>>>>>>>> in the
>>>>>>>>>> queue we can simply add the signaled check and have
>>>>>>>>>> it return the
>>>>>>>>>> presence
>>>>>>>>>> of more jobs to be freed to the caller. That way the
>>>>>>>>>> work item
>>>>>>>>>> does not
>>>>>>>>>> have to lock the list again and repeat the signaled
>>>>>>>>>> check.
>>>>>>>>>>
>>>>>>>>>> Signed-off-by: Tvrtko Ursulin
>>>>>>>>>> <tvrtko.ursulin@igalia.com>
>>>>>>>>>> Cc: Christian König <christian.koenig@amd.com>
>>>>>>>>>> Cc: Danilo Krummrich <dakr@kernel.org>
>>>>>>>>>> Cc: Maíra Canal <mcanal@igalia.com>
>>>>>>>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>>>>>>>> Cc: Philipp Stanner <phasta@kernel.org>
>>>>>>>>>> ---
>>>>>>>>>> v2:
>>>>>>>>>> * Improve commit text and kerneldoc. (Philipp)
>>>>>>>>>> * Rename run free work helper. (Philipp)
>>>>>>>>>>
>>>>>>>>>> v3:
>>>>>>>>>> * Rebase on top of Maira's changes.
>>>>>>>>>> ---
>>>>>>>>>> drivers/gpu/drm/scheduler/sched_main.c | 53
>>>>>>>>>> +++++++++
>>>>>>>>>> +----------------
>>>>>>>>>> 1 file changed, 21 insertions(+), 32 deletions(-)
>>>>>>>>>>
>>>>>>>>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c
>>>>>>>>>> b/drivers/gpu/
>>>>>>>>>> drm/ scheduler/sched_main.c
>>>>>>>>>> index e2cda28a1af4..5a550fd76bf0 100644
>>>>>>>>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>>>>>>>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>>>>>>>>> @@ -349,34 +349,13 @@ static void
>>>>>>>>>> drm_sched_run_job_queue(struct
>>>>>>>>>> drm_gpu_scheduler *sched)
>>>>>>>>>> }
>>>>>>>>>> /**
>>>>>>>>>> - * __drm_sched_run_free_queue - enqueue free-job
>>>>>>>>>> work
>>>>>>>>>> - * @sched: scheduler instance
>>>>>>>>>> - */
>>>>>>>>>> -static void __drm_sched_run_free_queue(struct
>>>>>>>>>> drm_gpu_scheduler
>>>>>>>>>> *sched)
>>>>>>>>>> -{
>>>>>>>>>> - if (!READ_ONCE(sched->pause_submit))
>>>>>>>>>> - queue_work(sched->submit_wq, &sched-
>>>>>>>>>>> work_free_job);
>>>>>>>>>> -}
>>>>>>>>>> -
>>>>>>>>>> -/**
>>>>>>>>>> - * drm_sched_run_free_queue - enqueue free-job work
>>>>>>>>>> if ready
>>>>>>>>>> + * drm_sched_run_free_queue - enqueue free-job work
>>>>>>>>>> * @sched: scheduler instance
>>>>>>>>>> */
>>>>>>>>>> static void drm_sched_run_free_queue(struct
>>>>>>>>>> drm_gpu_scheduler
>>>>>>>>>> *sched)
>>>>>>>>>> {
>>>>>>>>>> - struct drm_sched_job *job;
>>>>>>>>>> -
>>>>>>>>>> - job = list_first_entry_or_null(&sched-
>>>>>>>>>>> pending_list,
>>>>>>>>>> - struct drm_sched_job, list);
>>>>>>>>>> - if (job && dma_fence_is_signaled(&job->s_fence-
>>>>>>>>>>> finished))
>>>>>>>>>> - __drm_sched_run_free_queue(sched);
>>>>>>>>>
>>>>>>>>> I believe we'd still need this chunk for
>>>>>>>>> DRM_GPU_SCHED_STAT_NO_HANG
>>>>>>>>> (check the comment in
>>>>>>>>> drm_sched_job_reinsert_on_false_timeout()). How
>>>>>>>>
>>>>>>>> You mean the "is there a signaled job in the list check"
>>>>>>>> is needed
>>>>>>>> for drm_sched_job_reinsert_on_false_timeout()? Hmm why?
>>>>>>>> Worst case
>>>>>>>> is a false positive wakeup on the free worker, no?
>>>>>>>
>>>>>>> Correct me if I'm mistaken, we would also have a false
>>>>>>> positive wake-up
>>>>>>> on the run_job worker, which I believe it could be
>>>>>>> problematic in the
>>>>>>> cases that we skipped the reset because the job is still
>>>>>>> running.
>>>>>>
>>>>>> Run job worker exits when it sees no free credits so I don't
>>>>>> think
>>>>>> there is a problem. What am I missing?
>>>>>>
>>>>>
>>>>> I was the one missing the code in `drm_sched_can_queue()`.
>>>>> Sorry for the
>>>>> misleading comments. This is:
>>>>>
>>>>> Reviewed-by: Maíra Canal <mcanal@igalia.com>
>>>>
>>>> No worries, and thanks!
>>>>
>>>> Philipp - are you okay with this version? V2 was done to address
>>>> your
>>>> feedback so that should be good now.
>>>
>>> Was just giving it another spin when you wrote. (a [PATCH v3]
>>> would've
>>> been neat for identification, though – I almost pulled the wrong
>>> patch
>>> from the archive *wink*)
>>
>> Oops, my bad.
>>
>>> LGTM, improves things, can be merged.
>>>
>>> However, we had to merge Lin Cao's bug fix [1] recently. That one
>>> is
>>> now in drm-misc-fixes, and your patch should go to drm-misc-next.
>>> This
>>> would cause a conflict once the two branches meet.
>>>
>>> So I suggest that we wait with this non-urgent patch until drm-
>>> misc-
>>> fixes / Linus's -rc gets merged into drm-misc-next, and then we
>>> apply
>>> it. Should be next week or the week after AFAIK.
>>>
>>> Unless somebody has a better idea, of course?
>>
>> Lin's patch touches sched_entity.c only and mine only sched_main.c -
>> ie.
>> no conflict AFAICT?
>
> Aaahhh, I had a hallucination ^^'
>
> It doesn't apply to drm-misc-fixes, but that is because fixes misses
> changes that yours is based on. Because Lin's patch was the last thing
> I touched on that branch I seem to have jumped to that conclusion.
>
> Should be fine, then. My bad.
>
> Will apply.
Thank you!
This enables me to send out a rebase of the fair DRM scheduler series soon.
Regards,
Tvrtko
>>> Remind me in case I forget.
>>>
>>>
>>> P.
>>>
>>> [1]
>>> https://gitlab.freedesktop.org/drm/misc/kernel/-/commit/15f77764e90a713ee3916ca424757688e4f565b9
>>>
>>>
>>>>
>>>> Regards,
>>>>
>>>> Tvrtko
>>>>
>>>
>>
>
^ permalink raw reply [flat|nested] 19+ messages in thread
end of thread, other threads:[~2025-07-18 10:19 UTC | newest]
Thread overview: 19+ messages
2025-07-08 12:20 [PATCH] drm/sched: Avoid double re-lock on the job free path Tvrtko Ursulin
2025-07-09 4:45 ` Matthew Brost
2025-07-09 10:49 ` Tvrtko Ursulin
2025-07-09 17:22 ` Matthew Brost
2025-07-11 12:39 ` Tvrtko Ursulin
2025-07-11 13:04 ` Philipp Stanner
2025-07-11 15:11 ` Tvrtko Ursulin
-- strict thread matches above, loose matches on Subject: below --
2025-07-16 8:51 Tvrtko Ursulin
2025-07-16 13:31 ` Maíra Canal
2025-07-16 13:49 ` Tvrtko Ursulin
2025-07-16 14:30 ` Maíra Canal
2025-07-16 14:46 ` Tvrtko Ursulin
2025-07-16 20:44 ` Maíra Canal
2025-07-18 7:13 ` Tvrtko Ursulin
2025-07-18 9:31 ` Philipp Stanner
2025-07-18 9:35 ` Tvrtko Ursulin
2025-07-18 9:41 ` Philipp Stanner
2025-07-18 10:18 ` Tvrtko Ursulin
2025-01-14 10:59 Tvrtko Ursulin