Intel-XE Archive on lore.kernel.org
From: "Christian König" <christian.koenig@amd.com>
To: Danilo Krummrich <dakr@redhat.com>,
	Matthew Brost <matthew.brost@intel.com>
Cc: robdclark@chromium.org, sarah.walker@imgtec.com,
	ketil.johnsen@arm.com, lina@asahilina.net, mcanal@igalia.com,
	Liviu.Dudau@arm.com, dri-devel@lists.freedesktop.org,
	luben.tuikov@amd.com, donald.robson@imgtec.com,
	boris.brezillon@collabora.com, intel-xe@lists.freedesktop.org,
	faith.ekstrand@collabora.com
Subject: Re: [Intel-xe] [PATCH v3 11/13] drm/sched: Waiting for pending jobs to complete in scheduler kill
Date: Tue, 19 Sep 2023 07:55:57 +0200
Message-ID: <2ec93a2a-0d4a-be90-3420-61c1782b9a72@amd.com>
In-Reply-To: <7aced2f9-1db1-8e22-f635-842e300d420c@redhat.com>

Am 18.09.23 um 16:57 schrieb Danilo Krummrich:
> [SNIP]
>> What this component should do is push jobs to the hardware, not
>> oversee their execution; that's the job of the driver.
>
> While, generally, I'd agree, I think we can't really get around having
> something that frees the job once its fence got signaled. This
> "something" could be the driver, but once it ends up being the same
> code over and over again for every driver, we're probably back to
> letting the scheduler do it instead in a common way.

We already have a driver private void* in the scheduler fence. What we
could do is let the scheduler provide a way to call a function when it
signals.
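One possible shape for such a hook, built on the existing dma_fence callback machinery. This is a sketch only; struct drm_sched_fence_cb, its fields, and driver_hook_finished() are hypothetical names, not existing drm_sched API:

```c
/*
 * Hypothetical sketch: let a driver register a callback that fires
 * when the scheduler's finished fence signals. Only
 * dma_fence_add_callback() and to_drm_sched_fence() are real APIs;
 * everything else here is an assumption.
 */
struct drm_sched_fence_cb {
	struct dma_fence_cb base;
	void (*on_signal)(struct drm_sched_fence *s_fence, void *data);
	void *data;
};

static void drm_sched_fence_signal_cb(struct dma_fence *f,
				      struct dma_fence_cb *cb)
{
	struct drm_sched_fence_cb *scb =
		container_of(cb, struct drm_sched_fence_cb, base);

	scb->on_signal(to_drm_sched_fence(f), scb->data);
}

/* Driver side: arm the callback on the scheduler's finished fence. */
static int driver_hook_finished(struct drm_sched_fence *s_fence,
				struct drm_sched_fence_cb *scb)
{
	return dma_fence_add_callback(&s_fence->finished, &scb->base,
				      drm_sched_fence_signal_cb);
}
```

The callback runs in whatever context signals the fence, so a real implementation would need the usual dma_fence callback restrictions (no sleeping, no fence waits).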

>
>>
>> In other words drivers should be able to call drm_sched_fini() while 
>> there are jobs still pending on the hardware.
>
> Unless we have a better idea on how to do this, I'd, as mentioned,
> suggest having something like drm_sched_teardown() and/or
> drm_sched_teardown_timeout() waiting for pending jobs.

Yeah, something like that. But I think the better approach would be to
provide an iterator to go over the pending fences in the scheduler.

This could then be used for quite a few use cases, e.g. even for
signaling the hardware fences.

Waiting for the last one is then just a "drm_sched_for_each_pending(...) 
dma_fence_wait_timeout(pending->finished....);".
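A rough sketch of what "wait for the last pending job" could look like with such an iterator. The function name is a placeholder; the loop shape (take a fence reference under the lock, wait outside it) is the part that matters, since sleeping in dma_fence_wait_timeout() under job_list_lock would be illegal:

```c
/*
 * Hypothetical helper: wait until the pending list drains, with a
 * total timeout in jiffies. Assumes free_job eventually removes
 * signaled jobs from pending_list; not existing drm_sched API.
 */
static long example_wait_pending(struct drm_gpu_scheduler *sched,
				 long timeout)
{
	struct drm_sched_job *s_job;
	struct dma_fence *fence;

	for (;;) {
		spin_lock(&sched->job_list_lock);
		s_job = list_first_entry_or_null(&sched->pending_list,
						 struct drm_sched_job,
						 list);
		/* Pin the fence so the job can be freed under us. */
		fence = s_job ?
			dma_fence_get(&s_job->s_fence->finished) : NULL;
		spin_unlock(&sched->job_list_lock);

		if (!fence)
			return timeout; /* nothing pending anymore */

		timeout = dma_fence_wait_timeout(fence, false, timeout);
		dma_fence_put(fence);
		if (timeout <= 0)
			return timeout; /* timed out or error */
	}
}
```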

>
>>
>> Also keep in mind that you *can't* wait for all hw operations to 
>> finish in your flush or file descriptor close callback or you create 
>> un-killable processes.
>
> Right, that's why in Nouveau I try to wait for the channel (ring) to
> become idle and, if this doesn't happen in a "reasonable" amount of
> time, I kill the fence context, signalling all fences with an error
> code, and wait for the scheduler to become idle, which comes down to
> only waiting for all free_job() callbacks to finish, since all jobs
> are signaled already.

Exactly, that's the right thing to do. Can we please document that somewhere?
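The teardown sequence described above could be documented roughly like this. The driver-side helpers (driver_chan_wait_idle, driver_fence_context_kill) and the channel struct are placeholders, not actual Nouveau functions; only drm_sched_entity_fini() and drm_sched_fini() are real:

```c
/*
 * Sketch of the bounded teardown sequence: try to drain, then
 * force-signal with an error, then let the scheduler finish cleanup.
 */
static void driver_sched_teardown(struct driver_channel *chan)
{
	/* 1) Give the hardware a bounded chance to drain the ring. */
	if (!driver_chan_wait_idle(chan, msecs_to_jiffies(2000))) {
		/*
		 * 2) The ring did not go idle in time: force-signal
		 * every fence of this context with an error so nothing
		 * waits on the hardware anymore. This keeps the process
		 * killable.
		 */
		driver_fence_context_kill(chan, -ENODEV);
	}

	/*
	 * 3) All fences are signaled at this point, so the entity and
	 * scheduler teardown only has to wait for the remaining
	 * free_job() callbacks, which is guaranteed to finish.
	 */
	drm_sched_entity_fini(&chan->entity);
	drm_sched_fini(&chan->sched);
}
```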

Regards,
Christian.

>
>>
>> Regards,
>> Christian.
>>
>>>
>>>>
>>>> Matt
>>>>
>>>>> Regards,
>>>>> Christian.
>>>>>
>>>>>>
>>>>>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>>>>>> ---
>>>>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c |  2 +-
>>>>>>    drivers/gpu/drm/scheduler/sched_entity.c    |  7 ++-
>>>>>>    drivers/gpu/drm/scheduler/sched_main.c      | 50 
>>>>>> ++++++++++++++++++---
>>>>>>    include/drm/gpu_scheduler.h                 | 18 ++++++++
>>>>>>    4 files changed, 70 insertions(+), 7 deletions(-)
>>>>>>
>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c 
>>>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
>>>>>> index fb5dad687168..7835c0da65c5 100644
>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
>>>>>> @@ -1873,7 +1873,7 @@ static void 
>>>>>> amdgpu_ib_preempt_mark_partial_job(struct amdgpu_ring *ring)
>>>>>>        list_for_each_entry_safe(s_job, tmp, &sched->pending_list, 
>>>>>> list) {
>>>>>>            if (dma_fence_is_signaled(&s_job->s_fence->finished)) {
>>>>>>                /* remove job from ring_mirror_list */
>>>>>> -            list_del_init(&s_job->list);
>>>>>> +            drm_sched_remove_pending_job(s_job);
>>>>>>                sched->ops->free_job(s_job);
>>>>>>                continue;
>>>>>>            }
>>>>>> diff --git a/drivers/gpu/drm/scheduler/sched_entity.c 
>>>>>> b/drivers/gpu/drm/scheduler/sched_entity.c
>>>>>> index 1dec97caaba3..37557fbb96d0 100644
>>>>>> --- a/drivers/gpu/drm/scheduler/sched_entity.c
>>>>>> +++ b/drivers/gpu/drm/scheduler/sched_entity.c
>>>>>> @@ -104,9 +104,11 @@ int drm_sched_entity_init(struct 
>>>>>> drm_sched_entity *entity,
>>>>>>        }
>>>>>>        init_completion(&entity->entity_idle);
>>>>>> +    init_completion(&entity->jobs_done);
>>>>>> -    /* We start in an idle state. */
>>>>>> +    /* We start in an idle and jobs done state. */
>>>>>>        complete_all(&entity->entity_idle);
>>>>>> +    complete_all(&entity->jobs_done);
>>>>>>        spin_lock_init(&entity->rq_lock);
>>>>>>        spsc_queue_init(&entity->job_queue);
>>>>>> @@ -256,6 +258,9 @@ static void drm_sched_entity_kill(struct 
>>>>>> drm_sched_entity *entity)
>>>>>>        /* Make sure this entity is not used by the scheduler at 
>>>>>> the moment */
>>>>>>        wait_for_completion(&entity->entity_idle);
>>>>>> +    /* Make sure all pending jobs are done */
>>>>>> +    wait_for_completion(&entity->jobs_done);
>>>>>> +
>>>>>>        /* The entity is guaranteed to not be used by the 
>>>>>> scheduler */
>>>>>>        prev = rcu_dereference_check(entity->last_scheduled, true);
>>>>>>        dma_fence_get(prev);
>>>>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
>>>>>> b/drivers/gpu/drm/scheduler/sched_main.c
>>>>>> index 689fb6686e01..ed6f5680793a 100644
>>>>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>>>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>>>>> @@ -510,12 +510,52 @@ void drm_sched_resume_timeout(struct 
>>>>>> drm_gpu_scheduler *sched,
>>>>>>    }
>>>>>>    EXPORT_SYMBOL(drm_sched_resume_timeout);
>>>>>> +/**
>>>>>> + * drm_sched_add_pending_job - Add pending job to scheduler
>>>>>> + *
>>>>>> + * @job: scheduler job to add
>>>>>> + * @tail: add to tail of pending list
>>>>>> + */
>>>>>> +void drm_sched_add_pending_job(struct drm_sched_job *job, bool 
>>>>>> tail)
>>>>>> +{
>>>>>> +    struct drm_gpu_scheduler *sched = job->sched;
>>>>>> +    struct drm_sched_entity *entity = job->entity;
>>>>>> +
>>>>>> +    lockdep_assert_held(&sched->job_list_lock);
>>>>>> +
>>>>>> +    if (tail)
>>>>>> +        list_add_tail(&job->list, &sched->pending_list);
>>>>>> +    else
>>>>>> +        list_add(&job->list, &sched->pending_list);
>>>>>> +    if (!entity->pending_job_count++)
>>>>>> +        reinit_completion(&entity->jobs_done);
>>>>>> +}
>>>>>> +EXPORT_SYMBOL(drm_sched_add_pending_job);
>>>>>> +
>>>>>> +/**
>>>>>> + * drm_sched_remove_pending_job - Remove pending job from
>>>>>> scheduler
>>>>>> + *
>>>>>> + * @job: scheduler job to remove
>>>>>> + */
>>>>>> +void drm_sched_remove_pending_job(struct drm_sched_job *job)
>>>>>> +{
>>>>>> +    struct drm_gpu_scheduler *sched = job->sched;
>>>>>> +    struct drm_sched_entity *entity = job->entity;
>>>>>> +
>>>>>> +    lockdep_assert_held(&sched->job_list_lock);
>>>>>> +
>>>>>> +    list_del_init(&job->list);
>>>>>> +    if (!--entity->pending_job_count)
>>>>>> +        complete_all(&entity->jobs_done);
>>>>>> +}
>>>>>> +EXPORT_SYMBOL(drm_sched_remove_pending_job);
>>>>>> +
>>>>>>    static void drm_sched_job_begin(struct drm_sched_job *s_job)
>>>>>>    {
>>>>>>        struct drm_gpu_scheduler *sched = s_job->sched;
>>>>>>        spin_lock(&sched->job_list_lock);
>>>>>> -    list_add_tail(&s_job->list, &sched->pending_list);
>>>>>> +    drm_sched_add_pending_job(s_job, true);
>>>>>>        spin_unlock(&sched->job_list_lock);
>>>>>>    }
>>>>>> @@ -538,7 +578,7 @@ static void drm_sched_job_timedout(struct 
>>>>>> work_struct *work)
>>>>>>             * drm_sched_cleanup_jobs. It will be reinserted back 
>>>>>> after sched->thread
>>>>>>             * is parked at which point it's safe.
>>>>>>             */
>>>>>> -        list_del_init(&job->list);
>>>>>> +        drm_sched_remove_pending_job(job);
>>>>>>            spin_unlock(&sched->job_list_lock);
>>>>>>            status = job->sched->ops->timedout_job(job);
>>>>>> @@ -589,7 +629,7 @@ void drm_sched_stop(struct drm_gpu_scheduler 
>>>>>> *sched, struct drm_sched_job *bad)
>>>>>>             * Add at the head of the queue to reflect it was the 
>>>>>> earliest
>>>>>>             * job extracted.
>>>>>>             */
>>>>>> -        list_add(&bad->list, &sched->pending_list);
>>>>>> +        drm_sched_add_pending_job(bad, false);
>>>>>>        /*
>>>>>>         * Iterate the job list from later to  earlier one and 
>>>>>> either deactive
>>>>>> @@ -611,7 +651,7 @@ void drm_sched_stop(struct drm_gpu_scheduler 
>>>>>> *sched, struct drm_sched_job *bad)
>>>>>>                 * Locking here is for concurrent resume timeout
>>>>>>                 */
>>>>>>                spin_lock(&sched->job_list_lock);
>>>>>> -            list_del_init(&s_job->list);
>>>>>> +            drm_sched_remove_pending_job(s_job);
>>>>>> spin_unlock(&sched->job_list_lock);
>>>>>>                /*
>>>>>> @@ -1066,7 +1106,7 @@ drm_sched_get_cleanup_job(struct 
>>>>>> drm_gpu_scheduler *sched)
>>>>>>        if (job && dma_fence_is_signaled(&job->s_fence->finished)) {
>>>>>>            /* remove job from pending_list */
>>>>>> -        list_del_init(&job->list);
>>>>>> +        drm_sched_remove_pending_job(job);
>>>>>>            /* cancel this job's TO timer */
>>>>>>            cancel_delayed_work(&sched->work_tdr);
>>>>>> diff --git a/include/drm/gpu_scheduler.h 
>>>>>> b/include/drm/gpu_scheduler.h
>>>>>> index b7b818cd81b6..7c628f36fe78 100644
>>>>>> --- a/include/drm/gpu_scheduler.h
>>>>>> +++ b/include/drm/gpu_scheduler.h
>>>>>> @@ -233,6 +233,21 @@ struct drm_sched_entity {
>>>>>>         */
>>>>>>        struct completion        entity_idle;
>>>>>> +    /**
>>>>>> +     * @pending_job_count:
>>>>>> +     *
>>>>>> +     * Number of pending jobs.
>>>>>> +     */
>>>>>> +    unsigned int                    pending_job_count;
>>>>>> +
>>>>>> +    /**
>>>>>> +     * @jobs_done:
>>>>>> +     *
>>>>>> +     * Signals when entity has no pending jobs, used to sequence 
>>>>>> entity
>>>>>> +     * cleanup in drm_sched_entity_fini().
>>>>>> +     */
>>>>>> +    struct completion        jobs_done;
>>>>>> +
>>>>>>        /**
>>>>>>         * @oldest_job_waiting:
>>>>>>         *
>>>>>> @@ -656,4 +671,7 @@ struct drm_gpu_scheduler *
>>>>>>    drm_sched_pick_best(struct drm_gpu_scheduler **sched_list,
>>>>>>                 unsigned int num_sched_list);
>>>>>> +void drm_sched_add_pending_job(struct drm_sched_job *job, bool 
>>>>>> tail);
>>>>>> +void drm_sched_remove_pending_job(struct drm_sched_job *job);
>>>>>> +
>>>>>>    #endif
>>>>>
>>>
>>
>

