From: Matthew Brost <matthew.brost@intel.com>
To: "Christian König" <christian.koenig@amd.com>
Cc: robdclark@chromium.org, sarah.walker@imgtec.com,
ketil.johnsen@arm.com, Liviu.Dudau@arm.com,
frank.binns@imgtec.com, dri-devel@lists.freedesktop.org,
luben.tuikov@amd.com, lina@asahilina.net,
donald.robson@imgtec.com, daniel@ffwll.ch,
boris.brezillon@collabora.com, airlied@gmail.com,
intel-xe@lists.freedesktop.org, faith.ekstrand@collabora.com
Subject: Re: [Intel-xe] [PATCH v2 4/9] drm/sched: Split free_job into own work item
Date: Fri, 18 Aug 2023 13:13:34 +0000
Message-ID: <ZN9ufogOQm4xblxm@DUT025-TGLU.fm.intel.com>
In-Reply-To: <9f6554fd-8748-2613-1c90-921b670ad4d9@amd.com>
On Fri, Aug 18, 2023 at 07:27:33AM +0200, Christian König wrote:
> Am 17.08.23 um 19:54 schrieb Matthew Brost:
> > On Thu, Aug 17, 2023 at 03:39:40PM +0200, Christian König wrote:
> > > Am 11.08.23 um 04:31 schrieb Matthew Brost:
> > > > Rather than calling free_job and run_job in the same work item, have a
> > > > dedicated work item for each. This aligns with the design and intended
> > > > use of work queues.
> > > I would rather say we should get rid of the free_job callback completely.
> > >
> > Would we still have a work item? e.g. Would we still want to call
> > drm_sched_get_cleanup_job, which removes the job from the pending list
> > and adjusts the TDR? Trying to figure out what this looks like. We
> > probably can't do all of this from an IRQ context.
> >
> > > Essentially the job is just the container which carries the information
> > > that is necessary before you push it to the hw. The real representation of
> > > the submission is actually the scheduler fence.
> > >
> > Most free_job implementations call drm_sched_job_cleanup plus a put on the
> > job. In Xe this cannot be called from an IRQ context either.
> >
> > I'm just confused what exactly you are suggesting here.
>
> To summarize in one sentence: instead of the job, we keep the scheduler and
> hardware fences around after pushing the job to the hw.
>
> The free_job callback would then be replaced by dropping the reference on
> the scheduler and hw fence.
>
> Would that work for you?
>
I don't think so, for a few reasons.
The job and hw fence are different structures (and different allocations)
for a reason. The job is referenced until it is complete (the hw fence is
signaled) and free_job has been called. This reference is needed for the
TDR to work properly and for some reset flows too. Also, in Xe some of the
things done in free_job cannot be done from an IRQ context, hence calling
this from the scheduler worker is rather helpful.
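To make that concrete, here is a rough sketch of the kind of work a
driver's free_job ends up doing (illustrative only, not the actual Xe
code; struct example_job and its members are made-up names):

static void example_free_job(struct drm_sched_job *job)
{
	struct example_job *ejob = container_of(job, struct example_job, base);

	/* Drop the scheduler fence references the job still holds. */
	drm_sched_job_cleanup(job);

	/* Driver-side teardown; taking a mutex (a sleeping lock) is why
	 * this has to run from process context, e.g. the scheduler
	 * worker, and not from an IRQ context.
	 */
	mutex_lock(&ejob->queue_lock);
	list_del(&ejob->link);
	mutex_unlock(&ejob->queue_lock);

	/* Final put on the job. */
	kref_put(&ejob->refcount, example_job_destroy);
}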
The HW fence can live longer, as it can be installed in dma-resv slots,
syncobjs, etc. If the job and hw fence were combined, we would now be
holding on to the memory for longer, and perhaps at the mercy of the
user. We also run the risk of the final put being done from an IRQ
context, which again won't work in Xe as it is currently coded. Lastly,
two jobs from the same scheduler could do their final puts in parallel,
so rather than having free_job serialized by the worker, multiple jobs
would now be freeing themselves at the same time. This might not be an
issue, but it adds another level of raciness that needs to be accounted
for. None of this sounds desirable to me.
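To sketch what I mean (again illustrative, not real code): if the job
and hw fence shared one allocation, free_job's work would move into the
fence release callback, which runs wherever the final reference happens
to be dropped:

/* Hypothetical combined job + hw fence. The release callback runs on
 * whatever CPU/context drops the last reference -- possibly an IRQ
 * context, and possibly concurrently for two jobs of one scheduler.
 */
static void example_hw_fence_release(struct dma_fence *f)
{
	struct example_job *job = container_of(f, struct example_job, hw_fence);

	drm_sched_job_cleanup(&job->base);	/* was free_job's work */
	kfree_rcu(job, rcu);	/* assumes a struct rcu_head rcu member */
}

static const struct dma_fence_ops example_fence_ops = {
	.release = example_hw_fence_release,
};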
FWIW, what you are suggesting sounds like how the i915 did things
(i915_request and hw fence in one memory allocation), and that turned out
to be a huge mess. As a rule of thumb, I generally do the opposite of
whatever the i915 did.
Matt
> Christian.
>
> >
> > Matt
> >
> > > All the lifetime issues we had came from ignoring this fact and I think we
> > > should push for fixing this design up again.
> > >
> > > Regards,
> > > Christian.
> > >
> > > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > > > ---
> > > > drivers/gpu/drm/scheduler/sched_main.c | 137 ++++++++++++++++++-------
> > > > include/drm/gpu_scheduler.h | 8 +-
> > > > 2 files changed, 106 insertions(+), 39 deletions(-)
> > > >
> > > > diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> > > > index cede47afc800..b67469eac179 100644
> > > > --- a/drivers/gpu/drm/scheduler/sched_main.c
> > > > +++ b/drivers/gpu/drm/scheduler/sched_main.c
> > > > @@ -213,11 +213,12 @@ void drm_sched_rq_remove_entity(struct drm_sched_rq *rq,
> > > > * drm_sched_rq_select_entity_rr - Select an entity which could provide a job to run
> > > > *
> > > > * @rq: scheduler run queue to check.
> > > > + * @dequeue: dequeue selected entity
> > > > *
> > > > * Try to find a ready entity, returns NULL if none found.
> > > > */
> > > > static struct drm_sched_entity *
> > > > -drm_sched_rq_select_entity_rr(struct drm_sched_rq *rq)
> > > > +drm_sched_rq_select_entity_rr(struct drm_sched_rq *rq, bool dequeue)
> > > > {
> > > > struct drm_sched_entity *entity;
> > > > @@ -227,8 +228,10 @@ drm_sched_rq_select_entity_rr(struct drm_sched_rq *rq)
> > > > if (entity) {
> > > > list_for_each_entry_continue(entity, &rq->entities, list) {
> > > > if (drm_sched_entity_is_ready(entity)) {
> > > > - rq->current_entity = entity;
> > > > - reinit_completion(&entity->entity_idle);
> > > > + if (dequeue) {
> > > > + rq->current_entity = entity;
> > > > + reinit_completion(&entity->entity_idle);
> > > > + }
> > > > spin_unlock(&rq->lock);
> > > > return entity;
> > > > }
> > > > @@ -238,8 +241,10 @@ drm_sched_rq_select_entity_rr(struct drm_sched_rq *rq)
> > > > list_for_each_entry(entity, &rq->entities, list) {
> > > > if (drm_sched_entity_is_ready(entity)) {
> > > > - rq->current_entity = entity;
> > > > - reinit_completion(&entity->entity_idle);
> > > > + if (dequeue) {
> > > > + rq->current_entity = entity;
> > > > + reinit_completion(&entity->entity_idle);
> > > > + }
> > > > spin_unlock(&rq->lock);
> > > > return entity;
> > > > }
> > > > @@ -257,11 +262,12 @@ drm_sched_rq_select_entity_rr(struct drm_sched_rq *rq)
> > > > * drm_sched_rq_select_entity_fifo - Select an entity which provides a job to run
> > > > *
> > > > * @rq: scheduler run queue to check.
> > > > + * @dequeue: dequeue selected entity
> > > > *
> > > > * Find oldest waiting ready entity, returns NULL if none found.
> > > > */
> > > > static struct drm_sched_entity *
> > > > -drm_sched_rq_select_entity_fifo(struct drm_sched_rq *rq)
> > > > +drm_sched_rq_select_entity_fifo(struct drm_sched_rq *rq, bool dequeue)
> > > > {
> > > > struct rb_node *rb;
> > > > @@ -271,8 +277,10 @@ drm_sched_rq_select_entity_fifo(struct drm_sched_rq *rq)
> > > > entity = rb_entry(rb, struct drm_sched_entity, rb_tree_node);
> > > > if (drm_sched_entity_is_ready(entity)) {
> > > > - rq->current_entity = entity;
> > > > - reinit_completion(&entity->entity_idle);
> > > > + if (dequeue) {
> > > > + rq->current_entity = entity;
> > > > + reinit_completion(&entity->entity_idle);
> > > > + }
> > > > break;
> > > > }
> > > > }
> > > > @@ -282,13 +290,54 @@ drm_sched_rq_select_entity_fifo(struct drm_sched_rq *rq)
> > > > }
> > > > /**
> > > > - * drm_sched_submit_queue - scheduler queue submission
> > > > + * drm_sched_run_job_queue - queue job submission
> > > > * @sched: scheduler instance
> > > > */
> > > > -static void drm_sched_submit_queue(struct drm_gpu_scheduler *sched)
> > > > +static void drm_sched_run_job_queue(struct drm_gpu_scheduler *sched)
> > > > {
> > > > if (!READ_ONCE(sched->pause_submit))
> > > > - queue_work(sched->submit_wq, &sched->work_submit);
> > > > + queue_work(sched->submit_wq, &sched->work_run_job);
> > > > +}
> > > > +
> > > > +static struct drm_sched_entity *
> > > > +drm_sched_select_entity(struct drm_gpu_scheduler *sched, bool dequeue);
> > > > +
> > > > +/**
> > > > + * drm_sched_run_job_queue_if_ready - queue job submission if ready
> > > > + * @sched: scheduler instance
> > > > + */
> > > > +static void drm_sched_run_job_queue_if_ready(struct drm_gpu_scheduler *sched)
> > > > +{
> > > > + if (drm_sched_select_entity(sched, false))
> > > > + drm_sched_run_job_queue(sched);
> > > > +}
> > > > +
> > > > +/**
> > > > + * drm_sched_free_job_queue - queue free job
> > > > + *
> > > > + * @sched: scheduler instance to queue free job
> > > > + */
> > > > +static void drm_sched_free_job_queue(struct drm_gpu_scheduler *sched)
> > > > +{
> > > > + if (!READ_ONCE(sched->pause_submit))
> > > > + queue_work(sched->submit_wq, &sched->work_free_job);
> > > > +}
> > > > +
> > > > +/**
> > > > + * drm_sched_free_job_queue_if_ready - queue free job if ready
> > > > + *
> > > > + * @sched: scheduler instance to queue free job
> > > > + */
> > > > +static void drm_sched_free_job_queue_if_ready(struct drm_gpu_scheduler *sched)
> > > > +{
> > > > + struct drm_sched_job *job;
> > > > +
> > > > + spin_lock(&sched->job_list_lock);
> > > > + job = list_first_entry_or_null(&sched->pending_list,
> > > > + struct drm_sched_job, list);
> > > > + if (job && dma_fence_is_signaled(&job->s_fence->finished))
> > > > + drm_sched_free_job_queue(sched);
> > > > + spin_unlock(&sched->job_list_lock);
> > > > }
> > > > /**
> > > > @@ -310,7 +359,7 @@ static void drm_sched_job_done(struct drm_sched_job *s_job, int result)
> > > > dma_fence_get(&s_fence->finished);
> > > > drm_sched_fence_finished(s_fence, result);
> > > > dma_fence_put(&s_fence->finished);
> > > > - drm_sched_submit_queue(sched);
> > > > + drm_sched_free_job_queue(sched);
> > > > }
> > > > /**
> > > > @@ -906,18 +955,19 @@ static bool drm_sched_can_queue(struct drm_gpu_scheduler *sched)
> > > > void drm_sched_wakeup_if_can_queue(struct drm_gpu_scheduler *sched)
> > > > {
> > > > if (drm_sched_can_queue(sched))
> > > > - drm_sched_submit_queue(sched);
> > > > + drm_sched_run_job_queue(sched);
> > > > }
> > > > /**
> > > > * drm_sched_select_entity - Select next entity to process
> > > > *
> > > > * @sched: scheduler instance
> > > > + * @dequeue: dequeue selected entity
> > > > *
> > > > * Returns the entity to process or NULL if none are found.
> > > > */
> > > > static struct drm_sched_entity *
> > > > -drm_sched_select_entity(struct drm_gpu_scheduler *sched)
> > > > +drm_sched_select_entity(struct drm_gpu_scheduler *sched, bool dequeue)
> > > > {
> > > > struct drm_sched_entity *entity;
> > > > int i;
> > > > @@ -935,8 +985,10 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
> > > > /* Kernel run queue has higher priority than normal run queue*/
> > > > for (i = DRM_SCHED_PRIORITY_COUNT - 1; i >= DRM_SCHED_PRIORITY_MIN; i--) {
> > > > entity = sched->sched_policy == DRM_SCHED_POLICY_FIFO ?
> > > > - drm_sched_rq_select_entity_fifo(&sched->sched_rq[i]) :
> > > > - drm_sched_rq_select_entity_rr(&sched->sched_rq[i]);
> > > > + drm_sched_rq_select_entity_fifo(&sched->sched_rq[i],
> > > > + dequeue) :
> > > > + drm_sched_rq_select_entity_rr(&sched->sched_rq[i],
> > > > + dequeue);
> > > > if (entity)
> > > > break;
> > > > }
> > > > @@ -1024,30 +1076,44 @@ drm_sched_pick_best(struct drm_gpu_scheduler **sched_list,
> > > > EXPORT_SYMBOL(drm_sched_pick_best);
> > > > /**
> > > > - * drm_sched_main - main scheduler thread
> > > > + * drm_sched_free_job_work - worker to call free_job
> > > > *
> > > > - * @param: scheduler instance
> > > > + * @w: free job work
> > > > */
> > > > -static void drm_sched_main(struct work_struct *w)
> > > > +static void drm_sched_free_job_work(struct work_struct *w)
> > > > {
> > > > struct drm_gpu_scheduler *sched =
> > > > - container_of(w, struct drm_gpu_scheduler, work_submit);
> > > > - struct drm_sched_entity *entity;
> > > > + container_of(w, struct drm_gpu_scheduler, work_free_job);
> > > > struct drm_sched_job *cleanup_job;
> > > > - int r;
> > > > if (READ_ONCE(sched->pause_submit))
> > > > return;
> > > > cleanup_job = drm_sched_get_cleanup_job(sched);
> > > > - entity = drm_sched_select_entity(sched);
> > > > + if (cleanup_job) {
> > > > + sched->ops->free_job(cleanup_job);
> > > > +
> > > > + drm_sched_free_job_queue_if_ready(sched);
> > > > + drm_sched_run_job_queue_if_ready(sched);
> > > > + }
> > > > +}
> > > > - if (!entity && !cleanup_job)
> > > > - return; /* No more work */
> > > > +/**
> > > > + * drm_sched_run_job_work - worker to call run_job
> > > > + *
> > > > + * @w: run job work
> > > > + */
> > > > +static void drm_sched_run_job_work(struct work_struct *w)
> > > > +{
> > > > + struct drm_gpu_scheduler *sched =
> > > > + container_of(w, struct drm_gpu_scheduler, work_run_job);
> > > > + struct drm_sched_entity *entity;
> > > > + int r;
> > > > - if (cleanup_job)
> > > > - sched->ops->free_job(cleanup_job);
> > > > + if (READ_ONCE(sched->pause_submit))
> > > > + return;
> > > > + entity = drm_sched_select_entity(sched, true);
> > > > if (entity) {
> > > > struct dma_fence *fence;
> > > > struct drm_sched_fence *s_fence;
> > > > @@ -1056,9 +1122,7 @@ static void drm_sched_main(struct work_struct *w)
> > > > sched_job = drm_sched_entity_pop_job(entity);
> > > > if (!sched_job) {
> > > > complete_all(&entity->entity_idle);
> > > > - if (!cleanup_job)
> > > > - return; /* No more work */
> > > > - goto again;
> > > > + return; /* No more work */
> > > > }
> > > > s_fence = sched_job->s_fence;
> > > > @@ -1088,10 +1152,8 @@ static void drm_sched_main(struct work_struct *w)
> > > > }
> > > > wake_up(&sched->job_scheduled);
> > > > + drm_sched_run_job_queue_if_ready(sched);
> > > > }
> > > > -
> > > > -again:
> > > > - drm_sched_submit_queue(sched);
> > > > }
> > > > /**
> > > > @@ -1150,7 +1212,8 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
> > > > spin_lock_init(&sched->job_list_lock);
> > > > atomic_set(&sched->hw_rq_count, 0);
> > > > INIT_DELAYED_WORK(&sched->work_tdr, drm_sched_job_timedout);
> > > > - INIT_WORK(&sched->work_submit, drm_sched_main);
> > > > + INIT_WORK(&sched->work_run_job, drm_sched_run_job_work);
> > > > + INIT_WORK(&sched->work_free_job, drm_sched_free_job_work);
> > > > atomic_set(&sched->_score, 0);
> > > > atomic64_set(&sched->job_id_count, 0);
> > > > sched->pause_submit = false;
> > > > @@ -1275,7 +1338,8 @@ EXPORT_SYMBOL(drm_sched_submit_ready);
> > > > void drm_sched_submit_stop(struct drm_gpu_scheduler *sched)
> > > > {
> > > > WRITE_ONCE(sched->pause_submit, true);
> > > > - cancel_work_sync(&sched->work_submit);
> > > > + cancel_work_sync(&sched->work_run_job);
> > > > + cancel_work_sync(&sched->work_free_job);
> > > > }
> > > > EXPORT_SYMBOL(drm_sched_submit_stop);
> > > > @@ -1287,6 +1351,7 @@ EXPORT_SYMBOL(drm_sched_submit_stop);
> > > > void drm_sched_submit_start(struct drm_gpu_scheduler *sched)
> > > > {
> > > > WRITE_ONCE(sched->pause_submit, false);
> > > > - queue_work(sched->submit_wq, &sched->work_submit);
> > > > + queue_work(sched->submit_wq, &sched->work_run_job);
> > > > + queue_work(sched->submit_wq, &sched->work_free_job);
> > > > }
> > > > EXPORT_SYMBOL(drm_sched_submit_start);
> > > > diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> > > > index 04eec2d7635f..fbc083a92757 100644
> > > > --- a/include/drm/gpu_scheduler.h
> > > > +++ b/include/drm/gpu_scheduler.h
> > > > @@ -487,9 +487,10 @@ struct drm_sched_backend_ops {
> > > > * finished.
> > > > * @hw_rq_count: the number of jobs currently in the hardware queue.
> > > > * @job_id_count: used to assign unique id to the each job.
> > > > - * @submit_wq: workqueue used to queue @work_submit
> > > > + * @submit_wq: workqueue used to queue @work_run_job and @work_free_job
> > > > * @timeout_wq: workqueue used to queue @work_tdr
> > > > - * @work_submit: schedules jobs and cleans up entities
> > > > + * @work_run_job: schedules jobs
> > > > + * @work_free_job: cleans up jobs
> > > > * @work_tdr: schedules a delayed call to @drm_sched_job_timedout after the
> > > > * timeout interval is over.
> > > > * @pending_list: the list of jobs which are currently in the job queue.
> > > > @@ -518,7 +519,8 @@ struct drm_gpu_scheduler {
> > > > atomic64_t job_id_count;
> > > > struct workqueue_struct *submit_wq;
> > > > struct workqueue_struct *timeout_wq;
> > > > - struct work_struct work_submit;
> > > > + struct work_struct work_run_job;
> > > > + struct work_struct work_free_job;
> > > > struct delayed_work work_tdr;
> > > > struct list_head pending_list;
> > > > spinlock_t job_list_lock;
>