From: Danilo Krummrich <dakr@redhat.com>
To: Matthew Brost <matthew.brost@intel.com>
Cc: robdclark@chromium.org, sarah.walker@imgtec.com,
	ketil.johnsen@arm.com, lina@asahilina.net, Liviu.Dudau@arm.com,
	dri-devel@lists.freedesktop.org, intel-xe@lists.freedesktop.org,
	luben.tuikov@amd.com, donald.robson@imgtec.com,
	boris.brezillon@collabora.com, christian.koenig@amd.com,
	faith.ekstrand@collabora.com
Subject: Re: [Intel-xe] [PATCH v2 4/9] drm/sched: Split free_job into own work item
Date: Tue, 29 Aug 2023 03:20:37 +0200	[thread overview]
Message-ID: <fe3e6d7b-8915-e6c5-43db-ddb778bfea9c@redhat.com> (raw)
In-Reply-To: <ZOzqUtgXj0J4muYQ@DUT025-TGLU.fm.intel.com>

On 8/28/23 20:41, Matthew Brost wrote:
> On Mon, Aug 28, 2023 at 08:04:31PM +0200, Danilo Krummrich wrote:
>> On 8/11/23 04:31, Matthew Brost wrote:
>>> Rather than call free_job and run_job in same work item have a dedicated
>>> work item for each. This aligns with the design and intended use of work
>>> queues.
>>>
>>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>>> ---
>>>    drivers/gpu/drm/scheduler/sched_main.c | 137 ++++++++++++++++++-------
>>>    include/drm/gpu_scheduler.h            |   8 +-
>>>    2 files changed, 106 insertions(+), 39 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
>>> index cede47afc800..b67469eac179 100644
>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>> @@ -1275,7 +1338,8 @@ EXPORT_SYMBOL(drm_sched_submit_ready);
>>>    void drm_sched_submit_stop(struct drm_gpu_scheduler *sched)
>>
>> I was wondering what the scheduler teardown sequence looks like for
>> DRM_SCHED_POLICY_SINGLE_ENTITY and how XE does that.
>>
>> In Nouveau, userspace can ask the kernel to create a channel (or multiple),
>> where each channel represents a ring feeding the firmware scheduler. Userspace
>> can forcefully close channels via either a dedicated IOCTL or by just closing
>> the FD which subsequently closes all channels opened through this FD.
>>
>> When this happens, the scheduler needs to be torn down. Without keeping track of
>> things in a driver-specific way, the only thing I could really come up with is the
>> following.
>>
>> /* Make sure no more jobs are fetched from the entity. */
>> drm_sched_submit_stop();
>>
>> /* Wait for the channel to be idle, namely jobs in flight to complete. */
>> nouveau_channel_idle();
>>
>> /* Stop the scheduler to free jobs from the pending_list. The ring must be idle at
>>   * this point, otherwise we might leak jobs. Feels more like a workaround to free
>>   * finished jobs.
>>   */
>> drm_sched_stop();
>>
>> /* Free jobs from the entity queue. */
>> drm_sched_entity_fini();
>>
>> /* Probably not even needed in this case. */
>> drm_sched_fini();
>>
>> This doesn't look very straightforward though. I wonder if other drivers feeding
>> firmware schedulers have similar cases. Maybe something like drm_sched_teardown(),
>> which would stop job submission, wait for pending jobs to finish and subsequently
>> free them up, would make sense?
>>
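For illustration, the quoted sequence could be folded into a single helper along these lines. This is a rough, non-compilable sketch: drm_sched_teardown() does not exist in the DRM scheduler today, and the idle() hook stands in for a driver-specific wait such as nouveau_channel_idle().

```c
/* Hypothetical helper; a sketch only of the proposed drm_sched_teardown(). */
void drm_sched_teardown(struct drm_gpu_scheduler *sched,
			struct drm_sched_entity *entity,
			void (*idle)(struct drm_gpu_scheduler *sched))
{
	/* Make sure no more jobs are fetched from the entity. */
	drm_sched_submit_stop(sched);

	/* Wait for jobs in flight to complete (driver-specific). */
	idle(sched);

	/* Free finished jobs from the pending_list. */
	drm_sched_stop(sched, NULL);

	/* Free jobs still sitting in the entity queue. */
	drm_sched_entity_fini(entity);

	/* Release the remaining scheduler resources. */
	drm_sched_fini(sched);
}
```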
> 
> exec queue == gpu scheduler + entity in Xe
> 
> We kinda invented our own flow with reference counting + use the TDR for
> cleanup.

Thanks for the detailed explanation. In case of making it driver-specific,
I thought about something similar: pretty much the same reference counting,
but instead of the TDR, let jobs from the entity just return -ECANCELED from
run_job() and also signal pending jobs with the same error code.

On the other hand, I don't really want scheduler and job structures to
potentially outlive the channel. Which is why I think it'd be nice to avoid
consuming all the queued-up jobs from the entity in the first place, stop the
scheduler with drm_sched_submit_stop(), signal all pending jobs with
-ECANCELED and call the free_job() callbacks right away.

The latter I could probably do in Nouveau as well; however, it kinda feels
wrong to do all that within the driver.
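As a sketch, that flush might look something like this if it lived in the scheduler rather than the driver. Hypothetical and illustrative only: no such helper exists, pending_list locking is omitted for brevity, and only dma_fence_set_error() and dma_fence_signal() are existing interfaces.

```c
/*
 * Sketch of the proposed flush, as it might look inside the scheduler
 * rather than the driver. Hypothetical: no such helper exists today.
 */
static void drm_sched_cancel_pending_jobs(struct drm_gpu_scheduler *sched)
{
	struct drm_sched_job *job, *tmp;

	/* No new jobs are fetched from the entity past this point. */
	drm_sched_submit_stop(sched);

	list_for_each_entry_safe(job, tmp, &sched->pending_list, list) {
		list_del_init(&job->list);

		/* Signal the job's finished fence with -ECANCELED... */
		dma_fence_set_error(&job->s_fence->finished, -ECANCELED);
		dma_fence_signal(&job->s_fence->finished);

		/* ...and release it right away via the driver callback. */
		sched->ops->free_job(job);
	}
}
```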

Also, I was wondering how existing drivers using the GPU scheduler handle
that. It seems like they just rely on the scheduler's pending_list being
empty once drm_sched_fini() is called. Admittedly, a non-empty list is pretty
unlikely at that point, since drm_sched_fini() is typically called on driver
remove, but I don't see how that's actually ensured. Am I missing something?
> 
> We have a creation ref for the exec queue, plus each job takes a ref to
> the exec queue. On exec queue close [1][2] (whether that be via IOCTL or FD
> close) we drop the creation reference and call a vfunc for killing the
> exec queue. The firmware implementation is here [3].
> 
> If you read through it, it just sets the TDR to the minimum value [4]. The
> TDR will kick any running jobs off the hardware and signal the jobs'
> fences; any jobs waiting on dependencies eventually flush out via
> run_job + TDR for cleanup without going on the hardware. Once all jobs are
> flushed out, the exec queue reference count goes to zero, we trigger the
> exec queue cleanup flow and finally free all memory for the exec queue.
> 
> Using the TDR in this way is how we tear down an exec queue for other
> reasons too (user page fault, user job timeout, user job hang detected
> by firmware, device reset, etc.).
> 
> This all works rather nicely and is a single code path for all of these
> cases. I'm not sure if this can be made any more generic, nor do I really
> see the need to (at least I don't see Xe needing a generic solution).
> 
> Hope this helps,
> Matt
> 
> [1] https://gitlab.freedesktop.org/drm/xe/kernel/-/blob/drm-xe-next/drivers/gpu/drm/xe/xe_exec_queue.c#L911
> [2] https://gitlab.freedesktop.org/drm/xe/kernel/-/blob/drm-xe-next/drivers/gpu/drm/xe/xe_device.c#L77
> [3] https://gitlab.freedesktop.org/drm/xe/kernel/-/tree/drm-xe-next/drivers/gpu/drm/xe#L1184
> [4] https://gitlab.freedesktop.org/drm/xe/kernel/-/tree/drm-xe-next/drivers/gpu/drm/xe#L789
> 
>> - Danilo
>>




Thread overview: 163+ messages
2023-08-11  2:31 [Intel-xe] [PATCH v2 0/9] DRM scheduler changes for Xe Matthew Brost
2023-08-11  2:31 ` [Intel-xe] [PATCH v2 1/9] drm/sched: Convert drm scheduler to use a work queue rather than kthread Matthew Brost
2023-08-16 11:30   ` [Intel-xe] " Danilo Krummrich
2023-08-16 14:05     ` [Intel-xe] " Christian König
2023-08-16 12:30       ` [Intel-xe] " Danilo Krummrich
2023-08-16 14:38         ` [Intel-xe] " Matthew Brost
2023-08-16 15:40           ` [Intel-xe] " Danilo Krummrich
2023-08-16 14:59         ` [Intel-xe] " Christian König
2023-08-16 16:33           ` [Intel-xe] " Danilo Krummrich
2023-08-17  5:33             ` [Intel-xe] " Christian König
2023-08-17 11:13               ` [Intel-xe] " Danilo Krummrich
2023-08-17 13:35                 ` [Intel-xe] " Christian König
2023-08-17 12:48                   ` [Intel-xe] " Danilo Krummrich
2023-08-17 16:17                     ` [Intel-xe] " Christian König
2023-08-18 11:58                       ` [Intel-xe] " Danilo Krummrich
2023-08-21 14:07                         ` [Intel-xe] " Christian König
2023-08-21 18:01                           ` [Intel-xe] " Danilo Krummrich
2023-08-21 18:12                             ` [Intel-xe] " Christian König
2023-08-21 19:07                               ` [Intel-xe] " Danilo Krummrich
2023-08-22  9:35                                 ` [Intel-xe] " Christian König
2023-08-21 19:46                               ` [Intel-xe] " Faith Ekstrand
2023-08-22  9:51                                 ` [Intel-xe] " Christian König
2023-08-22 16:55                                   ` [Intel-xe] " Faith Ekstrand
2023-08-24 11:50                                     ` [Intel-xe] " Bas Nieuwenhuizen
2023-08-18  3:08                 ` [Intel-xe] " Matthew Brost
2023-08-18  5:40                   ` [Intel-xe] " Christian König
2023-08-18 12:49                     ` [Intel-xe] " Matthew Brost
2023-08-18 12:06                       ` [Intel-xe] " Danilo Krummrich
2023-09-12 14:28                 ` [Intel-xe] " Boris Brezillon
2023-09-12 14:33                   ` [Intel-xe] " Danilo Krummrich
2023-09-12 14:49                     ` [Intel-xe] " Boris Brezillon
2023-09-12 15:13                       ` [Intel-xe] " Boris Brezillon
2023-09-12 16:58                         ` [Intel-xe] " Danilo Krummrich
2023-09-12 16:52                       ` [Intel-xe] " Danilo Krummrich
2023-08-11  2:31 ` [Intel-xe] [PATCH v2 2/9] drm/sched: Move schedule policy to scheduler / entity Matthew Brost
2023-08-11 21:43   ` [Intel-xe] " Maira Canal
2023-08-12  3:20     ` [Intel-xe] " Matthew Brost
2023-08-11  2:31 ` [Intel-xe] [PATCH v2 3/9] drm/sched: Add DRM_SCHED_POLICY_SINGLE_ENTITY scheduling policy Matthew Brost
2023-08-29 17:37   ` [Intel-xe] " Danilo Krummrich
2023-09-05 11:10     ` [Intel-xe] " Danilo Krummrich
2023-09-11 19:44       ` [Intel-xe] " Matthew Brost
2023-08-11  2:31 ` [Intel-xe] [PATCH v2 4/9] drm/sched: Split free_job into own work item Matthew Brost
2023-08-17 13:39   ` [Intel-xe] " Christian König
2023-08-17 17:54     ` [Intel-xe] " Matthew Brost
2023-08-18  5:27       ` [Intel-xe] " Christian König
2023-08-18 13:13         ` [Intel-xe] " Matthew Brost
2023-08-21 13:17           ` [Intel-xe] " Christian König
2023-08-23  3:27             ` [Intel-xe] " Matthew Brost
2023-08-23  7:10               ` [Intel-xe] " Christian König
2023-08-23 15:24                 ` [Intel-xe] " Matthew Brost
2023-08-23 15:41                   ` [Intel-xe] " Alex Deucher
2023-08-23 17:26                     ` [Intel-xe] " Rodrigo Vivi
2023-08-23 23:12                       ` Matthew Brost
2023-08-24 11:44                         ` Christian König
2023-08-24 14:30                           ` Matthew Brost
2023-08-24 23:04   ` Danilo Krummrich
2023-08-25  2:58     ` [Intel-xe] " Matthew Brost
2023-08-25  8:02       ` [Intel-xe] " Christian König
2023-08-25 13:36         ` [Intel-xe] " Matthew Brost
2023-08-25 13:45           ` [Intel-xe] " Christian König
2023-09-12 10:13             ` [Intel-xe] " Boris Brezillon
2023-09-12 10:46               ` [Intel-xe] " Danilo Krummrich
2023-09-12 12:18                 ` [Intel-xe] " Boris Brezillon
2023-09-12 12:56                   ` [Intel-xe] " Danilo Krummrich
2023-09-12 13:52                     ` [Intel-xe] " Boris Brezillon
2023-09-12 14:10                       ` [Intel-xe] " Danilo Krummrich
2023-09-12 13:27             ` [Intel-xe] " Boris Brezillon
2023-09-12 13:34               ` [Intel-xe] " Danilo Krummrich
2023-09-12 13:53                 ` [Intel-xe] " Boris Brezillon
2023-08-28 18:04   ` [Intel-xe] " Danilo Krummrich
2023-08-28 18:41     ` [Intel-xe] " Matthew Brost
2023-08-29  1:20       ` Danilo Krummrich [this message]
2023-08-11  2:31 ` [Intel-xe] [PATCH v2 5/9] drm/sched: Add generic scheduler message interface Matthew Brost
2023-08-11  2:31 ` [Intel-xe] [PATCH v2 6/9] drm/sched: Add drm_sched_start_timeout_unlocked helper Matthew Brost
2023-08-11  2:31 ` [Intel-xe] [PATCH v2 7/9] drm/sched: Start run wq before TDR in drm_sched_start Matthew Brost
2023-08-11  2:31 ` [Intel-xe] [PATCH v2 8/9] drm/sched: Submit job before starting TDR Matthew Brost
2023-08-11  2:31 ` [Intel-xe] [PATCH v2 9/9] drm/sched: Add helper to set TDR timeout Matthew Brost
2023-08-11  2:34 ` [Intel-xe] ✗ CI.Patch_applied: failure for DRM scheduler changes for Xe (rev2) Patchwork
2023-08-24  0:08 ` [Intel-xe] [PATCH v2 0/9] DRM scheduler changes for Xe Danilo Krummrich
2023-08-24  3:23   ` [Intel-xe] " Matthew Brost
2023-08-24 14:51     ` [Intel-xe] " Danilo Krummrich
2023-08-25  3:01 ` [Intel-xe] ✗ CI.Patch_applied: failure for DRM scheduler changes for Xe (rev3) Patchwork
2023-09-05 11:13 ` [Intel-xe] ✗ CI.Patch_applied: failure for DRM scheduler changes for Xe (rev4) Patchwork
