From: "Christian König" <christian.koenig@amd.com>
To: Danilo Krummrich <dakr@redhat.com>,
Matthew Brost <matthew.brost@intel.com>
Cc: robdclark@chromium.org, sarah.walker@imgtec.com,
ketil.johnsen@arm.com, lina@asahilina.net, Liviu.Dudau@arm.com,
dri-devel@lists.freedesktop.org, luben.tuikov@amd.com,
donald.robson@imgtec.com, boris.brezillon@collabora.com,
intel-xe@lists.freedesktop.org, faith.ekstrand@collabora.com
Subject: Re: [Intel-xe] [PATCH v2 1/9] drm/sched: Convert drm scheduler to use a work queue rather than kthread
Date: Thu, 17 Aug 2023 15:35:51 +0200 [thread overview]
Message-ID: <ef4d2c78-6927-3d3b-7aac-27d013af7ea6@amd.com> (raw)
In-Reply-To: <e8fa305a-0ac8-ece7-efeb-f9cec2892d44@redhat.com>
On 17.08.23 13:13, Danilo Krummrich wrote:
> On 8/17/23 07:33, Christian König wrote:
>> [SNIP]
>> The hardware seems to work mostly the same for all vendors, but you
>> somehow seem to think that filling the ring is somehow beneficial
>> which is really not the case as far as I can see.
>
> I think that's a misunderstanding. I'm not trying to say that it is
> *always* beneficial to fill up the ring as much as possible. But I
> think it is under certain circumstances, exactly those circumstances I
> described for Nouveau.
As far as I can see this is not correct for Nouveau either.
>
> As mentioned, in Nouveau the size of a job is only really limited by
> the ring size, which means that one job can (but does not necessarily)
> fill up the whole ring. We both agree that this is inefficient,
> because it can potentially result in the HW running dry due to
> hw_submission_limit == 1.
>
> I recognize you said that one should define hw_submission_limit and
> adjust the other parts of the equation accordingly. The options I see
> are:
>
> (1) Increase the ring size while keeping the maximum job size.
> (2) Decrease the maximum job size while keeping the ring size.
> (3) Let the scheduler track the actual job size rather than the
> maximum job size.
>
> (1) results in potentially wasted ring memory, because jobs don't
> always reach the maximum job size, but the scheduler assumes they do.
>
> (2) results in more IOCTLs from userspace for the same amount of IBs,
> and more jobs result in more memory allocations and more work being
> submitted to the workqueue (with Matt's patches).
>
> (3) doesn't seem to have any of those drawbacks.
>
> What would be your take on that?
>
> Actually, if none of the other drivers is interested in a more
> precise way of keeping track of the ring utilization, I'd be totally
> fine to do it in a driver-specific way. However, unfortunately I don't
> see how this would be possible.
>
> My proposal would be to just keep the hw_submission_limit (maybe
> rename it to submission_unit_limit) and add a submission_units field
> to struct drm_sched_job. By default a job's submission_units field
> would be 0 and the scheduler would behave exactly the same way as it
> does now.
>
> Accordingly, jobs with submission_units > 1 would contribute more than
> one unit to the submission_unit_limit.
>
> What do you think about that?
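For reference, the unit-based accounting proposed above could be sketched roughly as follows. All names (`sketch_sched`, `submission_unit_limit`, `units_in_flight`, etc.) are hypothetical and do not exist in the actual DRM scheduler; this is only a sketch of the idea, not an implementation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch of unit-based flow control; none of these names
 * exist in the real drm_sched code. */
struct sketch_sched {
	uint32_t submission_unit_limit; /* renamed hw_submission_limit */
	uint32_t units_in_flight;       /* units currently on the ring */
};

struct sketch_job {
	/* 0 keeps today's behavior: each job counts as one unit */
	uint32_t submission_units;
};

static uint32_t job_units(const struct sketch_job *job)
{
	return job->submission_units ? job->submission_units : 1;
}

/* The scheduler would only push a job when enough units are free. */
static bool sketch_can_push(const struct sketch_sched *s,
			    const struct sketch_job *job)
{
	return s->units_in_flight + job_units(job) <= s->submission_unit_limit;
}

static void sketch_push(struct sketch_sched *s, const struct sketch_job *job)
{
	s->units_in_flight += job_units(job);
}

static void sketch_job_done(struct sketch_sched *s,
			    const struct sketch_job *job)
{
	s->units_in_flight -= job_units(job);
}
```

A large job would consume several units at once, while a stream of small jobs keeps flowing until the units are actually exhausted.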
I think you are approaching this from the completely wrong side.
See, the UAPI needs to be stable, so you need a maximum job size;
otherwise it can happen that one combination of large and small
submissions works while a different combination doesn't.
So what you usually do, and this is driver-independent because it is
simply a requirement of the UAPI, is to say: here is my maximum job size,
as well as the number of submissions which should be pushed to the hw at
the same time. The resulting ring size is then the product of the two.
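As a purely illustrative example of that product rule (the numbers are made up and not taken from any driver):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative only: dimension the ring so that hw_submission_limit
 * worst-case (maximum size) jobs always fit, whatever the actual mix
 * of job sizes turns out to be. */
static uint32_t ring_size_bytes(uint32_t max_job_size_bytes,
				uint32_t hw_submission_limit)
{
	return max_job_size_bytes * hw_submission_limit;
}
```

E.g. a 4 KiB maximum job size with 16 in-flight submissions yields a 64 KiB ring; smaller jobs then simply leave part of the ring unused, which is the intentional trade-off described above.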
That the ring in this use case can't be fully utilized is not a
drawback; this is completely intentional design which should apply to
all drivers independent of the vendor.
>
> Besides all that, you said that filling up the ring just enough to not
> let the HW run dry, rather than filling it up entirely, is desirable.
> Why do you think so? I tend to think that in most cases it shouldn't
> make a difference.
That results in better scheduling behavior. It's mostly beneficial if
you don't have a hw scheduler, but as far as I can see there is no need
to pump everything to the hw as fast as possible.
Regards,
Christian.
>
> - Danilo
>
>>
>> Regards,
>> Christian.
>>
>>> Because one really is the minimum if you want to do work at all, but
>>> as you mentioned above a job limit of one can let the ring run dry.
>>>
>>> In the end my proposal comes down to tracking the actual size of a
>>> job rather than just assuming a pre-defined maximum job size, and
>>> hence a dynamic job limit.
>>>
>>> I don't think this would hurt the scheduler granularity. In fact, it
>>> should contribute even better to the goal of not letting the ring run
>>> dry. Especially for sequences of small jobs, where the current
>>> implementation might wrongly assume the ring is already full although
>>> there would actually still be enough space left.
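The small-jobs scenario described above can be made concrete with a hypothetical comparison (numbers invented purely for illustration, not from any driver):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical comparison, not actual drm_sched code: a 64 KiB ring
 * with a 32 KiB maximum job size gives hw_submission_limit == 2, so
 * with fixed-maximum accounting two tiny jobs already stall submission
 * even though almost the whole ring is still free. */
static bool fixed_limit_full(uint32_t jobs_in_flight,
			     uint32_t hw_submission_limit)
{
	return jobs_in_flight >= hw_submission_limit;
}

/* Tracking the actual bytes in flight instead keeps accepting jobs
 * for as long as they really fit on the ring. */
static bool tracked_full(uint32_t bytes_in_flight, uint32_t next_job_bytes,
			 uint32_t ring_bytes)
{
	return bytes_in_flight + next_job_bytes > ring_bytes;
}
```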
>>>
>>>>
>>>> Christian.
>>>>
>>>>>
>>>>>>
>>>>>> Otherwise your scheduler might just overwrite the ring buffer by
>>>>>> pushing things too fast.
>>>>>>
>>>>>> Christian.
>>>>>>
>>>>>>>
>>>>>>> Given that, it seems like it would be better to let the
>>>>>>> scheduler keep track of empty ring "slots" instead, such that
>>>>>>> the scheduler can decide whether a subsequent job will still
>>>>>>> fit on the ring and, if not, re-evaluate once a previous job
>>>>>>> finished. Of course each submitted job would be required to
>>>>>>> carry the number of slots it requires on the ring.
>>>>>>>
>>>>>>> What do you think of implementing this as an alternative flow
>>>>>>> control mechanism? Implementation-wise this could be a union
>>>>>>> with the existing hw_submission_limit.
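The union idea mentioned above might look something like this sketch. The struct and field names here are hypothetical; the real struct drm_gpu_scheduler does not contain such a union:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: reuse the same storage for either flow-control
 * scheme, depending on how the driver initializes the scheduler. These
 * fields do not exist like this in the real struct drm_gpu_scheduler. */
struct sketch_flow_control {
	union {
		uint32_t hw_submission_limit; /* job-count based (current) */
		uint32_t ring_slot_limit;     /* ring-slot based (proposed) */
	};
};
```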
>>>>>>>
>>>>>>> - Danilo
>>>>>>>
>>>>>>>>
>>>>>>>> A problem with this design is that currently a drm_gpu_scheduler
>>>>>>>> uses a kthread for submission / job cleanup. This doesn't scale
>>>>>>>> if a large number of drm_gpu_schedulers are used. To work around
>>>>>>>> the scaling issue, use a worker rather than a kthread for
>>>>>>>> submission / job cleanup.
>>>>>>>>
>>>>>>>> v2:
>>>>>>>> - (Rob Clark) Fix msm build
>>>>>>>> - Pass in run work queue
>>>>>>>> v3:
>>>>>>>> - (Boris) don't have loop in worker
>>>>>>>> v4:
>>>>>>>> - (Tvrtko) break out submit ready, stop, start helpers into
>>>>>>>> own patch
>>>>>>>>
>>>>>>>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>