From: Boris Brezillon <boris.brezillon@collabora.com>
To: Danilo Krummrich <dakr@redhat.com>
Cc: robdclark@chromium.org, sarah.walker@imgtec.com,
	ketil.johnsen@arm.com, lina@asahilina.net, Liviu.Dudau@arm.com,
	dri-devel@lists.freedesktop.org, intel-xe@lists.freedesktop.org,
	luben.tuikov@amd.com, donald.robson@imgtec.com,
	"Christian König" <christian.koenig@amd.com>,
	faith.ekstrand@collabora.com
Subject: Re: [Intel-xe] [PATCH v2 1/9] drm/sched: Convert drm scheduler to use a work queue rather than kthread
Date: Tue, 12 Sep 2023 17:13:22 +0200
Message-ID: <20230912171322.6c47a973@collabora.com>
In-Reply-To: <20230912164909.018d13c8@collabora.com>

On Tue, 12 Sep 2023 16:49:09 +0200
Boris Brezillon <boris.brezillon@collabora.com> wrote:

> On Tue, 12 Sep 2023 16:33:01 +0200
> Danilo Krummrich <dakr@redhat.com> wrote:
> 
> > On 9/12/23 16:28, Boris Brezillon wrote:  
> > > On Thu, 17 Aug 2023 13:13:31 +0200
> > > Danilo Krummrich <dakr@redhat.com> wrote:
> > >     
> > >> I think that's a misunderstanding. I'm not trying to say that it is
> > >> *always* beneficial to fill up the ring as much as possible. But I think
> > >> it is under certain circumstances, exactly those circumstances I
> > >> described for Nouveau.
> > >>
> > >> As mentioned, in Nouveau the size of a job is only really limited by the
> > >> ring size, which means that one job can (but does not necessarily) fill
> > >> up the whole ring. We both agree that this is inefficient, because it
> > >> potentially results in the HW running dry due to hw_submission_limit == 1.
> > >>
> > >> I recognize you said that one should define hw_submission_limit and
> > >> adjust the other parts of the equation accordingly; the options I see are:
> > >>
> > >> (1) Increase the ring size while keeping the maximum job size.
> > >> (2) Decrease the maximum job size while keeping the ring size.
> > >> (3) Let the scheduler track the actual job size rather than the maximum
> > >> job size.
> > >>
> > >> (1) results in potentially wasted ring memory, because we don't
> > >> always reach the maximum job size, but the scheduler assumes we do.
> > >>
> > >> (2) results in more IOCTLs from userspace for the same number of IBs,
> > >> and more jobs mean more memory allocations and more work being
> > >> submitted to the workqueue (with Matt's patches).
> > >>
> > >> (3) doesn't seem to have any of those drawbacks.
> > >>
> > >> What would be your take on that?
> > >>
> > >> Actually, if none of the other drivers is interested in a more precise
> > >> way of keeping track of the ring utilization, I'd be totally fine to do
> > >> it in a driver-specific way. However, unfortunately I don't see how this
> > >> would be possible.
> > > 
> > > I'm not entirely sure, but I think PowerVR is pretty close to your
> > > description: job size is dynamic, and the ring buffer size is
> > > picked by the driver at queue initialization time. What we did was to
> > > set hw_submission_limit to an arbitrarily high value of 64k (we could
> > > have used something like ringbuf_size/min_job_size instead), and then
> > > have the control flow implemented with ->prepare_job() [1] (CCCB is the
> > > PowerVR ring buffer). This allows us to maximize ring buffer utilization
> > > while still allowing dynamic-size jobs.    
> > 
> > I guess this would work, but I think it would be better to bake this in,
> > especially if more drivers do have this need. I already have an
> > implementation [1] for doing that in the scheduler. My plan was to push
> > that as soon as Matt sends out V3.
> > 
> > [1] https://gitlab.freedesktop.org/nouvelles/kernel/-/commit/269f05d6a2255384badff8b008b3c32d640d2d95  
> 
> PowerVR's ->can_fit_in_ringbuf() logic is a bit more involved in that
> native fence waits are passed to the FW, and those add to the job size.
> When we know our job is ready for execution (all non-native deps are
> signaled), we evict already signaled native-deps (or native fences) to
> shrink the job size further, but that's something we need to
> calculate late if we want the job size to be minimal. Of course, we can
> always over-estimate the job size, but if we go for a full-blown
> drm_sched integration, I wonder if it wouldn't be preferable to have a
> ->get_job_size() callback returning the number of units needed by the job,
> and have the core pick 1 when the hook is not implemented.
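
To make the idea concrete, here is a rough sketch of what such a hook
could look like. Nothing like this exists in drm_sched today; the
callback and the accounting helper below are made up purely for
illustration:

struct drm_sched_backend_ops {
	...
	/*
	 * Optional: number of ring/credit units this job consumes.
	 * When not implemented, the core falls back to 1 unit per job,
	 * i.e. the current "one slot per submission" behavior behind
	 * hw_submission_limit.
	 */
	u32 (*get_job_size)(struct drm_sched_job *sched_job);
};

/* And in the core, wherever pending submissions are accounted: */
static u32 drm_sched_job_units(struct drm_sched_job *job)
{
	const struct drm_sched_backend_ops *ops = job->sched->ops;

	return ops->get_job_size ? ops->get_job_size(job) : 1;
}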

FWIW, I think the last time I asked how to do that, I was pointed to
->prepare_job() by someone (I don't remember if it was Daniel or
Christian), hence the PowerVR implementation. If that's still the
preferred solution, there's some opportunity to have a generic layer to
automate ringbuf utilization tracking and some helpers to prepare
wait_for_ringbuf dma_fences that drivers could return from
->prepare_job() (those fences would then be signaled when the driver
calls drm_ringbuf_job_done() and the next job waiting for ringbuf space
now fits in the ringbuf).
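
Something along those lines, purely as a sketch (drm_ringbuf,
drm_ringbuf_reserve_space() and drm_ringbuf_job_done() are all
hypothetical names, none of this exists):

struct drm_ringbuf {
	spinlock_t lock;
	u32 size;			/* total units in the ring buffer */
	u32 used;			/* units reserved by in-flight jobs */
	struct list_head waiters;	/* jobs waiting for space, FIFO */
};

/*
 * Called from the driver's ->prepare_job(): reserves @units and returns
 * NULL if the job already fits, or a dma_fence that gets signaled once
 * enough space has been released.
 */
struct dma_fence *drm_ringbuf_reserve_space(struct drm_ringbuf *ring,
					    u32 units);

/*
 * Called by the driver when a job's ring space can be reused (from the
 * job-done path); signals the fences of waiters whose reservation now
 * fits.
 */
void drm_ringbuf_job_done(struct drm_ringbuf *ring, u32 units);

A driver's ->prepare_job() would then mostly boil down to doing its late
job-size calculation and returning drm_ringbuf_reserve_space(ring,
job_units).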
