From: "Christian König" <christian.koenig@amd.com>
To: Matthew Brost <matthew.brost@intel.com>,
dri-devel@lists.freedesktop.org, intel-xe@lists.freedesktop.org
Cc: alexdeucher@gmail.com, dakr@kernel.org, pstanner@redhat.com,
Philipp Stanner <phasta@kernel.org>
Subject: Re: [RFC PATCH 1/4] drm/sched: Add pending job list iterator
Date: Tue, 7 Oct 2025 09:28:56 +0200 [thread overview]
Message-ID: <be806a5e-75c9-45da-962f-2c1db75b4a07@amd.com> (raw)
In-Reply-To: <20251002051604.1865322-2-matthew.brost@intel.com>
On 02.10.25 07:16, Matthew Brost wrote:
> Stop open coding pending job list walks in drivers. Add a pending job
> list iterator which safely walks the DRM scheduler's pending list,
> either locklessly while asserting that the DRM scheduler is stopped or
> while holding the pending job list lock.
Taking the job list lock and walking the pending list while the scheduler is not stopped is most likely a NO-GO.
As far as I understand it, that is exactly what Philipp rejected as looking into scheduler internals during the discussion at XDC.
So why is that actually needed? For debugging or something functional?
Regards,
Christian.
>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
> include/drm/gpu_scheduler.h | 64 +++++++++++++++++++++++++++++++++++++
> 1 file changed, 64 insertions(+)
>
> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> index fb88301b3c45..a2dcabab8284 100644
> --- a/include/drm/gpu_scheduler.h
> +++ b/include/drm/gpu_scheduler.h
> @@ -698,4 +698,68 @@ void drm_sched_entity_modify_sched(struct drm_sched_entity *entity,
> struct drm_gpu_scheduler **sched_list,
> unsigned int num_sched_list);
>
> +/* Inlines */
> +
> +/**
> + * struct drm_sched_iter_pending_job - DRM scheduler pending job iterator state
> + * @sched: DRM scheduler associated with pending job iterator
> + * @stopped: DRM scheduler stopped state associated with pending job iterator
> + */
> +struct drm_sched_iter_pending_job {
> + struct drm_gpu_scheduler *sched;
> + bool stopped;
> +};
> +
> +/* Drivers should never call this directly */
> +static inline struct drm_sched_iter_pending_job
> +__drm_sched_iter_pending_job_begin(struct drm_gpu_scheduler *sched, bool stopped)
> +{
> + struct drm_sched_iter_pending_job iter = {
> + .sched = sched,
> + .stopped = stopped,
> + };
> +
> + if (stopped)
> + WARN_ON(!READ_ONCE(sched->pause_submit));
> + else
> + spin_lock(&sched->job_list_lock);
> +
> + return iter;
> +}
> +
> +/* Drivers should never call this directly */
> +static inline void
> +__drm_sched_iter_pending_job_end(const struct drm_sched_iter_pending_job iter)
> +{
> + if (iter.stopped)
> + WARN_ON(!READ_ONCE(iter.sched->pause_submit));
> + else
> + spin_unlock(&iter.sched->job_list_lock);
> +}
> +
> +DEFINE_CLASS(drm_sched_iter_pending_job, struct drm_sched_iter_pending_job,
> + __drm_sched_iter_pending_job_end(_T);,
> + __drm_sched_iter_pending_job_begin(__sched, __stopped),
> + struct drm_gpu_scheduler * __sched, bool __stopped);
> +static inline void *
> +class_drm_sched_iter_pending_job_lock_ptr(class_drm_sched_iter_pending_job_t *_T)
> +{ return _T; }
> +#define class_drm_sched_iter_pending_job_is_conditional false
> +
> +/**
> + * drm_sched_for_each_pending_job() - Iterator for each pending job in scheduler
> + * @__job: Current pending job being iterated over
> + * @__sched: DRM scheduler to iterate over pending jobs
> + * @__entity: DRM scheduler entity to filter jobs, NULL indicates no filter
> + * @__stopped: DRM scheduler stopped state
> + *
> + * Iterator for each pending job in scheduler, filtering on an entity, and
> + * enforcing locking rules (either scheduler fully stopped or correctly takes
> + * job_list_lock).
> + */
> +#define drm_sched_for_each_pending_job(__job, __sched, __entity, __stopped) \
> + scoped_guard(drm_sched_iter_pending_job, __sched, __stopped) \
> + list_for_each_entry(__job, &(__sched)->pending_list, list) \
> + for_each_if(!__entity || (__job)->entity == (__entity))
> +
> #endif
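
For context, a minimal sketch of how a driver could consume the new iterator.
The helper name and the counting logic below are purely illustrative and not
part of the patch; it assumes the scheduler has already been stopped, so the
lockless path (which only asserts pause_submit) is taken:

static unsigned int count_pending_jobs(struct drm_gpu_scheduler *sched,
				       struct drm_sched_entity *entity)
{
	struct drm_sched_job *job;
	unsigned int count = 0;

	/*
	 * The scheduler is assumed to be stopped here, so passing
	 * __stopped == true makes the iterator skip job_list_lock and
	 * only WARN if sched->pause_submit is not set.
	 */
	drm_sched_for_each_pending_job(job, sched, entity, true)
		count++;

	return count;
}

Passing false instead makes the guard take sched->job_list_lock for the
duration of the walk, i.e. the not-stopped case discussed above.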