From: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: kernel-dev@igalia.com, intel-xe@lists.freedesktop.org,
	Danilo Krummrich, Philipp Stanner, Tvrtko Ursulin,
	Christian König, Matthew Brost
Subject: [PATCH v7 02/29] drm/sched: Consolidate entity run queue management
Date: Fri, 6 Mar 2026 16:34:18 +0000
Message-ID: <20260306163445.97243-3-tvrtko.ursulin@igalia.com>
In-Reply-To: <20260306163445.97243-1-tvrtko.ursulin@igalia.com>
References: <20260306163445.97243-1-tvrtko.ursulin@igalia.com>

Move the code dealing with entities entering and exiting run queues to
helpers to logically separate it from jobs entering and exiting entities.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
Cc: Christian König
Cc: Danilo Krummrich
Cc: Matthew Brost
Cc: Philipp Stanner
Acked-by: Danilo Krummrich
---
 drivers/gpu/drm/scheduler/sched_entity.c   | 47 ++-----------
 drivers/gpu/drm/scheduler/sched_internal.h |  8 +--
 drivers/gpu/drm/scheduler/sched_main.c     | 78 ++++++++++++++++----
 3 files changed, 72 insertions(+), 61 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index bb7e5fc47f99..768f11510129 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -487,26 +487,7 @@ struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity)
 
 	spsc_queue_pop(&entity->job_queue);
 
-	/*
-	 * Update the entity's location in the min heap according to
-	 * the timestamp of the next job, if any.
-	 */
-	if (drm_sched_policy == DRM_SCHED_POLICY_FIFO) {
-		struct drm_sched_job *next;
-
-		next = drm_sched_entity_queue_peek(entity);
-		if (next) {
-			struct drm_sched_rq *rq;
-
-			spin_lock(&entity->lock);
-			rq = entity->rq;
-			spin_lock(&rq->lock);
-			drm_sched_rq_update_fifo_locked(entity, rq,
-							next->submit_ts);
-			spin_unlock(&rq->lock);
-			spin_unlock(&entity->lock);
-		}
-	}
+	drm_sched_rq_pop_entity(entity);
 
 	/* Jobs and entities might have different lifecycles. Since we're
 	 * removing the job from the entities queue, set the jobs entity pointer
@@ -597,30 +578,10 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
 	/* first job wakes up scheduler */
 	if (first) {
 		struct drm_gpu_scheduler *sched;
-		struct drm_sched_rq *rq;
 
-		/* Add the entity to the run queue */
-		spin_lock(&entity->lock);
-		if (entity->stopped) {
-			spin_unlock(&entity->lock);
-
-			DRM_ERROR("Trying to push to a killed entity\n");
-			return;
-		}
-
-		rq = entity->rq;
-		sched = rq->sched;
-
-		spin_lock(&rq->lock);
-		drm_sched_rq_add_entity(rq, entity);
-
-		if (drm_sched_policy == DRM_SCHED_POLICY_FIFO)
-			drm_sched_rq_update_fifo_locked(entity, rq, submit_ts);
-
-		spin_unlock(&rq->lock);
-		spin_unlock(&entity->lock);
-
-		drm_sched_wakeup(sched);
+		sched = drm_sched_rq_add_entity(entity, submit_ts);
+		if (sched)
+			drm_sched_wakeup(sched);
 	}
 }
 EXPORT_SYMBOL(drm_sched_entity_push_job);
diff --git a/drivers/gpu/drm/scheduler/sched_internal.h b/drivers/gpu/drm/scheduler/sched_internal.h
index 7ea5a6736f98..8269c5392a82 100644
--- a/drivers/gpu/drm/scheduler/sched_internal.h
+++ b/drivers/gpu/drm/scheduler/sched_internal.h
@@ -12,13 +12,11 @@ extern int drm_sched_policy;
 
 void drm_sched_wakeup(struct drm_gpu_scheduler *sched);
 
-void drm_sched_rq_add_entity(struct drm_sched_rq *rq,
-			     struct drm_sched_entity *entity);
+struct drm_gpu_scheduler *
+drm_sched_rq_add_entity(struct drm_sched_entity *entity, ktime_t ts);
 void drm_sched_rq_remove_entity(struct drm_sched_rq *rq,
 				struct drm_sched_entity *entity);
-
-void drm_sched_rq_update_fifo_locked(struct drm_sched_entity *entity,
-				     struct drm_sched_rq *rq, ktime_t ts);
+void drm_sched_rq_pop_entity(struct drm_sched_entity *entity);
 
 void drm_sched_entity_select_rq(struct drm_sched_entity *entity);
 struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity);
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 21dc82c75c9e..f4aab2915df8 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -151,9 +151,9 @@ static void drm_sched_rq_remove_fifo_locked(struct drm_sched_entity *entity,
 	}
 }
 
-void drm_sched_rq_update_fifo_locked(struct drm_sched_entity *entity,
-				     struct drm_sched_rq *rq,
-				     ktime_t ts)
+static void drm_sched_rq_update_fifo_locked(struct drm_sched_entity *entity,
+					    struct drm_sched_rq *rq,
+					    ktime_t ts)
 {
 	/*
 	 * Both locks need to be grabbed, one to protect from entity->rq change
@@ -191,23 +191,45 @@ static void drm_sched_rq_init(struct drm_gpu_scheduler *sched,
 
 /**
  * drm_sched_rq_add_entity - add an entity
- *
- * @rq: scheduler run queue
  * @entity: scheduler entity
+ * @ts: submission timestamp
  *
  * Adds a scheduler entity to the run queue.
+ *
+ * Return: DRM scheduler selected to handle this entity or NULL if entity has
+ * been stopped and cannot be submitted to.
  */
-void drm_sched_rq_add_entity(struct drm_sched_rq *rq,
-			     struct drm_sched_entity *entity)
+struct drm_gpu_scheduler *
+drm_sched_rq_add_entity(struct drm_sched_entity *entity, ktime_t ts)
 {
-	lockdep_assert_held(&entity->lock);
-	lockdep_assert_held(&rq->lock);
+	struct drm_gpu_scheduler *sched;
+	struct drm_sched_rq *rq;
 
-	if (!list_empty(&entity->list))
-		return;
+	/* Add the entity to the run queue */
+	spin_lock(&entity->lock);
+	if (entity->stopped) {
+		spin_unlock(&entity->lock);
 
-	atomic_inc(rq->sched->score);
-	list_add_tail(&entity->list, &rq->entities);
+		DRM_ERROR("Trying to push to a killed entity\n");
+		return NULL;
+	}
+
+	rq = entity->rq;
+	spin_lock(&rq->lock);
+	sched = rq->sched;
+
+	if (list_empty(&entity->list)) {
+		atomic_inc(sched->score);
+		list_add_tail(&entity->list, &rq->entities);
+	}
+
+	if (drm_sched_policy == DRM_SCHED_POLICY_FIFO)
+		drm_sched_rq_update_fifo_locked(entity, rq, ts);
+
+	spin_unlock(&rq->lock);
+	spin_unlock(&entity->lock);
+
+	return sched;
 }
 
 /**
@@ -240,6 +262,36 @@ void drm_sched_rq_remove_entity(struct drm_sched_rq *rq,
 	spin_unlock(&rq->lock);
 }
 
+/**
+ * drm_sched_rq_pop_entity - pops an entity
+ * @entity: scheduler entity
+ *
+ * To be called every time after a job is popped from the entity.
+ */
+void drm_sched_rq_pop_entity(struct drm_sched_entity *entity)
+{
+	/*
+	 * Update the entity's location in the min heap according to
+	 * the timestamp of the next job, if any.
+	 */
+	if (drm_sched_policy == DRM_SCHED_POLICY_FIFO) {
+		struct drm_sched_job *next;
+
+		next = drm_sched_entity_queue_peek(entity);
+		if (next) {
+			struct drm_sched_rq *rq;
+
+			spin_lock(&entity->lock);
+			rq = entity->rq;
+			spin_lock(&rq->lock);
+			drm_sched_rq_update_fifo_locked(entity, rq,
+							next->submit_ts);
+			spin_unlock(&rq->lock);
+			spin_unlock(&entity->lock);
+		}
+	}
+}
+
 /**
  * drm_sched_rq_select_entity_rr - Select an entity which could provide a job to run
  *
-- 
2.52.0