From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tvrtko Ursulin
To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: kernel-dev@igalia.com, intel-xe@lists.freedesktop.org,
    Danilo Krummrich, Philipp Stanner, Tvrtko Ursulin,
    Christian König, Matthew Brost
Subject: [PATCH v7 07/29] drm/sched: Free all finished jobs at once
Date: Fri, 6 Mar 2026 16:34:23 +0000
Message-ID: <20260306163445.97243-8-tvrtko.ursulin@igalia.com>
X-Mailer: git-send-email 2.52.0
In-Reply-To: <20260306163445.97243-1-tvrtko.ursulin@igalia.com>
References: <20260306163445.97243-1-tvrtko.ursulin@igalia.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
List-Id: Intel Xe graphics driver

To implement fair scheduling we will need an as accurate as possible view
of per-entity GPU time utilisation. Because scheduler fence execution
times are only adjusted for accuracy in the free worker, we need to
process completed jobs as soon as possible, so that the metric is as up
to date as possible when viewed from the submission side.
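The behavioural change can be illustrated outside the kernel. Below is a minimal user-space sketch of the pattern this patch introduces: instead of freeing one finished job per free-worker invocation and re-queueing the worker via a `have_more` hint, a single invocation drains every job whose fence has already signalled. The `struct job`, `struct sched`, `get_finished_job()` and `free_job_work()` names are hypothetical stand-ins for the kernel's `struct drm_sched_job`, `struct drm_gpu_scheduler`, `drm_sched_get_finished_job()` and `drm_sched_free_job_work()`, not the real types.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for struct drm_sched_job. */
struct job {
	int finished;	/* 1 once the job's hardware fence has signalled */
	int freed;	/* set by the free worker */
};

/* Hypothetical stand-in for struct drm_gpu_scheduler's pending list. */
struct sched {
	struct job *pending;	/* FIFO of submitted jobs, oldest first */
	size_t count;		/* total jobs on the pending list */
	size_t head;		/* index of the oldest still-pending job */
};

/*
 * Pop the oldest job if and only if it has finished, mirroring what
 * drm_sched_get_finished_job() does with the scheduler's pending_list.
 */
static struct job *get_finished_job(struct sched *s)
{
	if (s->head == s->count || !s->pending[s->head].finished)
		return NULL;
	return &s->pending[s->head++];
}

/*
 * One invocation of the free worker. After this patch it loops until
 * get_finished_job() returns NULL, so all already-completed jobs are
 * freed in one pass rather than one per worker run.
 */
static size_t free_job_work(struct sched *s)
{
	struct job *job;
	size_t freed = 0;

	while ((job = get_finished_job(s))) {
		job->freed = 1;	/* stands in for sched->ops->free_job(job) */
		freed++;
	}
	return freed;
}
```

Because the list is drained oldest-first, the loop stops at the first unfinished job: with jobs 0 and 1 finished but job 2 still running, one `free_job_work()` call frees exactly two jobs, even if a later job (here job 3) has also finished out of order.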
Signed-off-by: Tvrtko Ursulin
Cc: Christian König
Cc: Danilo Krummrich
Cc: Matthew Brost
Cc: Philipp Stanner
Reviewed-by: Matthew Brost
Acked-by: Danilo Krummrich
---
 drivers/gpu/drm/scheduler/sched_main.c | 13 ++-----------
 1 file changed, 2 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 097ea187d08e..046686a83699 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -910,7 +910,6 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
  * drm_sched_get_finished_job - fetch the next finished job to be destroyed
  *
  * @sched: scheduler instance
- * @have_more: are there more finished jobs on the list
  *
  * Informs the caller through @have_more whether there are more finished jobs
  * besides the returned one.
@@ -919,7 +918,7 @@ drm_sched_select_entity(struct drm_gpu_scheduler *sched)
  * ready for it to be destroyed.
  */
 static struct drm_sched_job *
-drm_sched_get_finished_job(struct drm_gpu_scheduler *sched, bool *have_more)
+drm_sched_get_finished_job(struct drm_gpu_scheduler *sched)
 {
 	struct drm_sched_job *job, *next;
@@ -934,7 +933,6 @@ drm_sched_get_finished_job(struct drm_gpu_scheduler *sched, bool *have_more)
 	/* cancel this job's TO timer */
 	cancel_delayed_work(&sched->work_tdr);
 
-	*have_more = false;
 	next = list_first_entry_or_null(&sched->pending_list,
 					typeof(*next), list);
 	if (next) {
@@ -944,8 +942,6 @@ drm_sched_get_finished_job(struct drm_gpu_scheduler *sched, bool *have_more)
 		next->s_fence->scheduled.timestamp =
 			dma_fence_timestamp(&job->s_fence->finished);
 
-		*have_more = dma_fence_is_signaled(&next->s_fence->finished);
-
 		/* start TO timer for next job */
 		drm_sched_start_timeout(sched);
 	}
@@ -1004,14 +1000,9 @@ static void drm_sched_free_job_work(struct work_struct *w)
 	struct drm_gpu_scheduler *sched =
 		container_of(w, struct drm_gpu_scheduler, work_free_job);
 	struct drm_sched_job *job;
-	bool have_more;
 
-	job = drm_sched_get_finished_job(sched, &have_more);
-	if (job) {
+	while ((job = drm_sched_get_finished_job(sched)))
 		sched->ops->free_job(job);
-		if (have_more)
-			drm_sched_run_free_queue(sched);
-	}
 
 	drm_sched_run_job_queue(sched);
 }
-- 
2.52.0