From: Ketil Johnsen
To: David Airlie, Simona Vetter, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Jonathan Corbet, Shuah Khan, Sumit Semwal,
	Benjamin Gaignard, Brian Starkey, John Stultz, "T.J. Mercier",
	Christian König, Boris Brezillon, Steven Price, Liviu Dudau,
	Daniel Almeida, Alice Ryhl, Matthias Brugger,
	AngeloGioacchino Del Regno
Cc: dri-devel@lists.freedesktop.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org, linux-arm-kernel@lists.infradead.org,
	linux-mediatek@lists.infradead.org, Florent Tomasin, Ketil Johnsen
Subject: [PATCH 5/8] drm/panthor: Minor scheduler refactoring
Date: Tue, 5 May 2026 16:05:11 +0200
Message-ID: <20260505140516.1372388-6-ketil.johnsen@arm.com>
In-Reply-To: <20260505140516.1372388-1-ketil.johnsen@arm.com>
References: <20260505140516.1372388-1-ketil.johnsen@arm.com>

From: Florent Tomasin

Refactor parts of the group scheduling logic into new helper functions.
This will simplify the addition of the protected mode feature.

Remove redundant assignments of csg_slot.
Signed-off-by: Florent Tomasin
Co-developed-by: Ketil Johnsen
Signed-off-by: Ketil Johnsen
---
 drivers/gpu/drm/panthor/panthor_sched.c | 135 +++++++++++++++---------
 1 file changed, 86 insertions(+), 49 deletions(-)

diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
index 5ee386338005c..987072bd867c4 100644
--- a/drivers/gpu/drm/panthor/panthor_sched.c
+++ b/drivers/gpu/drm/panthor/panthor_sched.c
@@ -1934,6 +1934,12 @@ static void csgs_upd_ctx_init(struct panthor_csg_slots_upd_ctx *ctx)
 	memset(ctx, 0, sizeof(*ctx));
 }
 
+static void csgs_upd_ctx_ring_doorbell(struct panthor_csg_slots_upd_ctx *ctx,
+				       u32 csg_id)
+{
+	ctx->update_mask |= BIT(csg_id);
+}
+
 static void csgs_upd_ctx_queue_reqs(struct panthor_device *ptdev,
 				    struct panthor_csg_slots_upd_ctx *ctx,
 				    u32 csg_id, u32 value, u32 mask)
@@ -1944,7 +1950,8 @@ static void csgs_upd_ctx_queue_reqs(struct panthor_device *ptdev,
 	ctx->requests[csg_id].value = (ctx->requests[csg_id].value & ~mask) |
 				      (value & mask);
 	ctx->requests[csg_id].mask |= mask;
-	ctx->update_mask |= BIT(csg_id);
+
+	csgs_upd_ctx_ring_doorbell(ctx, csg_id);
 }
 
 static int csgs_upd_ctx_apply_locked(struct panthor_device *ptdev,
@@ -1961,8 +1968,12 @@ static int csgs_upd_ctx_apply_locked(struct panthor_device *ptdev,
 	while (update_slots) {
 		struct panthor_fw_csg_iface *csg_iface;
 		u32 csg_id = ffs(update_slots) - 1;
+		u32 req_mask = ctx->requests[csg_id].mask;
 
 		update_slots &= ~BIT(csg_id);
+		if (!req_mask)
+			continue;
+
 		csg_iface = panthor_fw_get_csg_iface(ptdev, csg_id);
 		panthor_fw_update_reqs(csg_iface, req,
 				       ctx->requests[csg_id].value,
@@ -1979,6 +1990,9 @@ static int csgs_upd_ctx_apply_locked(struct panthor_device *ptdev,
 		int ret;
 
 		update_slots &= ~BIT(csg_id);
+		if (!req_mask)
+			continue;
+
 		csg_iface = panthor_fw_get_csg_iface(ptdev, csg_id);
 		ret = panthor_fw_csg_wait_acks(ptdev, csg_id,
 					       req_mask, &acked, 100);
@@ -2266,12 +2280,76 @@ tick_ctx_cleanup(struct panthor_scheduler *sched,
 	}
 }
 
+static void
+tick_ctx_evict_group(struct panthor_scheduler *sched,
+		     struct panthor_csg_slots_upd_ctx *upd_ctx,
+		     struct panthor_group *group)
+{
+	struct panthor_device *ptdev = sched->ptdev;
+
+	if (drm_WARN_ON(&ptdev->base, group->csg_id < 0))
+		return;
+
+	csgs_upd_ctx_queue_reqs(ptdev, upd_ctx, group->csg_id,
+				group_can_run(group) ?
+				CSG_STATE_SUSPEND : CSG_STATE_TERMINATE,
+				CSG_STATE_MASK);
+}
+
+
+static void
+tick_ctx_reschedule_group(struct panthor_scheduler *sched,
+			  struct panthor_csg_slots_upd_ctx *upd_ctx,
+			  struct panthor_group *group,
+			  int new_csg_prio)
+{
+	struct panthor_device *ptdev = sched->ptdev;
+	struct panthor_fw_csg_iface *csg_iface;
+	struct panthor_csg_slot *csg_slot;
+
+	if (group->csg_id < 0)
+		return;
+
+	csg_iface = panthor_fw_get_csg_iface(ptdev, group->csg_id);
+	csg_slot = &sched->csg_slots[group->csg_id];
+
+	if (csg_slot->priority != new_csg_prio) {
+		panthor_fw_update_reqs(csg_iface, endpoint_req,
+				       CSG_EP_REQ_PRIORITY(new_csg_prio),
+				       CSG_EP_REQ_PRIORITY_MASK);
+		csgs_upd_ctx_queue_reqs(ptdev, upd_ctx, group->csg_id,
+					csg_iface->output->ack ^ CSG_ENDPOINT_CONFIG,
+					CSG_ENDPOINT_CONFIG);
+	}
+}
+
+static void
+tick_ctx_schedule_group(struct panthor_scheduler *sched,
+			struct panthor_sched_tick_ctx *ctx,
+			struct panthor_csg_slots_upd_ctx *upd_ctx,
+			struct panthor_group *group,
+			int csg_id, int csg_prio)
+{
+	struct panthor_device *ptdev = sched->ptdev;
+	struct panthor_fw_csg_iface *csg_iface = panthor_fw_get_csg_iface(ptdev, csg_id);
+
+	group_bind_locked(group, csg_id);
+	csg_slot_prog_locked(ptdev, csg_id, csg_prio);
+
+	csgs_upd_ctx_queue_reqs(ptdev, upd_ctx, csg_id,
+				group->state == PANTHOR_CS_GROUP_SUSPENDED ?
+				CSG_STATE_RESUME : CSG_STATE_START,
+				CSG_STATE_MASK);
+	csgs_upd_ctx_queue_reqs(ptdev, upd_ctx, csg_id,
+				csg_iface->output->ack ^ CSG_ENDPOINT_CONFIG,
+				CSG_ENDPOINT_CONFIG);
+}
+
 static void
 tick_ctx_apply(struct panthor_scheduler *sched, struct panthor_sched_tick_ctx *ctx)
 {
 	struct panthor_group *group, *tmp;
 	struct panthor_device *ptdev = sched->ptdev;
-	struct panthor_csg_slot *csg_slot;
 	int prio, new_csg_prio = MAX_CSG_PRIO, i;
 	u32 free_csg_slots = 0;
 	struct panthor_csg_slots_upd_ctx upd_ctx;
@@ -2282,42 +2360,12 @@ tick_ctx_apply(struct panthor_scheduler *sched, struct panthor_sched_tick_ctx *c
 	for (prio = PANTHOR_CSG_PRIORITY_COUNT - 1; prio >= 0; prio--) {
 		/* Suspend or terminate evicted groups. */
 		list_for_each_entry(group, &ctx->old_groups[prio], run_node) {
-			bool term = !group_can_run(group);
-			int csg_id = group->csg_id;
-
-			if (drm_WARN_ON(&ptdev->base, csg_id < 0))
-				continue;
-
-			csg_slot = &sched->csg_slots[csg_id];
-			csgs_upd_ctx_queue_reqs(ptdev, &upd_ctx, csg_id,
-						term ? CSG_STATE_TERMINATE : CSG_STATE_SUSPEND,
-						CSG_STATE_MASK);
+			tick_ctx_evict_group(sched, &upd_ctx, group);
 		}
 
 		/* Update priorities on already running groups. */
 		list_for_each_entry(group, &ctx->groups[prio], run_node) {
-			struct panthor_fw_csg_iface *csg_iface;
-			int csg_id = group->csg_id;
-
-			if (csg_id < 0) {
-				new_csg_prio--;
-				continue;
-			}
-
-			csg_slot = &sched->csg_slots[csg_id];
-			csg_iface = panthor_fw_get_csg_iface(ptdev, csg_id);
-			if (csg_slot->priority == new_csg_prio) {
-				new_csg_prio--;
-				continue;
-			}
-
-			panthor_fw_csg_endpoint_req_update(ptdev, csg_iface,
-							   CSG_EP_REQ_PRIORITY(new_csg_prio),
-							   CSG_EP_REQ_PRIORITY_MASK);
-			csgs_upd_ctx_queue_reqs(ptdev, &upd_ctx, csg_id,
-						csg_iface->output->ack ^ CSG_ENDPOINT_CONFIG,
-						CSG_ENDPOINT_CONFIG);
-			new_csg_prio--;
+			tick_ctx_reschedule_group(sched, &upd_ctx, group, new_csg_prio--);
 		}
 	}
 
@@ -2354,28 +2402,17 @@ tick_ctx_apply(struct panthor_scheduler *sched, struct panthor_sched_tick_ctx *c
 	for (prio = PANTHOR_CSG_PRIORITY_COUNT - 1; prio >= 0; prio--) {
 		list_for_each_entry(group, &ctx->groups[prio], run_node) {
 			int csg_id = group->csg_id;
-			struct panthor_fw_csg_iface *csg_iface;
+			int csg_prio = new_csg_prio--;
 
-			if (csg_id >= 0) {
-				new_csg_prio--;
+			if (csg_id >= 0)
 				continue;
-			}
 
 			csg_id = ffs(free_csg_slots) - 1;
 			if (drm_WARN_ON(&ptdev->base, csg_id < 0))
 				break;
 
-			csg_iface = panthor_fw_get_csg_iface(ptdev, csg_id);
-			csg_slot = &sched->csg_slots[csg_id];
-			group_bind_locked(group, csg_id);
-			csg_slot_prog_locked(ptdev, csg_id, new_csg_prio--);
-			csgs_upd_ctx_queue_reqs(ptdev, &upd_ctx, csg_id,
-						group->state == PANTHOR_CS_GROUP_SUSPENDED ?
-						CSG_STATE_RESUME : CSG_STATE_START,
-						CSG_STATE_MASK);
-			csgs_upd_ctx_queue_reqs(ptdev, &upd_ctx, csg_id,
-						csg_iface->output->ack ^ CSG_ENDPOINT_CONFIG,
-						CSG_ENDPOINT_CONFIG);
+			tick_ctx_schedule_group(sched, ctx, &upd_ctx, group, csg_id, csg_prio);
+
 			free_csg_slots &= ~BIT(csg_id);
 		}
 	}
-- 
2.43.0
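[Editor's note, not part of the patch] The doorbell bookkeeping this patch factors into csgs_upd_ctx_ring_doorbell(), and the new "skip slots with an empty request mask" guard in csgs_upd_ctx_apply_locked(), are a plain bitmask pattern. The sketch below shows that pattern in isolation; struct upd_ctx, ring_doorbell() and apply_pending() are simplified, hypothetical stand-ins, not the driver's real types.

```c
#include <assert.h>
#include <stdint.h>
#include <strings.h> /* ffs() */

#define MAX_CSGS 32u

/* Simplified stand-in for struct panthor_csg_slots_upd_ctx. */
struct upd_ctx {
	uint32_t update_mask;        /* one bit per CSG slot to ring */
	uint32_t req_mask[MAX_CSGS]; /* pending request bits per slot */
};

/* Mirrors csgs_upd_ctx_ring_doorbell(): mark a slot for a doorbell ring. */
static void ring_doorbell(struct upd_ctx *ctx, uint32_t csg_id)
{
	ctx->update_mask |= 1u << csg_id;
}

/*
 * Walks the mask the way csgs_upd_ctx_apply_locked() does: take the
 * lowest set bit with ffs(), clear it, and skip slots whose request
 * mask is empty (the patch's new "if (!req_mask) continue;" guard).
 * Returns how many slots actually had requests to apply; the real
 * code writes firmware requests at that point instead of counting.
 */
static int apply_pending(struct upd_ctx *ctx)
{
	uint32_t update_slots = ctx->update_mask;
	int applied = 0;

	while (update_slots) {
		uint32_t csg_id = ffs(update_slots) - 1;

		update_slots &= ~(1u << csg_id);
		if (!ctx->req_mask[csg_id])
			continue;

		applied++;
	}
	return applied;
}
```

With this split, a caller can ring a doorbell for a slot without queuing request bits, and the apply loop simply skips such slots, which is what lets the patch separate "ring the doorbell" from "queue a request".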