From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sasha Levin
To: stable@vger.kernel.org, olvaffe@gmail.com
Cc: Boris Brezillon, Liviu Dudau, Steven Price, dri-devel@lists.freedesktop.org
Subject: FAILED: Patch "drm/panthor: fix for dma-fence safe access rules" failed to apply to 6.12-stable tree
Date: Sat, 28 Feb 2026 20:17:59 -0500
Message-ID: <20260301011759.1672072-1-sashal@kernel.org>
X-Mailer: git-send-email 2.51.0
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org
MIME-Version: 1.0
X-Patchwork-Hint: ignore
X-stable: review
Content-Transfer-Encoding: 8bit

The patch below does not apply to the 6.12-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable@vger.kernel.org>.

Thanks,
Sasha

------------------ original commit in Linus's tree ------------------

>From efe24898485c5c831e629d9c6fb9350c35cb576f Mon Sep 17 00:00:00 2001
From: Chia-I Wu
Date: Thu, 4 Dec 2025 09:45:45 -0800
Subject: [PATCH] drm/panthor: fix for dma-fence safe access rules

Commit 506aa8b02a8d6 ("dma-fence: Add safe access helpers and document
the rules") details the dma-fence safe access rules.

The most common culprit is that drm_sched_fence_get_timeline_name may
race with group_free_queue.
Signed-off-by: Chia-I Wu
Reviewed-by: Boris Brezillon
Reviewed-by: Liviu Dudau
Reviewed-by: Steven Price
Cc: stable@vger.kernel.org # v6.17+
Signed-off-by: Steven Price
Link: https://patch.msgid.link/20251204174545.399059-1-olvaffe@gmail.com
---
 drivers/gpu/drm/panthor/panthor_sched.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
index a17b067a04392..0f83e778d89aa 100644
--- a/drivers/gpu/drm/panthor/panthor_sched.c
+++ b/drivers/gpu/drm/panthor/panthor_sched.c
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include

 #include "panthor_devfreq.h"
 #include "panthor_device.h"
@@ -943,6 +944,9 @@ static void group_release_work(struct work_struct *work)
 						   release_work);
 	u32 i;

+	/* dma-fences may still be accessing group->queues under rcu lock. */
+	synchronize_rcu();
+
 	for (i = 0; i < group->queue_count; i++)
 		group_free_queue(group, group->queues[i]);
--
2.51.0