From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc:
	Greg Kroah-Hartman, patches@lists.linux.dev, Chris Mason, Tejun Heo, Andrea Righi
Subject: [PATCH 7.0 053/307] sched_ext: Read scx_root under scx_cgroup_ops_rwsem in cgroup setters
Date: Tue, 12 May 2026 19:37:28 +0200
Message-ID: <20260512173941.242946370@linuxfoundation.org>
X-Mailer: git-send-email 2.54.0
In-Reply-To: <20260512173940.117428952@linuxfoundation.org>
References: <20260512173940.117428952@linuxfoundation.org>
User-Agent: quilt/0.69
X-stable: review
X-Patchwork-Hint: ignore
Precedence: bulk
X-Mailing-List: patches@lists.linux.dev
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

7.0-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Tejun Heo

commit 80afd4c84bc8f5e80145ce35279f5ce53f6043db upstream.

scx_group_set_{weight,idle,bandwidth}() cache scx_root before acquiring
scx_cgroup_ops_rwsem, so the pointer can be stale by the time the op runs.
If the loaded scheduler is disabled and freed (via RCU work) and another is
enabled between the naked load and the rwsem acquire, the reader sees
scx_cgroup_enabled=true (the new scheduler's) but dereferences the freed
one - a use-after-free on SCX_HAS_OP(sch, ...) / SCX_CALL_OP(sch, ...).

scx_cgroup_enabled is toggled only under scx_cgroup_ops_rwsem write
(scx_cgroup_{init,exit}), so reading scx_root inside the rwsem read
section correlates @sch with the enabled snapshot.
Fixes: a5bd6ba30b33 ("sched_ext: Use cgroup_lock/unlock() to synchronize against cgroup operations")
Cc: stable@vger.kernel.org # v6.18+
Reported-by: Chris Mason
Signed-off-by: Tejun Heo
Reviewed-by: Andrea Righi
Signed-off-by: Greg Kroah-Hartman
---
 kernel/sched/ext.c |    9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -3430,9 +3430,10 @@ void scx_cgroup_cancel_attach(struct cgr
 void scx_group_set_weight(struct task_group *tg, unsigned long weight)
 {
-	struct scx_sched *sch = scx_root;
+	struct scx_sched *sch;
 
 	percpu_down_read(&scx_cgroup_ops_rwsem);
+	sch = scx_root;
 	if (scx_cgroup_enabled && SCX_HAS_OP(sch, cgroup_set_weight) &&
 	    tg->scx.weight != weight)
@@ -3446,9 +3447,10 @@ void scx_group_set_weight(struct task_gr
 void scx_group_set_idle(struct task_group *tg, bool idle)
 {
-	struct scx_sched *sch = scx_root;
+	struct scx_sched *sch;
 
 	percpu_down_read(&scx_cgroup_ops_rwsem);
+	sch = scx_root;
 	if (scx_cgroup_enabled && SCX_HAS_OP(sch, cgroup_set_idle))
 		SCX_CALL_OP(sch, SCX_KF_UNLOCKED, cgroup_set_idle, NULL,
@@ -3463,9 +3465,10 @@ void scx_group_set_idle(struct task_grou
 void scx_group_set_bandwidth(struct task_group *tg,
			     u64 period_us, u64 quota_us, u64 burst_us)
 {
-	struct scx_sched *sch = scx_root;
+	struct scx_sched *sch;
 
 	percpu_down_read(&scx_cgroup_ops_rwsem);
+	sch = scx_root;
 	if (scx_cgroup_enabled && SCX_HAS_OP(sch, cgroup_set_bandwidth) &&
 	    (tg->scx.bw_period_us != period_us ||