From mboxrd@z Thu Jan  1 00:00:00 1970
From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Eric Dumazet, Krishna Kumar, Kuniyuki Iwashima, Jakub Kicinski, Sasha Levin
Subject: [PATCH 6.19 476/844] net: do not pass flow_id to set_rps_cpu()
Date: Sat, 28 Feb 2026 12:26:29 -0500
Message-ID: <20260228173244.1509663-477-sashal@kernel.org>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <20260228173244.1509663-1-sashal@kernel.org>
References: <20260228173244.1509663-1-sashal@kernel.org>
X-Mailing-List: stable@vger.kernel.org
MIME-Version: 1.0
X-stable: review
Content-Transfer-Encoding: 8bit

From: Eric Dumazet

[ Upstream commit 8a8a9fac9efa6423fd74938b940cb7d731780718 ]

The blamed commit assumed that the RPS table for each receive queue
has the same size, and that this size does not change.

Compute flow_id in set_rps_cpu(); do not assume we can reuse the value
computed by get_rps_cpu().

Otherwise we risk out-of-bounds accesses and/or crashes.
Fixes: 48aa30443e52 ("net: Cache hash and flow_id to avoid recalculation")
Signed-off-by: Eric Dumazet
Cc: Krishna Kumar
Reviewed-by: Kuniyuki Iwashima
Link: https://patch.msgid.link/20260220222605.3468081-1-edumazet@google.com
Signed-off-by: Jakub Kicinski
Signed-off-by: Sasha Levin
---
 net/core/dev.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index f5e4040e08399..60a26208cbd87 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4988,8 +4988,7 @@ static bool rps_flow_is_active(struct rps_dev_flow *rflow,
 
 static struct rps_dev_flow *
 set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
-	    struct rps_dev_flow *rflow, u16 next_cpu, u32 hash,
-	    u32 flow_id)
+	    struct rps_dev_flow *rflow, u16 next_cpu, u32 hash)
 {
 	if (next_cpu < nr_cpu_ids) {
 		u32 head;
@@ -5000,6 +4999,7 @@ set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
 		struct rps_dev_flow *tmp_rflow;
 		unsigned int tmp_cpu;
 		u16 rxq_index;
+		u32 flow_id;
 		int rc;

 		/* Should we steer this flow to a different hardware queue? */
@@ -5015,6 +5015,7 @@ set_rps_cpu(struct net_device *dev, struct sk_buff *skb,
 		if (!flow_table)
 			goto out;

+		flow_id = rfs_slot(hash, flow_table);
 		tmp_rflow = &flow_table->flows[flow_id];
 		tmp_cpu = READ_ONCE(tmp_rflow->cpu);

@@ -5062,7 +5063,6 @@ static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb,
 	struct rps_dev_flow_table *flow_table;
 	struct rps_map *map;
 	int cpu = -1;
-	u32 flow_id;
 	u32 tcpu;
 	u32 hash;

@@ -5109,8 +5109,7 @@ static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb,
 		/* OK, now we know there is a match,
 		 * we can look at the local (per receive queue) flow table
 		 */
-		flow_id = rfs_slot(hash, flow_table);
-		rflow = &flow_table->flows[flow_id];
+		rflow = &flow_table->flows[rfs_slot(hash, flow_table)];
 		tcpu = rflow->cpu;

 		/*
@@ -5129,8 +5128,7 @@ static int get_rps_cpu(struct net_device *dev, struct sk_buff *skb,
 		     ((int)(READ_ONCE(per_cpu(softnet_data, tcpu).input_queue_head) -
 		       rflow->last_qtail)) >= 0)) {
 			tcpu = next_cpu;
-			rflow = set_rps_cpu(dev, skb, rflow, next_cpu, hash,
-					    flow_id);
+			rflow = set_rps_cpu(dev, skb, rflow, next_cpu, hash);
 		}

 		if (tcpu < nr_cpu_ids && cpu_online(tcpu)) {
-- 
2.51.0