From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jakub Kicinski
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, pabeni@redhat.com,
	andrew+netdev@lunn.ch, horms@kernel.org, shuah@kernel.org,
	linux-kselftest@vger.kernel.org, Jakub Kicinski
Subject: [PATCH net 08/12] net: shaper: fix undersized reply skb allocation in GROUP command
Date: Tue, 5 May 2026 17:06:24 -0700
Message-ID: <20260506000628.1501691-9-kuba@kernel.org>
X-Mailer: git-send-email 2.54.0
In-Reply-To: <20260506000628.1501691-1-kuba@kernel.org>
References: <20260506000628.1501691-1-kuba@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

net_shaper_group_send_reply() writes both the NET_SHAPER_A_IFINDEX
attribute (via net_shaper_fill_binding()) and the nested
NET_SHAPER_A_HANDLE attribute (via net_shaper_fill_handle()), but the
reply skb at the call site in net_shaper_nl_group_doit() is allocated
using net_shaper_handle_size(), which only accounts for the nested
handle. The allocation is therefore short by nla_total_size(sizeof(u32))
(8 bytes) for the IFINDEX attribute.

In practice the slab allocator rounds the small allocation up, so the
bug is latent, but the size accounting is wrong and could bite if the
reply grew further.

Introduce net_shaper_group_reply_size(), which accounts for the full
reply payload, and use it both at the genlmsg_new() call site and in
the defensive WARN_ONCE() message.
Fixes: 5d5d4700e75d ("net-shapers: implement NL group operation")
Signed-off-by: Jakub Kicinski
---
 net/shaper/shaper.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/net/shaper/shaper.c b/net/shaper/shaper.c
index 2ba397fa3bfd..10d76f7148bf 100644
--- a/net/shaper/shaper.c
+++ b/net/shaper/shaper.c
@@ -90,6 +90,12 @@ static int net_shaper_handle_size(void)
 			      nla_total_size(sizeof(u32)));
 }
 
+static int net_shaper_group_reply_size(void)
+{
+	return nla_total_size(sizeof(u32)) +	/* NET_SHAPER_A_IFINDEX */
+	       net_shaper_handle_size();	/* NET_SHAPER_A_HANDLE */
+}
+
 static int net_shaper_fill_binding(struct sk_buff *msg,
 				   const struct net_shaper_binding *binding,
 				   u32 type)
@@ -1225,7 +1231,7 @@ static int net_shaper_group_send_reply(struct net_shaper_binding *binding,
 free_msg:
 	/* Should never happen as msg is pre-allocated with enough space. */
 	WARN_ONCE(true, "calculated message payload length (%d)",
-		  net_shaper_handle_size());
+		  net_shaper_group_reply_size());
 	nlmsg_free(msg);
 	return -EMSGSIZE;
 }
@@ -1273,7 +1279,7 @@ int net_shaper_nl_group_doit(struct sk_buff *skb, struct genl_info *info)
 	/* Prepare the msg reply in advance, to avoid device operation
 	 * rollback on allocation failure.
 	 */
-	msg = genlmsg_new(net_shaper_handle_size(), GFP_KERNEL);
+	msg = genlmsg_new(net_shaper_group_reply_size(), GFP_KERNEL);
 	if (!msg) {
 		ret = -ENOMEM;
 		goto free_leaves;
-- 
2.54.0