From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Eric Dumazet, Kees Cook, "David S. Miller", Sasha Levin,
	kuba@kernel.org, pabeni@redhat.com, netdev@vger.kernel.org
Subject: [PATCH AUTOSEL 5.15 36/36] scm: add user copy checks to put_cmsg()
Date: Sun, 26 Feb 2023 09:48:44 -0500
Message-Id: <20230226144845.827893-36-sashal@kernel.org>
X-Mailer: git-send-email 2.39.0
In-Reply-To: <20230226144845.827893-1-sashal@kernel.org>
References: <20230226144845.827893-1-sashal@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Mailing-List: netdev@vger.kernel.org

From: Eric Dumazet

[ Upstream commit 5f1eb1ff58ea122e24adf0bc940f268ed2227462 ]

This is a followup of commit 2558b8039d05 ("net: use a bounce buffer
for copying skb->mark")

x86 and powerpc define user_access_begin, meaning that they are not
able to perform user copy checks when using user_write_access_begin()
/ unsafe_copy_to_user() and friends [1]

Instead of waiting for bugs to trigger on other arches, add a
check_object_size() in put_cmsg() to make sure that new code tested
on x86 with CONFIG_HARDENED_USERCOPY=y will perform more security
checks.

[1] We can not generically call check_object_size() from
unsafe_copy_to_user() because UACCESS is enabled at this point.

Signed-off-by: Eric Dumazet
Cc: Kees Cook
Acked-by: Kees Cook
Signed-off-by: David S. Miller
Signed-off-by: Sasha Levin
---
 net/core/scm.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/net/core/scm.c b/net/core/scm.c
index 5c356f0dee30c..acb7d776fa6ec 100644
--- a/net/core/scm.c
+++ b/net/core/scm.c
@@ -229,6 +229,8 @@ int put_cmsg(struct msghdr * msg, int level, int type, int len, void *data)
 	if (msg->msg_control_is_user) {
 		struct cmsghdr __user *cm = msg->msg_control_user;
 
+		check_object_size(data, cmlen - sizeof(*cm), true);
+
 		if (!user_write_access_begin(cm, cmlen))
 			goto efault;
-- 
2.39.0