Date: Wed, 30 Mar 2022 09:39:25 -0700
From: Jakub Kicinski
To: Ziyang Xuan
Cc: Vadim Fedorenko
Subject: Re: [PATCH net] net/tls: fix slab-out-of-bounds bug in decrypt_internal
Message-ID: <20220330093925.2d8ee6ca@kernel.org>
In-Reply-To: <20220330085009.1011614-1-william.xuanziyang@huawei.com>
References: <20220330085009.1011614-1-william.xuanziyang@huawei.com>
X-Mailing-List: netdev@vger.kernel.org

On Wed, 30 Mar 2022 16:50:09 +0800 Ziyang Xuan wrote:
> The memory size of tls_ctx->rx.iv for AES128-CCM is set to 12 in
> tls_set_sw_offload(). The return value of crypto_aead_ivsize()
> for "ccm(aes)" is 16. So a memcpy() that reads 16 bytes from a
> 12-byte memory region triggers the following slab-out-of-bounds bug:
>
> ==================================================================
> BUG: KASAN: slab-out-of-bounds in decrypt_internal+0x385/0xc40 [tls]
> Read of size 16 at addr ffff888114e84e60 by task tls/10911
>
> Call Trace:
>  dump_stack_lvl+0x34/0x44
>  print_report.cold+0x5e/0x5db
>  ? decrypt_internal+0x385/0xc40 [tls]
>  kasan_report+0xab/0x120
>  ? decrypt_internal+0x385/0xc40 [tls]
>  kasan_check_range+0xf9/0x1e0
>  memcpy+0x20/0x60
>  decrypt_internal+0x385/0xc40 [tls]
>  ? tls_get_rec+0x2e0/0x2e0 [tls]
>  ? process_rx_list+0x1a5/0x420 [tls]
>  ? tls_setup_from_iter.constprop.0+0x2e0/0x2e0 [tls]
>  decrypt_skb_update+0x9d/0x400 [tls]
>  tls_sw_recvmsg+0x3c8/0xb50 [tls]
>
> Allocated by task 10911:
>  kasan_save_stack+0x1e/0x40
>  __kasan_kmalloc+0x81/0xa0
>  tls_set_sw_offload+0x2eb/0xa20 [tls]
>  tls_setsockopt+0x68c/0x700 [tls]
>  __sys_setsockopt+0xfe/0x1b0

Interesting, are you running on a non-x86 platform or with some crypto
accelerator? I wonder why we're not hitting it with KASAN and the
selftest we have.

> Reserve MAX_IV_SIZE memory space for the iv to be compatible with all
> ciphers, and copy the iv and salt the same way tls_do_encryption() does.
>
> Fixes: f295b3ae9f59 ("net/tls: Add support of AES128-CCM based ciphers")
> Signed-off-by: Ziyang Xuan <william.xuanziyang@huawei.com>
> ---
>  net/tls/tls_sw.c | 10 +++-------
>  1 file changed, 3 insertions(+), 7 deletions(-)
>
> diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
> index 0024a692f0f8..6b858f995b23 100644
> --- a/net/tls/tls_sw.c
> +++ b/net/tls/tls_sw.c
> @@ -1456,7 +1456,7 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
>  	aead_size = sizeof(*aead_req) + crypto_aead_reqsize(ctx->aead_recv);
>  	mem_size = aead_size + (nsg * sizeof(struct scatterlist));
>  	mem_size = mem_size + prot->aad_size;
> -	mem_size = mem_size + crypto_aead_ivsize(ctx->aead_recv);
> +	mem_size = mem_size + MAX_IV_SIZE;

This change is not strictly required for the patch, right? Can we drop
it, and perhaps send it as an optimization separately later?

>  	/* Allocate a single block of memory which contains
>  	 * aead_req || sgin[] || sgout[] || aad || iv.
> @@ -1493,12 +1493,8 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
>  		kfree(mem);
>  		return err;
>  	}
> -	if (prot->version == TLS_1_3_VERSION ||
> -	    prot->cipher_type == TLS_CIPHER_CHACHA20_POLY1305)
> -		memcpy(iv + iv_offset, tls_ctx->rx.iv,
> -		       crypto_aead_ivsize(ctx->aead_recv));
> -	else
> -		memcpy(iv + iv_offset, tls_ctx->rx.iv, prot->salt_size);
> +	memcpy(iv + iv_offset, tls_ctx->rx.iv,
> +	       prot->iv_size + prot->salt_size);

If the IV really is 16B then we're passing 4 bytes of uninitialized
data at the end of the buffer, right?

>  	xor_iv_with_seq(prot, iv + iv_offset, tls_ctx->rx.rec_seq);
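
A stand-alone sketch of the copy lengths being discussed, for reference.
The constants mirror the TLS AES-128-CCM definitions quoted above (4-byte
salt plus 8-byte per-record IV stored in rx.iv) and the 16-byte ivsize
that crypto_aead_ivsize() reports for "ccm(aes)"; it only illustrates the
sizes, it is not the kernel code path:

#include <stdio.h>
#include <string.h>

#define CCM_128_SALT_SIZE  4   /* TLS_CIPHER_AES_CCM_128_SALT_SIZE */
#define CCM_128_IV_SIZE    8   /* TLS_CIPHER_AES_CCM_128_IV_SIZE */
#define CCM_AEAD_IVSIZE   16   /* crypto_aead_ivsize() for "ccm(aes)" */

int main(void)
{
        /* What tls_set_sw_offload() stores in rx.iv: salt + IV = 12 bytes. */
        unsigned char rx_iv[CCM_128_SALT_SIZE + CCM_128_IV_SIZE];
        /* The AEAD IV buffer carved out of the allocation in decrypt_internal(). */
        unsigned char iv[CCM_AEAD_IVSIZE] = { 0 };

        memset(rx_iv, 0xaa, sizeof(rx_iv));

        /*
         * Old TLS 1.3 / CHACHA20 branch: copy crypto_aead_ivsize() bytes.
         * For CCM that means reading 16 bytes from the 12-byte rx_iv,
         * which is the out-of-bounds read KASAN flags in the kernel:
         *
         * memcpy(iv, rx_iv, CCM_AEAD_IVSIZE);
         */

        /* Patched behaviour: copy only what rx.iv actually holds. */
        memcpy(iv, rx_iv, CCM_128_IV_SIZE + CCM_128_SALT_SIZE);

        printf("copied %d of %d IV bytes, %d left as allocated\n",
               CCM_128_IV_SIZE + CCM_128_SALT_SIZE, CCM_AEAD_IVSIZE,
               CCM_AEAD_IVSIZE - (CCM_128_IV_SIZE + CCM_128_SALT_SIZE));
        return 0;
}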