Date: Fri, 3 Feb 2023 10:44:53 +0800
From: Tianjia Zhang
To: Herbert Xu
Cc: "David S. Miller", Catalin Marinas, Will Deacon, linux-crypto@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [v4 PATCH] crypto: arm64/sm4-gcm - Fix possible crash in GCM cryption
References: <20230201123133.99768-1-tianjia.zhang@linux.alibaba.com>

Hi Herbert,

On 2/2/23 4:33 PM, Herbert Xu wrote:
> On Wed, Feb 01, 2023 at 08:31:33PM +0800, Tianjia Zhang wrote:
>>
>> +		sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, walk->dst.virt.addr,
>> +				       walk->src.virt.addr, iv, walk->nbytes, ghash,
>> +				       ctx->ghash_table, (const u8 *)&lengths);
>
> I still think this is error-prone.  When walk->nbytes == 0,
> walk->src and walk->dst are undefined.  Sure you could argue
> that the underlying assembly code won't touch the values, but
> accessing uninitialised memory even if just to throw them away
> is still a bit icky.

You're right: whether or not the values end up being used, reading an
undefined pointer is always ugly. I've learned a lot from this.
>
> Anyway, here's my attempt at rewriting the gcm loop:
>
> ---8<---
> An often overlooked aspect of the skcipher walker API is that an
> error is not just indicated by a non-zero return value, but by the
> fact that walk->nbytes is zero.
>
> Thus it is an error to call skcipher_walk_done after getting back
> walk->nbytes == 0 from the previous interaction with the walker.
>
> This is because when walk->nbytes is zero the walker is left in
> an undefined state and any further calls to it may try to free
> uninitialised stack memory.
>
> The sm4 arm64 gcm code gets this wrong and ends up calling
> skcipher_walk_done even when walk->nbytes is zero.
>
> This patch rewrites the loop in a form that resembles other callers.
>
> Reported-by: Tianjia Zhang
> Fixes: ae1b83c7d572 ("crypto: arm64/sm4 - add CE implementation for GCM mode")
> Signed-off-by: Herbert Xu

Thanks for the fix; this patch works fine for me, so:

Tested-by: Tianjia Zhang

>
> diff --git a/arch/arm64/crypto/sm4-ce-gcm-glue.c b/arch/arm64/crypto/sm4-ce-gcm-glue.c
> index c450a2025ca9..73bfb6972d3a 100644
> --- a/arch/arm64/crypto/sm4-ce-gcm-glue.c
> +++ b/arch/arm64/crypto/sm4-ce-gcm-glue.c
> @@ -135,22 +135,23 @@ static void gcm_calculate_auth_mac(struct aead_request *req, u8 ghash[])
>  }
>
>  static int gcm_crypt(struct aead_request *req, struct skcipher_walk *walk,
> -		     struct sm4_gcm_ctx *ctx, u8 ghash[],
> +		     u8 ghash[], int err,
>  		     void (*sm4_ce_pmull_gcm_crypt)(const u32 *rkey_enc,
>  				u8 *dst, const u8 *src, u8 *iv,
>  				unsigned int nbytes, u8 *ghash,
>  				const u8 *ghash_table, const u8 *lengths))
>  {
> +	struct crypto_aead *aead = crypto_aead_reqtfm(req);
> +	struct sm4_gcm_ctx *ctx = crypto_aead_ctx(aead);
>  	u8 __aligned(8) iv[SM4_BLOCK_SIZE];
>  	be128 __aligned(8) lengths;
> -	int err;
>
>  	memset(ghash, 0, SM4_BLOCK_SIZE);
>
>  	lengths.a = cpu_to_be64(req->assoclen * 8);
>  	lengths.b = cpu_to_be64(walk->total * 8);
>
> -	memcpy(iv, walk->iv, GCM_IV_SIZE);
> +	memcpy(iv, req->iv, GCM_IV_SIZE);
>  	put_unaligned_be32(2, iv + GCM_IV_SIZE);
>
>  	kernel_neon_begin();
> @@ -158,49 +159,51 @@ static int gcm_crypt(struct aead_request *req, struct skcipher_walk *walk,
>  	if (req->assoclen)
>  		gcm_calculate_auth_mac(req, ghash);
>
> -	do {
> +	while (walk->nbytes) {
>  		unsigned int tail = walk->nbytes % SM4_BLOCK_SIZE;
>  		const u8 *src = walk->src.virt.addr;
>  		u8 *dst = walk->dst.virt.addr;
>
>  		if (walk->nbytes == walk->total) {
> -			tail = 0;
> -
>  			sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, dst, src, iv,
>  					       walk->nbytes, ghash,
>  					       ctx->ghash_table,
>  					       (const u8 *)&lengths);
> -		} else if (walk->nbytes - tail) {
> -			sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, dst, src, iv,
> -					       walk->nbytes - tail, ghash,
> -					       ctx->ghash_table, NULL);
> +
> +			kernel_neon_end();
> +
> +			return skcipher_walk_done(walk, 0);
>  		}
>
> +		sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, dst, src, iv,
> +				       walk->nbytes - tail, ghash,
> +				       ctx->ghash_table, NULL);
> +
>  		kernel_neon_end();
>
>  		err = skcipher_walk_done(walk, tail);
> -		if (err)
> -			return err;
> -		if (walk->nbytes)
> -			kernel_neon_begin();
> -	} while (walk->nbytes > 0);
>
> -	return 0;
> +		kernel_neon_begin();
> +	}
> +
> +	sm4_ce_pmull_gcm_crypt(ctx->key.rkey_enc, NULL, NULL, iv,
> +			       walk->nbytes, ghash, ctx->ghash_table,
> +			       (const u8 *)&lengths);
> +
> +	kernel_neon_end();
> +
> +	return err;
>  }
>
>  static int gcm_encrypt(struct aead_request *req)
>  {
>  	struct crypto_aead *aead = crypto_aead_reqtfm(req);
> -	struct sm4_gcm_ctx *ctx = crypto_aead_ctx(aead);
>  	u8 __aligned(8) ghash[SM4_BLOCK_SIZE];
>  	struct skcipher_walk walk;
>  	int err;
>
>  	err = skcipher_walk_aead_encrypt(&walk, req, false);
> -	if (err)
> -		return err;
> -
> -	err = gcm_crypt(req, &walk, ctx, ghash, sm4_ce_pmull_gcm_enc);
> +	err = gcm_crypt(req, &walk, ghash, err, sm4_ce_pmull_gcm_enc);
>  	if (err)
>  		return err;
>
> @@ -215,17 +218,13 @@ static int gcm_decrypt(struct aead_request *req)
>  {
>  	struct crypto_aead *aead = crypto_aead_reqtfm(req);
>  	unsigned int authsize = crypto_aead_authsize(aead);
> -	struct sm4_gcm_ctx *ctx = crypto_aead_ctx(aead);
>  	u8 __aligned(8) ghash[SM4_BLOCK_SIZE];
>  	u8 authtag[SM4_BLOCK_SIZE];
>  	struct skcipher_walk walk;
>  	int err;
>
>  	err = skcipher_walk_aead_decrypt(&walk, req, false);
> -	if (err)
> -		return err;
> -
> -	err = gcm_crypt(req, &walk, ctx, ghash, sm4_ce_pmull_gcm_dec);
> +	err = gcm_crypt(req, &walk, ghash, err, sm4_ce_pmull_gcm_dec);
>  	if (err)
>  		return err;

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel