Date: Thu, 27 Jan 2022 11:26:42 -0800
From: Eric Biggers
To: Ard Biesheuvel
Cc: Nathan Huckleberry, Linux Crypto Mailing List, Herbert Xu,
	"David S. Miller", Linux ARM, Paul Crowley, Sami Tolvanen
Subject: Re: [RFC PATCH 1/7] crypto: xctr - Add XCTR support
References: <20220125014422.80552-1-nhuck@google.com>
	<20220125014422.80552-2-nhuck@google.com>

On Thu, Jan 27, 2022 at 10:42:49AM +0100, Ard Biesheuvel wrote:
> > diff --git a/include/crypto/xctr.h b/include/crypto/xctr.h
> > new file mode 100644
> > index 000000000000..0d025e08ca26
> > --- /dev/null
> > +++ b/include/crypto/xctr.h
> > @@ -0,0 +1,19 @@
> > +/* SPDX-License-Identifier: GPL-2.0-or-later */
> > +/*
> > + * XCTR: XOR Counter mode
> > + *
> > + * Copyright 2021 Google LLC
> > + */
> > +
> > +#include
> > +
> > +#ifndef _CRYPTO_XCTR_H
> > +#define _CRYPTO_XCTR_H
> > +
> > +static inline void u32_to_le_block(u8 *a, u32 x, unsigned int size)
> > +{
> > +	memset(a, 0, size);
> > +	put_unaligned(cpu_to_le32(x), (u32 *)a);
>
> Please use put_unaligned_le32() here.
>
> And casting 'a' to (u32 *) is invalid C, so just pass 'a' directly.
> Otherwise, the compiler might infer that 'a' is guaranteed to be
> aligned after all, and use an aligned access instead.

I agree that put_unaligned_le32() is more suitable here, but I don't
think casting 'a' to 'u32 *' is undefined; it's only dereferencing it
that would be undefined.  If such casts were undefined, then
get_unaligned() and put_unaligned() would be unusable under any
circumstance.  Here's an example of code that would be incorrect in
that case:
https://lore.kernel.org/linux-crypto/20220119093109.1567314-1-ardb@kernel.org

- Eric

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel