From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Ard Biesheuvel, "Jason A. Donenfeld",
Donenfeld" , Herbert Xu , linux-arm-kernel@lists.infradead.org, x86@kernel.org, Eric Biggers Subject: [PATCH 07/12] crypto: adiantum - Use scatter_walk API instead of sg_miter Date: Wed, 10 Dec 2025 17:18:39 -0800 Message-ID: <20251211011846.8179-8-ebiggers@kernel.org> X-Mailer: git-send-email 2.52.0 In-Reply-To: <20251211011846.8179-1-ebiggers@kernel.org> References: <20251211011846.8179-1-ebiggers@kernel.org> MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20251210_172055_723730_0E60CE1E X-CRM114-Status: GOOD ( 16.20 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Make adiantum_hash_message() use the scatter_walk API instead of sg_miter. scatter_walk is a bit simpler and also more efficient. For example, unlike sg_miter, scatter_walk doesn't require that the number of scatterlist entries be calculated up-front. Signed-off-by: Eric Biggers --- crypto/adiantum.c | 33 +++++++++++++++------------------ 1 file changed, 15 insertions(+), 18 deletions(-) diff --git a/crypto/adiantum.c b/crypto/adiantum.c index bbe519fbd739..519e95228ad8 100644 --- a/crypto/adiantum.c +++ b/crypto/adiantum.c @@ -367,30 +367,27 @@ static void nhpoly1305_final(struct nhpoly1305_ctx *ctx, * evaluated as a polynomial in GF(2^{130}-5), like in the Poly1305 MAC. Note * that the polynomial evaluation by itself would suffice to achieve the ε-∆U * property; NH is used for performance since it's much faster than Poly1305. 

 crypto/adiantum.c | 33 +++++++++++++++------------------
 1 file changed, 15 insertions(+), 18 deletions(-)

diff --git a/crypto/adiantum.c b/crypto/adiantum.c
index bbe519fbd739..519e95228ad8 100644
--- a/crypto/adiantum.c
+++ b/crypto/adiantum.c
@@ -367,30 +367,27 @@ static void nhpoly1305_final(struct nhpoly1305_ctx *ctx,
  * evaluated as a polynomial in GF(2^{130}-5), like in the Poly1305 MAC.  Note
  * that the polynomial evaluation by itself would suffice to achieve the ε-∆U
  * property; NH is used for performance since it's much faster than Poly1305.
  */
 static void adiantum_hash_message(struct skcipher_request *req,
-				  struct scatterlist *sgl, unsigned int nents,
-				  le128 *out)
+				  struct scatterlist *sgl, le128 *out)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	const struct adiantum_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
 	struct adiantum_request_ctx *rctx = skcipher_request_ctx(req);
-	const unsigned int bulk_len = req->cryptlen - BLOCKCIPHER_BLOCK_SIZE;
-	struct sg_mapping_iter miter;
-	unsigned int i, n;
+	unsigned int len = req->cryptlen - BLOCKCIPHER_BLOCK_SIZE;
+	struct scatter_walk walk;
 
 	nhpoly1305_init(&rctx->u.hash_ctx);
+	scatterwalk_start(&walk, sgl);
+	while (len) {
+		unsigned int n = scatterwalk_next(&walk, len);
 
-	sg_miter_start(&miter, sgl, nents, SG_MITER_FROM_SG | SG_MITER_ATOMIC);
-	for (i = 0; i < bulk_len; i += n) {
-		sg_miter_next(&miter);
-		n = min_t(unsigned int, miter.length, bulk_len - i);
-		nhpoly1305_update(&rctx->u.hash_ctx, tctx, miter.addr, n);
+		nhpoly1305_update(&rctx->u.hash_ctx, tctx, walk.addr, n);
+		scatterwalk_done_src(&walk, n);
+		len -= n;
 	}
-	sg_miter_stop(&miter);
-
 	nhpoly1305_final(&rctx->u.hash_ctx, tctx, out);
 }
 
 /* Continue Adiantum encryption/decryption after the stream cipher step */
 static int adiantum_finish(struct skcipher_request *req)
@@ -398,11 +395,10 @@ static int adiantum_finish(struct skcipher_request *req)
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	const struct adiantum_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
 	struct adiantum_request_ctx *rctx = skcipher_request_ctx(req);
 	const unsigned int bulk_len = req->cryptlen - BLOCKCIPHER_BLOCK_SIZE;
 	struct scatterlist *dst = req->dst;
-	const unsigned int dst_nents = sg_nents(dst);
 	le128 digest;
 
 	/* If decrypting, decrypt C_M with the block cipher to get P_M */
 	if (!rctx->enc)
 		crypto_cipher_decrypt_one(tctx->blockcipher, rctx->rbuf.bytes,
@@ -412,11 +408,12 @@ static int adiantum_finish(struct skcipher_request *req)
 	 * Second hash step
 	 *	enc: C_R = C_M - H_{K_H}(T, C_L)
 	 *	dec: P_R = P_M - H_{K_H}(T, P_L)
 	 */
 	le128_sub(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &rctx->header_hash);
-	if (dst_nents == 1 && dst->offset + req->cryptlen <= PAGE_SIZE) {
+	if (dst->length >= req->cryptlen &&
+	    dst->offset + req->cryptlen <= PAGE_SIZE) {
 		/* Fast path for single-page destination */
 		struct page *page = sg_page(dst);
 		void *virt = kmap_local_page(page) + dst->offset;
 
 		nhpoly1305_init(&rctx->u.hash_ctx);
@@ -426,11 +423,11 @@ static int adiantum_finish(struct skcipher_request *req)
 		memcpy(virt + bulk_len, &rctx->rbuf.bignum, sizeof(le128));
 		flush_dcache_page(page);
 		kunmap_local(virt);
 	} else {
 		/* Slow path that works for any destination scatterlist */
-		adiantum_hash_message(req, dst, dst_nents, &digest);
+		adiantum_hash_message(req, dst, &digest);
 		le128_sub(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &digest);
 		scatterwalk_map_and_copy(&rctx->rbuf.bignum, dst, bulk_len,
 					 sizeof(le128), 1);
 	}
 	return 0;
@@ -451,11 +448,10 @@ static int adiantum_crypt(struct skcipher_request *req, bool enc)
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	const struct adiantum_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
 	struct adiantum_request_ctx *rctx = skcipher_request_ctx(req);
 	const unsigned int bulk_len = req->cryptlen - BLOCKCIPHER_BLOCK_SIZE;
 	struct scatterlist *src = req->src;
-	const unsigned int src_nents = sg_nents(src);
 	unsigned int stream_len;
 	le128 digest;
 
 	if (req->cryptlen < BLOCKCIPHER_BLOCK_SIZE)
 		return -EINVAL;
@@ -466,22 +462,23 @@ static int adiantum_crypt(struct skcipher_request *req, bool enc)
 	 * First hash step
 	 *	enc: P_M = P_R + H_{K_H}(T, P_L)
 	 *	dec: C_M = C_R + H_{K_H}(T, C_L)
 	 */
 	adiantum_hash_header(req);
-	if (src_nents == 1 && src->offset + req->cryptlen <= PAGE_SIZE) {
+	if (src->length >= req->cryptlen &&
+	    src->offset + req->cryptlen <= PAGE_SIZE) {
 		/* Fast path for single-page source */
 		void *virt = kmap_local_page(sg_page(src)) + src->offset;
 
 		nhpoly1305_init(&rctx->u.hash_ctx);
 		nhpoly1305_update(&rctx->u.hash_ctx, tctx, virt, bulk_len);
 		nhpoly1305_final(&rctx->u.hash_ctx, tctx, &digest);
 		memcpy(&rctx->rbuf.bignum, virt + bulk_len, sizeof(le128));
 		kunmap_local(virt);
 	} else {
 		/* Slow path that works for any source scatterlist */
-		adiantum_hash_message(req, src, src_nents, &digest);
+		adiantum_hash_message(req, src, &digest);
 		scatterwalk_map_and_copy(&rctx->rbuf.bignum, src, bulk_len,
 					 sizeof(le128), 0);
 	}
 	le128_add(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &rctx->header_hash);
 	le128_add(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &digest);
-- 
2.52.0