* [PATCH 00/16] SHA-512 library functions
@ 2025-06-11  2:09 Eric Biggers
  2025-06-11  2:09 ` [PATCH 01/16] crypto: sha512 - rename conflicting symbols Eric Biggers
                   ` (15 more replies)
  0 siblings, 16 replies; 34+ messages in thread
From: Eric Biggers @ 2025-06-11  2:09 UTC (permalink / raw)
  To: linux-crypto
  Cc: linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390,
      sparclinux, x86, Ard Biesheuvel, Jason A. Donenfeld, Linus Torvalds

This series applies to v6.16-rc1 and is targeting the libcrypto-next
tree.  It is also available at:

    git fetch https://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git sha512-lib-v1

This series adds support for SHA-384, SHA-512, HMAC-SHA384, and
HMAC-SHA512 to lib/crypto/.  The new functions take advantage of the
kernel's existing architecture-optimized implementations of the SHA-512
compression function.  The new functions are fully tested using KUnit.

To avoid duplicating all arch-optimized implementations of the SHA-512
compression function (~3000 lines of code total), they are moved into
lib/crypto/ rather than copied.  To make the "sha384", "sha512",
"hmac(sha384)", and "hmac(sha512)" crypto_shash algorithms in the
old-school crypto API continue to be properly optimized after that, they
are reimplemented on top of lib/crypto/, which is straightforward.

The following lists some of the design choices and conventions that I've
followed in more detail.  Where these differ from the code or APIs for
other algorithms (e.g., SHA-256 in some cases), I'd like to do it this
way going forward and plan to fix up the other algorithms accordingly:

- APIs are fully documented with kerneldoc comments.

- APIs cannot fail, and return void.

- APIs work in all contexts.  This doesn't mean that they *should* be
  called in all contexts, but rather that they always just work as
  expected.

- Tests are KUnit tests, and they are fairly thorough (more thorough
  than crypto/testmgr.c) and also optionally include benchmarks.

- Architecture-optimized code is integrated the same way I'm doing it
  for lib/crc/: it's in subdirectories lib/crypto/$(SRCARCH), it's
  enabled by default, and it's inlined into the same module as the
  generic code.  This solves a number of problems; for more details, see
  https://lore.kernel.org/r/20250607200454.73587-1-ebiggers@kernel.org

- HMAC support is a first-class citizen.

- APIs handle zeroization, when applicable.

- Message contexts are *_ctx instead of *_state.  It's shorter, avoids
  ambiguity with the compression function state, and matches OpenSSL.

- Length arguments are size_t, are in bytes, are named len or *_len, and
  immediately follow the corresponding buffer.  The "object" being
  operated on is the first argument; outputs otherwise follow inputs.

- The structures for different algorithms use different types, which
  prevents usage errors where functions are mixed up between algorithms.

- The compression function state is strongly typed, not a plain array.
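[Editor's note: as a concrete illustration of the conventions listed above, here is a
minimal usage sketch based on the prototypes added in patch 2 of this series.  The
caller-side function and buffer names are hypothetical; only the sha512*() calls and
SHA512_DIGEST_SIZE come from the series itself.]

    #include <crypto/sha2.h>

    /* Hypothetical caller: hash a message one-shot and incrementally. */
    static void example_sha512_usage(const u8 *msg1, size_t msg1_len,
                                     const u8 *msg2, size_t msg2_len)
    {
            struct sha512_ctx ctx;
            u8 digest[SHA512_DIGEST_SIZE];

            /* One-shot computation: cannot fail, returns void. */
            sha512(msg1, msg1_len, digest);

            /*
             * Incremental computation: init/update/final.  The final
             * step zeroizes the context, so the caller need not.
             */
            sha512_init(&ctx);
            sha512_update(&ctx, msg1, msg1_len);
            sha512_update(&ctx, msg2, msg2_len);
            sha512_final(&ctx, digest);
    }

SHA-384 works the same way via struct sha384_ctx and the sha384_*() functions.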
Eric Biggers (16): crypto: sha512 - rename conflicting symbols lib/crypto/sha512: add support for SHA-384 and SHA-512 lib/crypto/sha512: add HMAC-SHA384 and HMAC-SHA512 support lib/crypto/sha512: add KUnit tests for SHA-384 and SHA-512 lib/crypto/sha256: add KUnit tests for SHA-224 and SHA-256 crypto: riscv/sha512 - stop depending on sha512_generic_block_fn crypto: sha512 - replace sha512_generic with wrapper around SHA-512 library lib/crypto/sha512: migrate arm-optimized SHA-512 code to library lib/crypto/sha512: migrate arm64-optimized SHA-512 code to library mips: cavium-octeon: move octeon-crypto.h into asm directory lib/crypto/sha512: migrate mips-optimized SHA-512 code to library lib/crypto/sha512: migrate riscv-optimized SHA-512 code to library lib/crypto/sha512: migrate s390-optimized SHA-512 code to library lib/crypto/sha512: migrate sparc-optimized SHA-512 code to library lib/crypto/sha512: migrate x86-optimized SHA-512 code to library crypto: sha512 - remove sha512_base.h arch/arm/configs/exynos_defconfig | 1 - arch/arm/configs/milbeaut_m10v_defconfig | 1 - arch/arm/configs/multi_v7_defconfig | 1 - arch/arm/configs/omap2plus_defconfig | 1 - arch/arm/configs/pxa_defconfig | 1 - arch/arm/crypto/Kconfig | 10 - arch/arm/crypto/Makefile | 15 - arch/arm/crypto/sha512-glue.c | 110 --- arch/arm/crypto/sha512-neon-glue.c | 75 -- arch/arm/crypto/sha512.h | 3 - arch/arm64/configs/defconfig | 1 - arch/arm64/crypto/Kconfig | 19 - arch/arm64/crypto/Makefile | 14 - arch/arm64/crypto/sha512-ce-glue.c | 96 --- arch/arm64/crypto/sha512-glue.c | 83 --- arch/mips/cavium-octeon/crypto/Makefile | 1 - .../mips/cavium-octeon/crypto/octeon-crypto.c | 3 +- arch/mips/cavium-octeon/crypto/octeon-md5.c | 3 +- arch/mips/cavium-octeon/crypto/octeon-sha1.c | 3 +- .../mips/cavium-octeon/crypto/octeon-sha256.c | 3 +- .../mips/cavium-octeon/crypto/octeon-sha512.c | 167 ----- arch/mips/configs/cavium_octeon_defconfig | 1 - arch/mips/crypto/Kconfig | 10 - .../asm/octeon/crypto.h} | 0 arch/riscv/crypto/Kconfig | 11 - arch/riscv/crypto/Makefile | 3 - arch/riscv/crypto/sha512-riscv64-glue.c | 124 ---- arch/s390/configs/debug_defconfig | 1 - arch/s390/configs/defconfig | 1 - arch/s390/crypto/Kconfig | 10 - arch/s390/crypto/Makefile | 1 - arch/s390/crypto/sha512_s390.c | 151 ---- arch/sparc/crypto/Kconfig | 10 - arch/sparc/crypto/Makefile | 2 - arch/sparc/crypto/sha512_glue.c | 122 ---- arch/x86/crypto/Kconfig | 13 - arch/x86/crypto/Makefile | 3 - arch/x86/crypto/sha512_ssse3_glue.c | 322 --------- crypto/Kconfig | 4 +- crypto/Makefile | 2 +- crypto/sha512.c | 254 +++++++ crypto/sha512_generic.c | 217 ------ crypto/testmgr.c | 16 + drivers/crypto/starfive/jh7110-hash.c | 8 +- include/crypto/sha2.h | 350 +++++++++ include/crypto/sha512_base.h | 120 ---- lib/crypto/Kconfig | 20 + lib/crypto/Makefile | 38 + lib/crypto/arm/.gitignore | 2 + .../crypto => lib/crypto/arm}/sha512-armv4.pl | 0 lib/crypto/arm/sha512.h | 38 + lib/crypto/arm64/.gitignore | 2 + .../crypto/arm64}/sha512-ce-core.S | 10 +- lib/crypto/arm64/sha512.h | 46 ++ lib/crypto/mips/sha512.h | 74 ++ .../riscv}/sha512-riscv64-zvknhb-zvkb.S | 4 +- lib/crypto/riscv/sha512.h | 41 ++ lib/crypto/s390/sha512.h | 28 + lib/crypto/sha512.c | 403 +++++++++++ lib/crypto/sparc/sha512.h | 42 ++ .../crypto => lib/crypto/sparc}/sha512_asm.S | 0 lib/crypto/tests/Kconfig | 24 + lib/crypto/tests/Makefile | 6 + lib/crypto/tests/hash-test-template.h | 512 ++++++++++++++ lib/crypto/tests/sha224-testvecs.h | 223 ++++++ lib/crypto/tests/sha224_kunit.c | 50 ++ 
lib/crypto/tests/sha256-testvecs.h | 223 ++++++ lib/crypto/tests/sha256_kunit.c | 39 ++ lib/crypto/tests/sha384-testvecs.h | 566 +++++++++++++++ lib/crypto/tests/sha384_kunit.c | 48 ++ lib/crypto/tests/sha512-testvecs.h | 662 ++++++++++++++++++ lib/crypto/tests/sha512_kunit.c | 48 ++ .../crypto/x86}/sha512-avx-asm.S | 11 +- .../crypto/x86}/sha512-avx2-asm.S | 11 +- .../crypto/x86}/sha512-ssse3-asm.S | 12 +- lib/crypto/x86/sha512.h | 54 ++ scripts/crypto/gen-hash-testvecs.py | 83 +++ 77 files changed, 3931 insertions(+), 1756 deletions(-) delete mode 100644 arch/arm/crypto/sha512-glue.c delete mode 100644 arch/arm/crypto/sha512-neon-glue.c delete mode 100644 arch/arm/crypto/sha512.h delete mode 100644 arch/arm64/crypto/sha512-ce-glue.c delete mode 100644 arch/arm64/crypto/sha512-glue.c delete mode 100644 arch/mips/cavium-octeon/crypto/octeon-sha512.c rename arch/mips/{cavium-octeon/crypto/octeon-crypto.h => include/asm/octeon/crypto.h} (100%) delete mode 100644 arch/riscv/crypto/sha512-riscv64-glue.c delete mode 100644 arch/s390/crypto/sha512_s390.c delete mode 100644 arch/sparc/crypto/sha512_glue.c delete mode 100644 arch/x86/crypto/sha512_ssse3_glue.c create mode 100644 crypto/sha512.c delete mode 100644 crypto/sha512_generic.c delete mode 100644 include/crypto/sha512_base.h create mode 100644 lib/crypto/arm/.gitignore rename {arch/arm/crypto => lib/crypto/arm}/sha512-armv4.pl (100%) create mode 100644 lib/crypto/arm/sha512.h create mode 100644 lib/crypto/arm64/.gitignore rename {arch/arm64/crypto => lib/crypto/arm64}/sha512-ce-core.S (97%) create mode 100644 lib/crypto/arm64/sha512.h create mode 100644 lib/crypto/mips/sha512.h rename {arch/riscv/crypto => lib/crypto/riscv}/sha512-riscv64-zvknhb-zvkb.S (98%) create mode 100644 lib/crypto/riscv/sha512.h create mode 100644 lib/crypto/s390/sha512.h create mode 100644 lib/crypto/sha512.c create mode 100644 lib/crypto/sparc/sha512.h rename {arch/sparc/crypto => lib/crypto/sparc}/sha512_asm.S (100%) create mode 100644 lib/crypto/tests/Kconfig create mode 100644 lib/crypto/tests/Makefile create mode 100644 lib/crypto/tests/hash-test-template.h create mode 100644 lib/crypto/tests/sha224-testvecs.h create mode 100644 lib/crypto/tests/sha224_kunit.c create mode 100644 lib/crypto/tests/sha256-testvecs.h create mode 100644 lib/crypto/tests/sha256_kunit.c create mode 100644 lib/crypto/tests/sha384-testvecs.h create mode 100644 lib/crypto/tests/sha384_kunit.c create mode 100644 lib/crypto/tests/sha512-testvecs.h create mode 100644 lib/crypto/tests/sha512_kunit.c rename {arch/x86/crypto => lib/crypto/x86}/sha512-avx-asm.S (97%) rename {arch/x86/crypto => lib/crypto/x86}/sha512-avx2-asm.S (98%) rename {arch/x86/crypto => lib/crypto/x86}/sha512-ssse3-asm.S (97%) create mode 100644 lib/crypto/x86/sha512.h create mode 100755 scripts/crypto/gen-hash-testvecs.py base-commit: 19272b37aa4f83ca52bdf9c16d5d81bdd1354494 -- 2.49.0 _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply [flat|nested] 34+ messages in thread
* [PATCH 01/16] crypto: sha512 - rename conflicting symbols 2025-06-11 2:09 [PATCH 00/16] SHA-512 library functions Eric Biggers @ 2025-06-11 2:09 ` Eric Biggers 2025-06-11 2:09 ` [PATCH 02/16] lib/crypto/sha512: add support for SHA-384 and SHA-512 Eric Biggers ` (14 subsequent siblings) 15 siblings, 0 replies; 34+ messages in thread From: Eric Biggers @ 2025-06-11 2:09 UTC (permalink / raw) To: linux-crypto Cc: linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, Ard Biesheuvel, Jason A . Donenfeld , Linus Torvalds From: Eric Biggers <ebiggers@google.com> Rename existing functions and structs in architecture-optimized SHA-512 code that had names conflicting with the upcoming library interface which will be added to <crypto/sha2.h>: sha384_init, sha512_init, sha512_update, sha384, and sha512. Note: all affected code will be superseded by later commits that migrate the arch-optimized SHA-512 code into the library. This commit simply keeps the kernel building for the initial introduction of the library. Signed-off-by: Eric Biggers <ebiggers@google.com> --- arch/arm64/crypto/sha512-glue.c | 8 ++++---- arch/s390/crypto/sha512_s390.c | 8 ++++---- arch/sparc/crypto/sha512_glue.c | 14 +++++++------- arch/x86/crypto/sha512_ssse3_glue.c | 10 +++++----- 4 files changed, 20 insertions(+), 20 deletions(-) diff --git a/arch/arm64/crypto/sha512-glue.c b/arch/arm64/crypto/sha512-glue.c index 15aa9d8b7b2c4..a78e184c100fa 100644 --- a/arch/arm64/crypto/sha512-glue.c +++ b/arch/arm64/crypto/sha512-glue.c @@ -25,12 +25,12 @@ static void sha512_arm64_transform(struct sha512_state *sst, u8 const *src, int blocks) { sha512_blocks_arch(sst->state, src, blocks); } -static int sha512_update(struct shash_desc *desc, const u8 *data, - unsigned int len) +static int sha512_update_arm64(struct shash_desc *desc, const u8 *data, + unsigned int len) { return sha512_base_do_update_blocks(desc, data, len, sha512_arm64_transform); } @@ -42,11 +42,11 @@ static int sha512_finup(struct shash_desc *desc, const u8 *data, } static struct shash_alg algs[] = { { .digestsize = SHA512_DIGEST_SIZE, .init = sha512_base_init, - .update = sha512_update, + .update = sha512_update_arm64, .finup = sha512_finup, .descsize = SHA512_STATE_SIZE, .base.cra_name = "sha512", .base.cra_driver_name = "sha512-arm64", .base.cra_priority = 150, @@ -55,11 +55,11 @@ static struct shash_alg algs[] = { { .base.cra_blocksize = SHA512_BLOCK_SIZE, .base.cra_module = THIS_MODULE, }, { .digestsize = SHA384_DIGEST_SIZE, .init = sha384_base_init, - .update = sha512_update, + .update = sha512_update_arm64, .finup = sha512_finup, .descsize = SHA512_STATE_SIZE, .base.cra_name = "sha384", .base.cra_driver_name = "sha384-arm64", .base.cra_priority = 150, diff --git a/arch/s390/crypto/sha512_s390.c b/arch/s390/crypto/sha512_s390.c index 33711a29618c3..e8bb172dbed75 100644 --- a/arch/s390/crypto/sha512_s390.c +++ b/arch/s390/crypto/sha512_s390.c @@ -15,11 +15,11 @@ #include <linux/kernel.h> #include <linux/module.h> #include "sha.h" -static int sha512_init(struct shash_desc *desc) +static int sha512_init_s390(struct shash_desc *desc) { struct s390_sha_ctx *ctx = shash_desc_ctx(desc); ctx->sha512.state[0] = SHA512_H0; ctx->sha512.state[1] = SHA512_H1; @@ -60,11 +60,11 @@ static int sha512_import(struct shash_desc *desc, const void *in) return 0; } static struct shash_alg sha512_alg = { .digestsize = SHA512_DIGEST_SIZE, - .init = sha512_init, + .init = sha512_init_s390, .update = s390_sha_update_blocks, .finup = s390_sha_finup, .export = 
sha512_export, .import = sha512_import, .descsize = sizeof(struct s390_sha_ctx), @@ -80,11 +80,11 @@ static struct shash_alg sha512_alg = { } }; MODULE_ALIAS_CRYPTO("sha512"); -static int sha384_init(struct shash_desc *desc) +static int sha384_init_s390(struct shash_desc *desc) { struct s390_sha_ctx *ctx = shash_desc_ctx(desc); ctx->sha512.state[0] = SHA384_H0; ctx->sha512.state[1] = SHA384_H1; @@ -101,11 +101,11 @@ static int sha384_init(struct shash_desc *desc) return 0; } static struct shash_alg sha384_alg = { .digestsize = SHA384_DIGEST_SIZE, - .init = sha384_init, + .init = sha384_init_s390, .update = s390_sha_update_blocks, .finup = s390_sha_finup, .export = sha512_export, .import = sha512_import, .descsize = sizeof(struct s390_sha_ctx), diff --git a/arch/sparc/crypto/sha512_glue.c b/arch/sparc/crypto/sha512_glue.c index 47b9277b6877a..fb81c3290c8c0 100644 --- a/arch/sparc/crypto/sha512_glue.c +++ b/arch/sparc/crypto/sha512_glue.c @@ -38,11 +38,11 @@ static int sha512_sparc64_finup(struct shash_desc *desc, const u8 *src, { sha512_base_do_finup(desc, src, len, sha512_block); return sha512_base_finish(desc, out); } -static struct shash_alg sha512 = { +static struct shash_alg sha512_alg = { .digestsize = SHA512_DIGEST_SIZE, .init = sha512_base_init, .update = sha512_sparc64_update, .finup = sha512_sparc64_finup, .descsize = SHA512_STATE_SIZE, @@ -53,11 +53,11 @@ static struct shash_alg sha512 = { .cra_blocksize = SHA512_BLOCK_SIZE, .cra_module = THIS_MODULE, } }; -static struct shash_alg sha384 = { +static struct shash_alg sha384_alg = { .digestsize = SHA384_DIGEST_SIZE, .init = sha384_base_init, .update = sha512_sparc64_update, .finup = sha512_sparc64_finup, .descsize = SHA512_STATE_SIZE, @@ -85,17 +85,17 @@ static bool __init sparc64_has_sha512_opcode(void) } static int __init sha512_sparc64_mod_init(void) { if (sparc64_has_sha512_opcode()) { - int ret = crypto_register_shash(&sha384); + int ret = crypto_register_shash(&sha384_alg); if (ret < 0) return ret; - ret = crypto_register_shash(&sha512); + ret = crypto_register_shash(&sha512_alg); if (ret < 0) { - crypto_unregister_shash(&sha384); + crypto_unregister_shash(&sha384_alg); return ret; } pr_info("Using sparc64 sha512 opcode optimized SHA-512/SHA-384 implementation\n"); return 0; @@ -104,12 +104,12 @@ static int __init sha512_sparc64_mod_init(void) return -ENODEV; } static void __exit sha512_sparc64_mod_fini(void) { - crypto_unregister_shash(&sha384); - crypto_unregister_shash(&sha512); + crypto_unregister_shash(&sha384_alg); + crypto_unregister_shash(&sha512_alg); } module_init(sha512_sparc64_mod_init); module_exit(sha512_sparc64_mod_fini); diff --git a/arch/x86/crypto/sha512_ssse3_glue.c b/arch/x86/crypto/sha512_ssse3_glue.c index 067684c543952..97744b7d23817 100644 --- a/arch/x86/crypto/sha512_ssse3_glue.c +++ b/arch/x86/crypto/sha512_ssse3_glue.c @@ -36,12 +36,12 @@ #include <crypto/sha512_base.h> asmlinkage void sha512_transform_ssse3(struct sha512_state *state, const u8 *data, int blocks); -static int sha512_update(struct shash_desc *desc, const u8 *data, - unsigned int len, sha512_block_fn *sha512_xform) +static int sha512_update_x86(struct shash_desc *desc, const u8 *data, + unsigned int len, sha512_block_fn *sha512_xform) { int remain; /* * Make sure struct sha512_state begins directly with the SHA512 @@ -67,11 +67,11 @@ static int sha512_finup(struct shash_desc *desc, const u8 *data, } static int sha512_ssse3_update(struct shash_desc *desc, const u8 *data, unsigned int len) { - return sha512_update(desc, data, len, 
sha512_transform_ssse3); + return sha512_update_x86(desc, data, len, sha512_transform_ssse3); } static int sha512_ssse3_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { @@ -139,11 +139,11 @@ static bool avx_usable(void) } static int sha512_avx_update(struct shash_desc *desc, const u8 *data, unsigned int len) { - return sha512_update(desc, data, len, sha512_transform_avx); + return sha512_update_x86(desc, data, len, sha512_transform_avx); } static int sha512_avx_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { @@ -201,11 +201,11 @@ asmlinkage void sha512_transform_rorx(struct sha512_state *state, const u8 *data, int blocks); static int sha512_avx2_update(struct shash_desc *desc, const u8 *data, unsigned int len) { - return sha512_update(desc, data, len, sha512_transform_rorx); + return sha512_update_x86(desc, data, len, sha512_transform_rorx); } static int sha512_avx2_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { -- 2.49.0 _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH 02/16] lib/crypto/sha512: add support for SHA-384 and SHA-512 2025-06-11 2:09 [PATCH 00/16] SHA-512 library functions Eric Biggers 2025-06-11 2:09 ` [PATCH 01/16] crypto: sha512 - rename conflicting symbols Eric Biggers @ 2025-06-11 2:09 ` Eric Biggers 2025-06-11 2:09 ` [PATCH 03/16] lib/crypto/sha512: add HMAC-SHA384 and HMAC-SHA512 support Eric Biggers ` (13 subsequent siblings) 15 siblings, 0 replies; 34+ messages in thread From: Eric Biggers @ 2025-06-11 2:09 UTC (permalink / raw) To: linux-crypto Cc: linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, Ard Biesheuvel, Jason A . Donenfeld , Linus Torvalds From: Eric Biggers <ebiggers@google.com> Add basic support for SHA-384 and SHA-512 to lib/crypto/. Various in-kernel users will be able to use this instead of the old-school crypto API, which is harder to use and has more overhead. The basic support added by this commit consists of the API and its documentation, backed by a C implementation of the algorithms. sha512_block_generic() is derived from crypto/sha512_generic.c. Signed-off-by: Eric Biggers <ebiggers@google.com> --- include/crypto/sha2.h | 128 ++++++++++++++++++++ lib/crypto/Kconfig | 10 ++ lib/crypto/Makefile | 6 + lib/crypto/sha512.c | 263 ++++++++++++++++++++++++++++++++++++++++++ 4 files changed, 407 insertions(+) create mode 100644 lib/crypto/sha512.c diff --git a/include/crypto/sha2.h b/include/crypto/sha2.h index 4912572578dc2..9b679ab9a3230 100644 --- a/include/crypto/sha2.h +++ b/include/crypto/sha2.h @@ -127,6 +127,134 @@ static inline void sha224_init(struct sha256_state *sctx) sha224_block_init(&sctx->ctx); } /* Simply use sha256_update as it is equivalent to sha224_update. */ void sha224_final(struct sha256_state *sctx, u8 out[SHA224_DIGEST_SIZE]); +/* State for the SHA-512 (and SHA-384) compression function */ +struct sha512_block_state { + u64 h[8]; +}; + +/* + * Context structure, shared by SHA-384 and SHA-512. The sha384_ctx and + * sha512_ctx structs wrap this one so that the API has proper typing and + * doesn't allow mixing the SHA-384 and SHA-512 functions arbitrarily. + */ +struct __sha512_ctx { + struct sha512_block_state state; + u64 bytecount_lo; + u64 bytecount_hi; + u8 buf[SHA512_BLOCK_SIZE]; +}; +void __sha512_update(struct __sha512_ctx *ctx, const u8 *data, size_t len); + +/** + * struct sha384_ctx - Context for hashing a message with SHA-384 + * @ctx: private + */ +struct sha384_ctx { + struct __sha512_ctx ctx; +}; + +/** + * sha384_init() - Initialize a SHA-384 context for a new message + * @ctx: the context to initialize + * + * If you don't need incremental computation, consider sha384() instead. + * + * Context: Any context. + */ +void sha384_init(struct sha384_ctx *ctx); + +/** + * sha384_update() - Update a SHA-384 context with message data + * @ctx: the context to update; must have been initialized + * @data: the message data + * @len: the data length in bytes + * + * This can be called any number of times. + * + * Context: Any context. + */ +static inline void sha384_update(struct sha384_ctx *ctx, + const u8 *data, size_t len) +{ + __sha512_update(&ctx->ctx, data, len); +} + +/** + * sha384_final() - Finish computing a SHA-384 message digest + * @ctx: the context to finalize; must have been initialized + * @out: (output) the resulting SHA-384 message digest + * + * After finishing, this zeroizes @ctx. So the caller does not need to do it. + * + * Context: Any context. 
+ */ +void sha384_final(struct sha384_ctx *ctx, u8 out[SHA384_DIGEST_SIZE]); + +/** + * sha384() - Compute SHA-384 message digest in one shot + * @data: the message data + * @len: the data length in bytes + * @out: (output) the resulting SHA-384 message digest + * + * Context: Any context. + */ +void sha384(const u8 *data, size_t len, u8 out[SHA384_DIGEST_SIZE]); + +/** + * struct sha512_ctx - Context for hashing a message with SHA-512 + * @ctx: private + */ +struct sha512_ctx { + struct __sha512_ctx ctx; +}; + +/** + * sha512_init() - Initialize a SHA-512 context for a new message + * @ctx: the context to initialize + * + * If you don't need incremental computation, consider sha512() instead. + * + * Context: Any context. + */ +void sha512_init(struct sha512_ctx *ctx); + +/** + * sha512_update() - Update a SHA-512 context with message data + * @ctx: the context to update; must have been initialized + * @data: the message data + * @len: the data length in bytes + * + * This can be called any number of times. + * + * Context: Any context. + */ +static inline void sha512_update(struct sha512_ctx *ctx, + const u8 *data, size_t len) +{ + __sha512_update(&ctx->ctx, data, len); +} + +/** + * sha512_final() - Finish computing a SHA-512 message digest + * @ctx: the context to finalize; must have been initialized + * @out: (output) the resulting SHA-512 message digest + * + * After finishing, this zeroizes @ctx. So the caller does not need to do it. + * + * Context: Any context. + */ +void sha512_final(struct sha512_ctx *ctx, u8 out[SHA512_DIGEST_SIZE]); + +/** + * sha512() - Compute SHA-512 message digest in one shot + * @data: the message data + * @len: the data length in bytes + * @out: (output) the resulting SHA-512 message digest + * + * Context: Any context. + */ +void sha512(const u8 *data, size_t len, u8 out[SHA512_DIGEST_SIZE]); + #endif /* _CRYPTO_SHA2_H */ diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig index 1ec1466108ccd..adabf2640fdc2 100644 --- a/lib/crypto/Kconfig +++ b/lib/crypto/Kconfig @@ -165,10 +165,20 @@ config CRYPTO_LIB_SHA256_GENERIC This symbol can be selected by arch implementations of the SHA-256 library interface that require the generic code as a fallback, e.g., for SIMD implementations. If no arch specific implementation is enabled, this implementation serves the users of CRYPTO_LIB_SHA256. +config CRYPTO_LIB_SHA512 + tristate + help + The SHA-384 and SHA-512 library functions. Select this if your module + uses any of these functions from <crypto/sha2.h>. 
+ +config CRYPTO_LIB_SHA512_ARCH + bool + depends on CRYPTO_LIB_SHA512 + config CRYPTO_LIB_SM3 tristate if !KMSAN # avoid false positives from assembly if ARM diff --git a/lib/crypto/Makefile b/lib/crypto/Makefile index 3e79283b617d9..7e8baa4590896 100644 --- a/lib/crypto/Makefile +++ b/lib/crypto/Makefile @@ -58,10 +58,16 @@ obj-$(CONFIG_CRYPTO_LIB_SHA256) += libsha256.o libsha256-y := sha256.o obj-$(CONFIG_CRYPTO_LIB_SHA256_GENERIC) += libsha256-generic.o libsha256-generic-y := sha256-generic.o +obj-$(CONFIG_CRYPTO_LIB_SHA512) += libsha512.o +libsha512-y := sha512.o +ifeq ($(CONFIG_CRYPTO_LIB_SHA512_ARCH),y) +CFLAGS_sha512.o += -I$(src)/$(SRCARCH) +endif # CONFIG_CRYPTO_LIB_SHA512_ARCH + obj-$(CONFIG_MPILIB) += mpi/ obj-$(CONFIG_CRYPTO_SELFTESTS) += simd.o obj-$(CONFIG_CRYPTO_LIB_SM3) += libsm3.o diff --git a/lib/crypto/sha512.c b/lib/crypto/sha512.c new file mode 100644 index 0000000000000..88452e21b66ee --- /dev/null +++ b/lib/crypto/sha512.c @@ -0,0 +1,263 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * SHA-384 and SHA-512 library functions + * + * Copyright (c) Jean-Luc Cooke <jlcooke@certainkey.com> + * Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk> + * Copyright (c) 2003 Kyle McMartin <kyle@debian.org> + * Copyright 2025 Google LLC + */ + +#include <crypto/internal/sha2.h> +#include <linux/kernel.h> +#include <linux/module.h> +#include <linux/overflow.h> + +static const u64 sha512_K[80] = { + 0x428a2f98d728ae22ULL, 0x7137449123ef65cdULL, 0xb5c0fbcfec4d3b2fULL, + 0xe9b5dba58189dbbcULL, 0x3956c25bf348b538ULL, 0x59f111f1b605d019ULL, + 0x923f82a4af194f9bULL, 0xab1c5ed5da6d8118ULL, 0xd807aa98a3030242ULL, + 0x12835b0145706fbeULL, 0x243185be4ee4b28cULL, 0x550c7dc3d5ffb4e2ULL, + 0x72be5d74f27b896fULL, 0x80deb1fe3b1696b1ULL, 0x9bdc06a725c71235ULL, + 0xc19bf174cf692694ULL, 0xe49b69c19ef14ad2ULL, 0xefbe4786384f25e3ULL, + 0x0fc19dc68b8cd5b5ULL, 0x240ca1cc77ac9c65ULL, 0x2de92c6f592b0275ULL, + 0x4a7484aa6ea6e483ULL, 0x5cb0a9dcbd41fbd4ULL, 0x76f988da831153b5ULL, + 0x983e5152ee66dfabULL, 0xa831c66d2db43210ULL, 0xb00327c898fb213fULL, + 0xbf597fc7beef0ee4ULL, 0xc6e00bf33da88fc2ULL, 0xd5a79147930aa725ULL, + 0x06ca6351e003826fULL, 0x142929670a0e6e70ULL, 0x27b70a8546d22ffcULL, + 0x2e1b21385c26c926ULL, 0x4d2c6dfc5ac42aedULL, 0x53380d139d95b3dfULL, + 0x650a73548baf63deULL, 0x766a0abb3c77b2a8ULL, 0x81c2c92e47edaee6ULL, + 0x92722c851482353bULL, 0xa2bfe8a14cf10364ULL, 0xa81a664bbc423001ULL, + 0xc24b8b70d0f89791ULL, 0xc76c51a30654be30ULL, 0xd192e819d6ef5218ULL, + 0xd69906245565a910ULL, 0xf40e35855771202aULL, 0x106aa07032bbd1b8ULL, + 0x19a4c116b8d2d0c8ULL, 0x1e376c085141ab53ULL, 0x2748774cdf8eeb99ULL, + 0x34b0bcb5e19b48a8ULL, 0x391c0cb3c5c95a63ULL, 0x4ed8aa4ae3418acbULL, + 0x5b9cca4f7763e373ULL, 0x682e6ff3d6b2b8a3ULL, 0x748f82ee5defb2fcULL, + 0x78a5636f43172f60ULL, 0x84c87814a1f0ab72ULL, 0x8cc702081a6439ecULL, + 0x90befffa23631e28ULL, 0xa4506cebde82bde9ULL, 0xbef9a3f7b2c67915ULL, + 0xc67178f2e372532bULL, 0xca273eceea26619cULL, 0xd186b8c721c0c207ULL, + 0xeada7dd6cde0eb1eULL, 0xf57d4f7fee6ed178ULL, 0x06f067aa72176fbaULL, + 0x0a637dc5a2c898a6ULL, 0x113f9804bef90daeULL, 0x1b710b35131c471bULL, + 0x28db77f523047d84ULL, 0x32caab7b40c72493ULL, 0x3c9ebe0a15c9bebcULL, + 0x431d67c49c100d4cULL, 0x4cc5d4becb3e42b6ULL, 0x597f299cfc657e2aULL, + 0x5fcb6fab3ad6faecULL, 0x6c44198c4a475817ULL, +}; + +static const struct sha512_block_state sha384_iv = { + .h = { + SHA384_H0, SHA384_H1, SHA384_H2, SHA384_H3, + SHA384_H4, SHA384_H5, SHA384_H6, SHA384_H7, + }, +}; + +static const struct sha512_block_state sha512_iv 
= { + .h = { + SHA512_H0, SHA512_H1, SHA512_H2, SHA512_H3, + SHA512_H4, SHA512_H5, SHA512_H6, SHA512_H7, + }, +}; + +#define Ch(x, y, z) ((z) ^ ((x) & ((y) ^ (z)))) +#define Maj(x, y, z) (((x) & (y)) | ((z) & ((x) | (y)))) +#define e0(x) (ror64((x), 28) ^ ror64((x), 34) ^ ror64((x), 39)) +#define e1(x) (ror64((x), 14) ^ ror64((x), 18) ^ ror64((x), 41)) +#define s0(x) (ror64((x), 1) ^ ror64((x), 8) ^ ((x) >> 7)) +#define s1(x) (ror64((x), 19) ^ ror64((x), 61) ^ ((x) >> 6)) + +static void sha512_block_generic(struct sha512_block_state *state, + const u8 *data) +{ + u64 a = state->h[0]; + u64 b = state->h[1]; + u64 c = state->h[2]; + u64 d = state->h[3]; + u64 e = state->h[4]; + u64 f = state->h[5]; + u64 g = state->h[6]; + u64 h = state->h[7]; + u64 t1, t2; + u64 W[16]; + + for (int j = 0; j < 16; j++) + W[j] = get_unaligned_be64(data + j * sizeof(u64)); + + for (int i = 0; i < 80; i += 8) { + if ((i & 15) == 0 && i != 0) { + for (int j = 0; j < 16; j++) { + W[j & 15] += s1(W[(j - 2) & 15]) + + W[(j - 7) & 15] + + s0(W[(j - 15) & 15]); + } + } + t1 = h + e1(e) + Ch(e, f, g) + sha512_K[i] + W[(i & 15)]; + t2 = e0(a) + Maj(a, b, c); d += t1; h = t1 + t2; + t1 = g + e1(d) + Ch(d, e, f) + sha512_K[i+1] + W[(i & 15) + 1]; + t2 = e0(h) + Maj(h, a, b); c += t1; g = t1 + t2; + t1 = f + e1(c) + Ch(c, d, e) + sha512_K[i+2] + W[(i & 15) + 2]; + t2 = e0(g) + Maj(g, h, a); b += t1; f = t1 + t2; + t1 = e + e1(b) + Ch(b, c, d) + sha512_K[i+3] + W[(i & 15) + 3]; + t2 = e0(f) + Maj(f, g, h); a += t1; e = t1 + t2; + t1 = d + e1(a) + Ch(a, b, c) + sha512_K[i+4] + W[(i & 15) + 4]; + t2 = e0(e) + Maj(e, f, g); h += t1; d = t1 + t2; + t1 = c + e1(h) + Ch(h, a, b) + sha512_K[i+5] + W[(i & 15) + 5]; + t2 = e0(d) + Maj(d, e, f); g += t1; c = t1 + t2; + t1 = b + e1(g) + Ch(g, h, a) + sha512_K[i+6] + W[(i & 15) + 6]; + t2 = e0(c) + Maj(c, d, e); f += t1; b = t1 + t2; + t1 = a + e1(f) + Ch(f, g, h) + sha512_K[i+7] + W[(i & 15) + 7]; + t2 = e0(b) + Maj(b, c, d); e += t1; a = t1 + t2; + } + + state->h[0] += a; + state->h[1] += b; + state->h[2] += c; + state->h[3] += d; + state->h[4] += e; + state->h[5] += f; + state->h[6] += g; + state->h[7] += h; +} + +static void __maybe_unused +sha512_blocks_generic(struct sha512_block_state *state, + const u8 *data, size_t nblocks) +{ + do { + sha512_block_generic(state, data); + data += SHA512_BLOCK_SIZE; + } while (--nblocks); +} + +#ifdef CONFIG_CRYPTO_LIB_SHA512_ARCH +#include "sha512.h" /* $(SRCARCH)/sha512.h */ +#else +#define sha512_blocks sha512_blocks_generic +#endif + +static void __sha512_final(struct __sha512_ctx *ctx, + u8 *out, size_t digest_size) +{ + const __be64 bitcount_hi = cpu_to_be64((ctx->bytecount_hi << 3) | + (ctx->bytecount_lo >> 61)); + const __be64 bitcount_lo = cpu_to_be64(ctx->bytecount_lo << 3); + size_t partial = ctx->bytecount_lo % SHA512_BLOCK_SIZE; + + ctx->buf[partial++] = 0x80; + if (partial > SHA512_BLOCK_SIZE - 16) { + memset(&ctx->buf[partial], 0, SHA512_BLOCK_SIZE - partial); + sha512_blocks(&ctx->state, ctx->buf, 1); + partial = 0; + } + memset(&ctx->buf[partial], 0, SHA512_BLOCK_SIZE - 16 - partial); + memcpy(&ctx->buf[SHA512_BLOCK_SIZE - 16], &bitcount_hi, 8); + memcpy(&ctx->buf[SHA512_BLOCK_SIZE - 8], &bitcount_lo, 8); + sha512_blocks(&ctx->state, ctx->buf, 1); + + for (size_t i = 0; i < digest_size; i += 8) + put_unaligned_be64(ctx->state.h[i / 8], out + i); +} + +static void __sha512_init(struct __sha512_ctx *ctx, + const struct sha512_block_state *iv, + u64 initial_bytecount) +{ + ctx->state = *iv; + ctx->bytecount_lo = 
initial_bytecount; + ctx->bytecount_hi = 0; +} + +void sha384_init(struct sha384_ctx *ctx) +{ + __sha512_init(&ctx->ctx, &sha384_iv, 0); +} +EXPORT_SYMBOL_GPL(sha384_init); + +void sha512_init(struct sha512_ctx *ctx) +{ + __sha512_init(&ctx->ctx, &sha512_iv, 0); +} +EXPORT_SYMBOL_GPL(sha512_init); + +void __sha512_update(struct __sha512_ctx *ctx, const u8 *data, size_t len) +{ + size_t partial = ctx->bytecount_lo % SHA512_BLOCK_SIZE; + + if (check_add_overflow(ctx->bytecount_lo, len, &ctx->bytecount_lo)) + ctx->bytecount_hi++; + + if (partial + len >= SHA512_BLOCK_SIZE) { + size_t nblocks; + + if (partial) { + size_t l = SHA512_BLOCK_SIZE - partial; + + memcpy(&ctx->buf[partial], data, l); + data += l; + len -= l; + + sha512_blocks(&ctx->state, ctx->buf, 1); + } + + nblocks = len / SHA512_BLOCK_SIZE; + len %= SHA512_BLOCK_SIZE; + + if (nblocks) { + sha512_blocks(&ctx->state, data, nblocks); + data += nblocks * SHA512_BLOCK_SIZE; + } + partial = 0; + } + if (len) + memcpy(&ctx->buf[partial], data, len); +} +EXPORT_SYMBOL_GPL(__sha512_update); + +void sha384_final(struct sha384_ctx *ctx, u8 out[SHA384_DIGEST_SIZE]) +{ + __sha512_final(&ctx->ctx, out, SHA384_DIGEST_SIZE); + memzero_explicit(ctx, sizeof(*ctx)); +} +EXPORT_SYMBOL_GPL(sha384_final); + +void sha512_final(struct sha512_ctx *ctx, u8 out[SHA512_DIGEST_SIZE]) +{ + __sha512_final(&ctx->ctx, out, SHA512_DIGEST_SIZE); + memzero_explicit(ctx, sizeof(*ctx)); +} +EXPORT_SYMBOL_GPL(sha512_final); + +void sha384(const u8 *data, size_t len, u8 out[SHA384_DIGEST_SIZE]) +{ + struct sha384_ctx ctx; + + sha384_init(&ctx); + sha384_update(&ctx, data, len); + sha384_final(&ctx, out); +} +EXPORT_SYMBOL_GPL(sha384); + +void sha512(const u8 *data, size_t len, u8 out[SHA512_DIGEST_SIZE]) +{ + struct sha512_ctx ctx; + + sha512_init(&ctx); + sha512_update(&ctx, data, len); + sha512_final(&ctx, out); +} +EXPORT_SYMBOL_GPL(sha512); + +#ifdef sha512_mod_init_arch +static int __init sha512_mod_init(void) +{ + sha512_mod_init_arch(); + return 0; +} +subsys_initcall(sha512_mod_init); + +static void __exit sha512_mod_exit(void) +{ +} +module_exit(sha512_mod_exit); +#endif + +MODULE_DESCRIPTION("SHA-384 and SHA-512 library functions"); +MODULE_LICENSE("GPL"); -- 2.49.0 _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply related [flat|nested] 34+ messages in thread
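[Editor's note: the patch above wires architecture-optimized code in through the
"$(SRCARCH)/sha512.h" include, which must provide sha512_blocks(); otherwise
sha512_blocks_generic() is used.  The following is only an illustrative sketch of what
such an arch header could look like; the feature-check predicate and the assembly entry
point name are assumptions, not code from the later patches in this series.]

    /* Hypothetical lib/crypto/$(SRCARCH)/sha512.h */

    /* Assumed assembly routine with the same contract as sha512_blocks_generic(). */
    asmlinkage void sha512_blocks_asm(struct sha512_block_state *state,
                                      const u8 *data, size_t nblocks);

    static void sha512_blocks(struct sha512_block_state *state,
                              const u8 *data, size_t nblocks)
    {
            /*
             * cpu_has_sha512_insns() is an assumed placeholder; real arch
             * glue typically also checks whether SIMD may be used in the
             * current context before calling vectorized code.
             */
            if (cpu_has_sha512_insns())
                    sha512_blocks_asm(state, data, nblocks);
            else
                    sha512_blocks_generic(state, data, nblocks);
    }

An arch header may also define sha512_mod_init_arch() for one-time setup, which the
module init code above calls when present.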
* [PATCH 03/16] lib/crypto/sha512: add HMAC-SHA384 and HMAC-SHA512 support 2025-06-11 2:09 [PATCH 00/16] SHA-512 library functions Eric Biggers 2025-06-11 2:09 ` [PATCH 01/16] crypto: sha512 - rename conflicting symbols Eric Biggers 2025-06-11 2:09 ` [PATCH 02/16] lib/crypto/sha512: add support for SHA-384 and SHA-512 Eric Biggers @ 2025-06-11 2:09 ` Eric Biggers 2025-06-11 2:09 ` [PATCH 04/16] lib/crypto/sha512: add KUnit tests for SHA-384 and SHA-512 Eric Biggers ` (12 subsequent siblings) 15 siblings, 0 replies; 34+ messages in thread From: Eric Biggers @ 2025-06-11 2:09 UTC (permalink / raw) To: linux-crypto Cc: linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, Ard Biesheuvel, Jason A . Donenfeld , Linus Torvalds From: Eric Biggers <ebiggers@google.com> Since HMAC support is commonly needed and is fairly simple, include it as a first-class citizen of the SHA-512 library. The API supports both incremental and one-shot computation, and either preparing the key ahead of time or just using a raw key. The implementation is much more streamlined than crypto/hmac.c. Signed-off-by: Eric Biggers <ebiggers@google.com> --- include/crypto/sha2.h | 222 ++++++++++++++++++++++++++++++++++++++++++ lib/crypto/Kconfig | 5 +- lib/crypto/sha512.c | 144 ++++++++++++++++++++++++++- 3 files changed, 367 insertions(+), 4 deletions(-) diff --git a/include/crypto/sha2.h b/include/crypto/sha2.h index 9b679ab9a3230..c784bffa533b8 100644 --- a/include/crypto/sha2.h +++ b/include/crypto/sha2.h @@ -145,10 +145,26 @@ struct __sha512_ctx { u64 bytecount_hi; u8 buf[SHA512_BLOCK_SIZE]; }; void __sha512_update(struct __sha512_ctx *ctx, const u8 *data, size_t len); +/* + * HMAC key and message context structs, shared by HMAC-SHA384 and HMAC-SHA512. + * The hmac_sha384_* and hmac_sha512_* structs wrap this one so that the API has + * proper typing and doesn't allow mixing the functions arbitrarily. + */ +struct __hmac_sha512_key { + struct sha512_block_state istate; + struct sha512_block_state ostate; +}; +struct __hmac_sha512_ctx { + struct __sha512_ctx sha_ctx; + struct sha512_block_state ostate; +}; +void __hmac_sha512_init(struct __hmac_sha512_ctx *ctx, + const struct __hmac_sha512_key *key); + /** * struct sha384_ctx - Context for hashing a message with SHA-384 * @ctx: private */ struct sha384_ctx { @@ -200,10 +216,113 @@ void sha384_final(struct sha384_ctx *ctx, u8 out[SHA384_DIGEST_SIZE]); * * Context: Any context. */ void sha384(const u8 *data, size_t len, u8 out[SHA384_DIGEST_SIZE]); +/** + * struct hmac_sha384_key - Prepared key for HMAC-SHA384 + * @key: private + */ +struct hmac_sha384_key { + struct __hmac_sha512_key key; +}; + +/** + * struct hmac_sha384_ctx - Context for computing HMAC-SHA384 of a message + * @ctx: private + */ +struct hmac_sha384_ctx { + struct __hmac_sha512_ctx ctx; +}; + +/** + * hmac_sha384_preparekey() - Prepare a key for HMAC-SHA384 + * @key: (output) the key structure to initialize + * @raw_key: the raw HMAC-SHA384 key + * @raw_key_len: the key length in bytes. All key lengths are supported. + * + * Note: the caller is responsible for zeroizing both the struct hmac_sha384_key + * and the raw key once they are no longer needed. + * + * Context: Any context. 
+ */ +void hmac_sha384_preparekey(struct hmac_sha384_key *key, + const u8 *raw_key, size_t raw_key_len); + +/** + * hmac_sha384_init() - Initialize a HMAC-SHA384 context for a new message + * @ctx: (output) the HMAC context to initialize + * @key: the prepared HMAC key + * + * If you don't need incremental computation, consider hmac_sha384() instead. + * + * Context: Any context. + */ +static inline void hmac_sha384_init(struct hmac_sha384_ctx *ctx, + const struct hmac_sha384_key *key) +{ + __hmac_sha512_init(&ctx->ctx, &key->key); +} + +/** + * hmac_sha384_update() - Update a HMAC-SHA384 context with message data + * @ctx: the HMAC context to update; must have been initialized + * @data: the message data + * @data_len: the data length in bytes + * + * This can be called any number of times. + * + * Context: Any context. + */ +static inline void hmac_sha384_update(struct hmac_sha384_ctx *ctx, + const u8 *data, size_t data_len) +{ + __sha512_update(&ctx->ctx.sha_ctx, data, data_len); +} + +/** + * hmac_sha384_final() - Finish computing a HMAC-SHA384 value + * @ctx: the HMAC context to finalize; must have been initialized + * @out: (output) the resulting HMAC-SHA384 value + * + * After finishing, this zeroizes @ctx. So the caller does not need to do it. + * + * Context: Any context. + */ +void hmac_sha384_final(struct hmac_sha384_ctx *ctx, u8 out[SHA384_DIGEST_SIZE]); + +/** + * hmac_sha384() - Compute HMAC-SHA384 in one shot, using a prepared key + * @key: the prepared HMAC key + * @data: the message data + * @data_len: the data length in bytes + * @out: (output) the resulting HMAC-SHA384 value + * + * If you're using the key only once, consider using hmac_sha384_usingrawkey(). + * + * Context: Any context. + */ +void hmac_sha384(const struct hmac_sha384_key *key, + const u8 *data, size_t data_len, u8 out[SHA384_DIGEST_SIZE]); + +/** + * hmac_sha384_usingrawkey() - Compute HMAC-SHA384 in one shot, using a raw key + * @raw_key: the raw HMAC-SHA384 key + * @raw_key_len: the key length in bytes. All key lengths are supported. + * @data: the message data + * @data_len: the data length in bytes + * @out: (output) the resulting HMAC-SHA384 value + * + * If you're using the key multiple times, prefer to use + * hmac_sha384_preparekey() followed by multiple calls to hmac_sha384() instead. + * + * Context: Any context. + */ +void hmac_sha384_usingrawkey(const u8 *raw_key, size_t raw_key_len, + const u8 *data, size_t data_len, + u8 out[SHA384_DIGEST_SIZE]); + /** * struct sha512_ctx - Context for hashing a message with SHA-512 * @ctx: private */ struct sha512_ctx { @@ -255,6 +374,109 @@ void sha512_final(struct sha512_ctx *ctx, u8 out[SHA512_DIGEST_SIZE]); * * Context: Any context. */ void sha512(const u8 *data, size_t len, u8 out[SHA512_DIGEST_SIZE]); +/** + * struct hmac_sha512_key - Prepared key for HMAC-SHA512 + * @key: private + */ +struct hmac_sha512_key { + struct __hmac_sha512_key key; +}; + +/** + * struct hmac_sha512_ctx - Context for computing HMAC-SHA512 of a message + * @ctx: private + */ +struct hmac_sha512_ctx { + struct __hmac_sha512_ctx ctx; +}; + +/** + * hmac_sha512_preparekey() - Prepare a key for HMAC-SHA512 + * @key: (output) the key structure to initialize + * @raw_key: the raw HMAC-SHA512 key + * @raw_key_len: the key length in bytes. All key lengths are supported. + * + * Note: the caller is responsible for zeroizing both the struct hmac_sha512_key + * and the raw key once they are no longer needed. + * + * Context: Any context. 
+ */ +void hmac_sha512_preparekey(struct hmac_sha512_key *key, + const u8 *raw_key, size_t raw_key_len); + +/** + * hmac_sha512_init() - Initialize a HMAC-SHA512 context for a new message + * @ctx: (output) the HMAC context to initialize + * @key: the prepared HMAC key + * + * If you don't need incremental computation, consider hmac_sha512() instead. + * + * Context: Any context. + */ +static inline void hmac_sha512_init(struct hmac_sha512_ctx *ctx, + const struct hmac_sha512_key *key) +{ + __hmac_sha512_init(&ctx->ctx, &key->key); +} + +/** + * hmac_sha512_update() - Update a HMAC-SHA512 context with message data + * @ctx: the HMAC context to update; must have been initialized + * @data: the message data + * @data_len: the data length in bytes + * + * This can be called any number of times. + * + * Context: Any context. + */ +static inline void hmac_sha512_update(struct hmac_sha512_ctx *ctx, + const u8 *data, size_t data_len) +{ + __sha512_update(&ctx->ctx.sha_ctx, data, data_len); +} + +/** + * hmac_sha512_final() - Finish computing a HMAC-SHA512 value + * @ctx: the HMAC context to finalize; must have been initialized + * @out: (output) the resulting HMAC-SHA512 value + * + * After finishing, this zeroizes @ctx. So the caller does not need to do it. + * + * Context: Any context. + */ +void hmac_sha512_final(struct hmac_sha512_ctx *ctx, u8 out[SHA512_DIGEST_SIZE]); + +/** + * hmac_sha512() - Compute HMAC-SHA512 in one shot, using a prepared key + * @key: the prepared HMAC key + * @data: the message data + * @data_len: the data length in bytes + * @out: (output) the resulting HMAC-SHA512 value + * + * If you're using the key only once, consider using hmac_sha512_usingrawkey(). + * + * Context: Any context. + */ +void hmac_sha512(const struct hmac_sha512_key *key, + const u8 *data, size_t data_len, u8 out[SHA512_DIGEST_SIZE]); + +/** + * hmac_sha512_usingrawkey() - Compute HMAC-SHA512 in one shot, using a raw key + * @raw_key: the raw HMAC-SHA512 key + * @raw_key_len: the key length in bytes. All key lengths are supported. + * @data: the message data + * @data_len: the data length in bytes + * @out: (output) the resulting HMAC-SHA512 value + * + * If you're using the key multiple times, prefer to use + * hmac_sha512_preparekey() followed by multiple calls to hmac_sha512() instead. + * + * Context: Any context. + */ +void hmac_sha512_usingrawkey(const u8 *raw_key, size_t raw_key_len, + const u8 *data, size_t data_len, + u8 out[SHA512_DIGEST_SIZE]); + #endif /* _CRYPTO_SHA2_H */ diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig index adabf2640fdc2..2ef61c69ae709 100644 --- a/lib/crypto/Kconfig +++ b/lib/crypto/Kconfig @@ -168,12 +168,13 @@ config CRYPTO_LIB_SHA256_GENERIC enabled, this implementation serves the users of CRYPTO_LIB_SHA256. config CRYPTO_LIB_SHA512 tristate help - The SHA-384 and SHA-512 library functions. Select this if your module - uses any of these functions from <crypto/sha2.h>. + The SHA-384, SHA-512, HMAC-SHA384, and HMAC-SHA512 library functions. + Select this if your module uses any of these functions from + <crypto/sha2.h>. 
config CRYPTO_LIB_SHA512_ARCH bool depends on CRYPTO_LIB_SHA512 diff --git a/lib/crypto/sha512.c b/lib/crypto/sha512.c index 88452e21b66ee..ebcb8ff0b76a8 100644 --- a/lib/crypto/sha512.c +++ b/lib/crypto/sha512.c @@ -1,19 +1,21 @@ // SPDX-License-Identifier: GPL-2.0-or-later /* - * SHA-384 and SHA-512 library functions + * SHA-384, SHA-512, HMAC-SHA384, and HMAC-SHA512 library functions * * Copyright (c) Jean-Luc Cooke <jlcooke@certainkey.com> * Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk> * Copyright (c) 2003 Kyle McMartin <kyle@debian.org> * Copyright 2025 Google LLC */ +#include <crypto/hmac.h> #include <crypto/internal/sha2.h> #include <linux/kernel.h> #include <linux/module.h> #include <linux/overflow.h> +#include <linux/wordpart.h> static const u64 sha512_K[80] = { 0x428a2f98d728ae22ULL, 0x7137449123ef65cdULL, 0xb5c0fbcfec4d3b2fULL, 0xe9b5dba58189dbbcULL, 0x3956c25bf348b538ULL, 0x59f111f1b605d019ULL, 0x923f82a4af194f9bULL, 0xab1c5ed5da6d8118ULL, 0xd807aa98a3030242ULL, @@ -243,10 +245,148 @@ void sha512(const u8 *data, size_t len, u8 out[SHA512_DIGEST_SIZE]) sha512_update(&ctx, data, len); sha512_final(&ctx, out); } EXPORT_SYMBOL_GPL(sha512); +static void __hmac_sha512_preparekey(struct __hmac_sha512_key *key, + const u8 *raw_key, size_t raw_key_len, + const struct sha512_block_state *iv) +{ + union { + unsigned long w[SHA512_BLOCK_SIZE / sizeof(unsigned long)]; + u8 b[SHA512_BLOCK_SIZE]; + } derived_key = { 0 }; + + if (unlikely(raw_key_len > SHA512_BLOCK_SIZE)) { + if (iv == &sha384_iv) + sha384(raw_key, raw_key_len, derived_key.b); + else + sha512(raw_key, raw_key_len, derived_key.b); + } else { + memcpy(derived_key.b, raw_key, raw_key_len); + } + + for (size_t i = 0; i < ARRAY_SIZE(derived_key.w); i++) + derived_key.w[i] ^= REPEAT_BYTE(HMAC_IPAD_VALUE); + key->istate = *iv; + sha512_blocks(&key->istate, derived_key.b, 1); + + for (size_t i = 0; i < ARRAY_SIZE(derived_key.w); i++) + derived_key.w[i] ^= REPEAT_BYTE(HMAC_OPAD_VALUE ^ + HMAC_IPAD_VALUE); + key->ostate = *iv; + sha512_blocks(&key->ostate, derived_key.b, 1); + + memzero_explicit(&derived_key, sizeof(derived_key)); +} + +void hmac_sha384_preparekey(struct hmac_sha384_key *key, + const u8 *raw_key, size_t raw_key_len) +{ + __hmac_sha512_preparekey(&key->key, raw_key, raw_key_len, &sha384_iv); +} +EXPORT_SYMBOL_GPL(hmac_sha384_preparekey); + +void hmac_sha512_preparekey(struct hmac_sha512_key *key, + const u8 *raw_key, size_t raw_key_len) +{ + __hmac_sha512_preparekey(&key->key, raw_key, raw_key_len, &sha512_iv); +} +EXPORT_SYMBOL_GPL(hmac_sha512_preparekey); + +void __hmac_sha512_init(struct __hmac_sha512_ctx *ctx, + const struct __hmac_sha512_key *key) +{ + __sha512_init(&ctx->sha_ctx, &key->istate, SHA512_BLOCK_SIZE); + ctx->ostate = key->ostate; +} +EXPORT_SYMBOL_GPL(__hmac_sha512_init); + +static void __hmac_sha512_final(struct __hmac_sha512_ctx *ctx, + u8 *out, size_t digest_size) +{ + const u32 bitcount = 8 * (SHA512_BLOCK_SIZE + digest_size); + union { + u8 b[SHA512_BLOCK_SIZE]; + __be32 w[SHA512_BLOCK_SIZE / sizeof(__be32)]; + } block; + + __sha512_final(&ctx->sha_ctx, block.b, digest_size); + + memset(&block.b[digest_size], 0, SHA512_BLOCK_SIZE - digest_size); + block.b[digest_size] = 0x80; + block.w[ARRAY_SIZE(block.w) - 1] = cpu_to_be32(bitcount); + sha512_blocks(&ctx->ostate, block.b, 1); + for (size_t i = 0; i < digest_size; i += 8) + put_unaligned_be64(ctx->ostate.h[i / 8], out + i); + + memzero_explicit(ctx, sizeof(*ctx)); + memzero_explicit(&block, sizeof(block)); +} + +void 
hmac_sha384_final(struct hmac_sha384_ctx *ctx, + u8 out[SHA384_DIGEST_SIZE]) +{ + __hmac_sha512_final(&ctx->ctx, out, SHA384_DIGEST_SIZE); +} +EXPORT_SYMBOL_GPL(hmac_sha384_final); + +void hmac_sha512_final(struct hmac_sha512_ctx *ctx, + u8 out[SHA512_DIGEST_SIZE]) +{ + __hmac_sha512_final(&ctx->ctx, out, SHA512_DIGEST_SIZE); +} +EXPORT_SYMBOL_GPL(hmac_sha512_final); + +void hmac_sha384(const struct hmac_sha384_key *key, + const u8 *data, size_t data_len, u8 out[SHA384_DIGEST_SIZE]) +{ + struct hmac_sha384_ctx ctx; + + hmac_sha384_init(&ctx, key); + hmac_sha384_update(&ctx, data, data_len); + hmac_sha384_final(&ctx, out); +} +EXPORT_SYMBOL_GPL(hmac_sha384); + +void hmac_sha512(const struct hmac_sha512_key *key, + const u8 *data, size_t data_len, u8 out[SHA512_DIGEST_SIZE]) +{ + struct hmac_sha512_ctx ctx; + + hmac_sha512_init(&ctx, key); + hmac_sha512_update(&ctx, data, data_len); + hmac_sha512_final(&ctx, out); +} +EXPORT_SYMBOL_GPL(hmac_sha512); + +void hmac_sha384_usingrawkey(const u8 *raw_key, size_t raw_key_len, + const u8 *data, size_t data_len, + u8 out[SHA384_DIGEST_SIZE]) +{ + struct hmac_sha384_key key; + + hmac_sha384_preparekey(&key, raw_key, raw_key_len); + hmac_sha384(&key, data, data_len, out); + + memzero_explicit(&key, sizeof(key)); +} +EXPORT_SYMBOL_GPL(hmac_sha384_usingrawkey); + +void hmac_sha512_usingrawkey(const u8 *raw_key, size_t raw_key_len, + const u8 *data, size_t data_len, + u8 out[SHA512_DIGEST_SIZE]) +{ + struct hmac_sha512_key key; + + hmac_sha512_preparekey(&key, raw_key, raw_key_len); + hmac_sha512(&key, data, data_len, out); + + memzero_explicit(&key, sizeof(key)); +} +EXPORT_SYMBOL_GPL(hmac_sha512_usingrawkey); + #ifdef sha512_mod_init_arch static int __init sha512_mod_init(void) { sha512_mod_init_arch(); return 0; @@ -257,7 +397,7 @@ static void __exit sha512_mod_exit(void) { } module_exit(sha512_mod_exit); #endif -MODULE_DESCRIPTION("SHA-384 and SHA-512 library functions"); +MODULE_DESCRIPTION("SHA-384, SHA-512, HMAC-SHA384, and HMAC-SHA512 library functions"); MODULE_LICENSE("GPL"); -- 2.49.0 _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply related [flat|nested] 34+ messages in thread
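[Editor's note: to illustrate the two usage models described in the patch above (a key
prepared ahead of time vs. a raw key used once), here is a minimal sketch built on the
prototypes from this patch.  The caller-side names are hypothetical.]

    #include <crypto/sha2.h>
    #include <linux/string.h>

    /* One-shot HMAC-SHA512 with a raw key that is used only once. */
    static void example_hmac_oneshot(const u8 *raw_key, size_t raw_key_len,
                                     const u8 *data, size_t data_len,
                                     u8 mac[SHA512_DIGEST_SIZE])
    {
            hmac_sha512_usingrawkey(raw_key, raw_key_len, data, data_len, mac);
    }

    /* Prepare the key once, then reuse it for many messages. */
    static void example_hmac_prepared(const u8 *raw_key, size_t raw_key_len,
                                      const u8 *msgs[], const size_t lens[],
                                      size_t nmsgs,
                                      u8 (*macs)[SHA512_DIGEST_SIZE])
    {
            struct hmac_sha512_key key;

            hmac_sha512_preparekey(&key, raw_key, raw_key_len);
            for (size_t i = 0; i < nmsgs; i++)
                    hmac_sha512(&key, msgs[i], lens[i], macs[i]);

            /* Per the kerneldoc, the caller zeroizes the prepared key. */
            memzero_explicit(&key, sizeof(key));
    }

HMAC-SHA384 mirrors this with struct hmac_sha384_key and the hmac_sha384_*() functions,
and incremental computation is available via hmac_sha512_init/update/final.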
* [PATCH 04/16] lib/crypto/sha512: add KUnit tests for SHA-384 and SHA-512 2025-06-11 2:09 [PATCH 00/16] SHA-512 library functions Eric Biggers ` (2 preceding siblings ...) 2025-06-11 2:09 ` [PATCH 03/16] lib/crypto/sha512: add HMAC-SHA384 and HMAC-SHA512 support Eric Biggers @ 2025-06-11 2:09 ` Eric Biggers 2025-06-11 2:09 ` [PATCH 05/16] lib/crypto/sha256: add KUnit tests for SHA-224 and SHA-256 Eric Biggers ` (11 subsequent siblings) 15 siblings, 0 replies; 34+ messages in thread From: Eric Biggers @ 2025-06-11 2:09 UTC (permalink / raw) To: linux-crypto Cc: linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, Ard Biesheuvel, Jason A . Donenfeld , Linus Torvalds From: Eric Biggers <ebiggers@google.com> Add KUnit tests for the SHA-384 and SHA-512 library functions, including the corresponding HMAC support. Testing strategy: - Each SHA variant gets its own KUnit test suite, but a header is used to share most of the test code among the SHA variants. - Test against vectors generated by the Python hashlib and hmac modules. - Test incremental computation. - Test with a guard page to catch buffer overruns even in assembly code. - Test various overlap and alignment cases. - Compute hashes in task, softirq, and hardirq context in parallel, to verify that the functions work as expected in all contexts and that fallback code paths are exercised. - Test that the finalization functions zeroize their context. - Include benchmarks, guarded by a separate Kconfig option. Signed-off-by: Eric Biggers <ebiggers@google.com> --- lib/crypto/Kconfig | 2 + lib/crypto/Makefile | 2 + lib/crypto/tests/Kconfig | 16 + lib/crypto/tests/Makefile | 4 + lib/crypto/tests/hash-test-template.h | 512 ++++++++++++++++++++ lib/crypto/tests/sha384-testvecs.h | 566 ++++++++++++++++++++++ lib/crypto/tests/sha384_kunit.c | 48 ++ lib/crypto/tests/sha512-testvecs.h | 662 ++++++++++++++++++++++++++ lib/crypto/tests/sha512_kunit.c | 48 ++ scripts/crypto/gen-hash-testvecs.py | 83 ++++ 10 files changed, 1943 insertions(+) create mode 100644 lib/crypto/tests/Kconfig create mode 100644 lib/crypto/tests/Makefile create mode 100644 lib/crypto/tests/hash-test-template.h create mode 100644 lib/crypto/tests/sha384-testvecs.h create mode 100644 lib/crypto/tests/sha384_kunit.c create mode 100644 lib/crypto/tests/sha512-testvecs.h create mode 100644 lib/crypto/tests/sha512_kunit.c create mode 100755 scripts/crypto/gen-hash-testvecs.py diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig index 2ef61c69ae709..34b249ca3db23 100644 --- a/lib/crypto/Kconfig +++ b/lib/crypto/Kconfig @@ -1,9 +1,11 @@ # SPDX-License-Identifier: GPL-2.0 menu "Crypto library routines" +source "lib/crypto/tests/Kconfig" + config CRYPTO_LIB_UTILS tristate config CRYPTO_LIB_AES tristate diff --git a/lib/crypto/Makefile b/lib/crypto/Makefile index 7e8baa4590896..7df76ab5fe692 100644 --- a/lib/crypto/Makefile +++ b/lib/crypto/Makefile @@ -1,7 +1,9 @@ # SPDX-License-Identifier: GPL-2.0 +obj-y += tests/ + obj-$(CONFIG_CRYPTO_LIB_UTILS) += libcryptoutils.o libcryptoutils-y := memneq.o utils.o # chacha is used by the /dev/random driver which is always builtin obj-y += chacha.o diff --git a/lib/crypto/tests/Kconfig b/lib/crypto/tests/Kconfig new file mode 100644 index 0000000000000..90be320c25bd2 --- /dev/null +++ b/lib/crypto/tests/Kconfig @@ -0,0 +1,16 @@ +# SPDX-License-Identifier: GPL-2.0-only + +config CRYPTO_LIB_SHA512_KUNIT_TEST + tristate "KUnit tests for SHA-384 and SHA-512" if !KUNIT_ALL_TESTS + depends on KUNIT + default KUNIT_ALL_TESTS 
|| CRYPTO_SELFTESTS + select CRYPTO_LIB_SHA512 + help + KUnit tests for the SHA-384 and SHA-512 cryptographic hash functions + and their corresponding HMACs. + +config CRYPTO_LIB_BENCHMARK + bool "Include benchmarks in KUnit tests for cryptographic functions" + depends on CRYPTO_LIB_SHA512_KUNIT_TEST + help + Include benchmarks in the KUnit tests for cryptographic functions. diff --git a/lib/crypto/tests/Makefile b/lib/crypto/tests/Makefile new file mode 100644 index 0000000000000..3925dcb6513d8 --- /dev/null +++ b/lib/crypto/tests/Makefile @@ -0,0 +1,4 @@ +# SPDX-License-Identifier: GPL-2.0-only + +obj-$(CONFIG_CRYPTO_LIB_SHA512_KUNIT_TEST) += sha384_kunit.o +obj-$(CONFIG_CRYPTO_LIB_SHA512_KUNIT_TEST) += sha512_kunit.o diff --git a/lib/crypto/tests/hash-test-template.h b/lib/crypto/tests/hash-test-template.h new file mode 100644 index 0000000000000..3d5b838aa572a --- /dev/null +++ b/lib/crypto/tests/hash-test-template.h @@ -0,0 +1,512 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * Copyright 2025 Google LLC + */ +#include <kunit/test.h> +#include <linux/hrtimer.h> +#include <linux/vmalloc.h> +#include <linux/workqueue.h> + +/* test_buf is a guarded buffer, i.e. &test_buf[TEST_BUF_LEN] is not mapped. */ +#define TEST_BUF_LEN 16384 +static u8 *test_buf; + +static u8 *orig_test_buf; + +static u64 random_seed; + +/* + * This is a simple linear congruential generator. It is used only for testing, + * which does not require cryptographically secure random numbers. A hard-coded + * algorithm is used instead of <linux/prandom.h> so that it matches the + * algorithm used by the test vector generation script. This allows the input + * data in random test vectors to be concisely stored as just the seed. + */ +static u32 rand32(void) +{ + random_seed = (random_seed * 25214903917 + 11) & ((1ULL << 48) - 1); + return random_seed >> 16; +} + +static void rand_bytes(u8 *out, size_t len) +{ + for (size_t i = 0; i < len; i++) + out[i] = rand32(); +} + +static bool rand_bool(void) +{ + return rand32() % 2; +} + +/* Generate a random length, preferring small lengths. */ +static size_t rand_length(size_t max_len) +{ + size_t len; + + switch (rand32() % 3) { + case 0: + len = rand32() % 128; + break; + case 1: + len = rand32() % 3072; + break; + default: + len = rand32(); + break; + } + return len % (max_len + 1); +} + +static size_t rand_offset(size_t max_offset) +{ + return min(rand32() % 128, max_offset); +} + +static int hash_suite_init(struct kunit_suite *suite) +{ + /* + * Allocate the test buffer using vmalloc() with a page-aligned length + * so that it is immediately followed by a guard page. This allows + * buffer overreads to be detected, even in assembly code. + */ + size_t alloc_len = round_up(TEST_BUF_LEN, PAGE_SIZE); + + orig_test_buf = vmalloc(alloc_len); + if (!orig_test_buf) + return -ENOMEM; + + test_buf = orig_test_buf + alloc_len - TEST_BUF_LEN; + rand_bytes(test_buf, TEST_BUF_LEN); + return 0; +} + +static void hash_suite_exit(struct kunit_suite *suite) +{ + vfree(orig_test_buf); + orig_test_buf = NULL; +} + +static void rand_bytes_seeded_from_len(u8 *out, size_t len) +{ + random_seed = len; + rand_bytes(out, len); +} + +/* + * Test that the hash function produces the expected results from the test + * vectors. + * + * Note that it's only necessary to run each test vector in one way (e.g., + * one-shot instead of a chain of incremental updates), since consistency + * between different ways of using the APIs is verified by other test cases. 
+ */ +static void test_hash_test_vectors(struct kunit *test) +{ + for (size_t i = 0; i < ARRAY_SIZE(HASH_TESTVECS); i++) { + size_t data_len = HASH_TESTVECS[i].data_len; + u8 actual_digest[HASH_SIZE]; + + KUNIT_ASSERT_LE(test, data_len, TEST_BUF_LEN); + + rand_bytes_seeded_from_len(test_buf, data_len); + + HASH(test_buf, data_len, actual_digest); + KUNIT_ASSERT_MEMEQ_MSG( + test, actual_digest, HASH_TESTVECS[i].digest, HASH_SIZE, + "Wrong result with test vector %zu; data_len=%zu", i, + data_len); + } +} + +/* + * Test that the hash function produces the same result for a one-shot + * computation vs. an incremental computation. + */ +static void test_hash_incremental_updates(struct kunit *test) +{ + for (size_t i = 0; i < 1000; i++) { + size_t total_len, offset; + struct HASH_CTX ctx; + u8 hash1[HASH_SIZE]; + u8 hash2[HASH_SIZE]; + size_t num_parts = 0; + size_t remaining_len, cur_offset; + + total_len = rand_length(TEST_BUF_LEN); + offset = rand_offset(TEST_BUF_LEN - total_len); + + if (rand32() % 8 == 0) + /* Refresh the data occasionally. */ + rand_bytes(&test_buf[offset], total_len); + + /* Compute the hash value in one step. */ + HASH(&test_buf[offset], total_len, hash1); + + /* Compute the hash value incrementally. */ + HASH_INIT(&ctx); + remaining_len = total_len; + cur_offset = offset; + while (rand_bool()) { + size_t part_len = rand_length(remaining_len); + + HASH_UPDATE(&ctx, &test_buf[cur_offset], part_len); + num_parts++; + cur_offset += part_len; + remaining_len -= part_len; + } + if (remaining_len != 0 || rand_bool()) { + HASH_UPDATE(&ctx, &test_buf[cur_offset], remaining_len); + num_parts++; + } + HASH_FINAL(&ctx, hash2); + + /* Compare the two hash values. */ + KUNIT_ASSERT_MEMEQ_MSG( + test, hash1, hash2, HASH_SIZE, + "Incremental test failed with total_len=%zu num_parts=%zu offset=%zu\n", + total_len, num_parts, offset); + } +} + +/* + * Test that the hash function does not overrun any buffers. Uses a guard page + * to catch buffer overruns even if they occur in assembly code. + */ +static void test_hash_buffer_overruns(struct kunit *test) +{ + const size_t max_tested_len = TEST_BUF_LEN - sizeof(struct HASH_CTX); + void *const buf_end = &test_buf[TEST_BUF_LEN]; + struct HASH_CTX *guarded_ctx = buf_end - sizeof(*guarded_ctx); + + for (size_t i = 0; i < 100; i++) { + size_t len = rand_length(max_tested_len); + u8 hash[HASH_SIZE]; + struct HASH_CTX ctx; + + /* Check for overruns of the data buffer. */ + HASH(buf_end - len, len, hash); + HASH_INIT(&ctx); + HASH_UPDATE(&ctx, buf_end - len, len); + HASH_FINAL(&ctx, hash); + + /* Check for overruns of the hash value buffer. */ + HASH(test_buf, len, buf_end - HASH_SIZE); + HASH_INIT(&ctx); + HASH_UPDATE(&ctx, test_buf, len); + HASH_FINAL(&ctx, buf_end - HASH_SIZE); + + /* Check for overuns of the hash context. */ + HASH_INIT(guarded_ctx); + HASH_UPDATE(guarded_ctx, test_buf, len); + HASH_FINAL(guarded_ctx, hash); + } +} + +/* + * Test that the caller is permitted to alias the output digest and source data + * buffer, and also modify the source data buffer after it has been used. + */ +static void test_hash_overlaps(struct kunit *test) +{ + const size_t max_tested_len = TEST_BUF_LEN - HASH_SIZE; + u8 hash[HASH_SIZE]; + struct HASH_CTX ctx; + + for (size_t i = 0; i < 100; i++) { + size_t len = rand_length(max_tested_len); + size_t offset = HASH_SIZE + rand_offset(max_tested_len - len); + bool left_end = rand_bool(); + u8 *ovl_hash = left_end ? 
&test_buf[offset] : + &test_buf[offset + len - HASH_SIZE]; + + HASH(&test_buf[offset], len, hash); + HASH(&test_buf[offset], len, ovl_hash); + KUNIT_ASSERT_MEMEQ_MSG( + test, hash, ovl_hash, HASH_SIZE, + "Overlap test 1 failed with len=%zu offset=%zu left_end=%d\n", + len, offset, left_end); + + HASH(&test_buf[offset], len, hash); + HASH_INIT(&ctx); + HASH_UPDATE(&ctx, &test_buf[offset], len); + HASH_FINAL(&ctx, ovl_hash); + KUNIT_ASSERT_MEMEQ_MSG( + test, hash, ovl_hash, HASH_SIZE, + "Overlap test 2 failed with len=%zu offset=%zu left_end=%d\n", + len, offset, left_end); + + HASH(&test_buf[offset], len, hash); + HASH_INIT(&ctx); + HASH_UPDATE(&ctx, &test_buf[offset], len); + rand_bytes(&test_buf[offset], len); + HASH_FINAL(&ctx, ovl_hash); + KUNIT_ASSERT_MEMEQ_MSG( + test, hash, ovl_hash, HASH_SIZE, + "Overlap test 3 failed with len=%zu offset=%zu left_end=%d\n", + len, offset, left_end); + } +} + +/* + * Test that if the same data is hashed at different alignments in memory, the + * results are the same. + */ +static void test_hash_alignment_consistency(struct kunit *test) +{ + u8 hash1[128 + HASH_SIZE]; + u8 hash2[128 + HASH_SIZE]; + + for (size_t i = 0; i < 100; i++) { + size_t len = rand_length(TEST_BUF_LEN); + size_t data_offs1 = rand_offset(TEST_BUF_LEN - len); + size_t data_offs2 = rand_offset(TEST_BUF_LEN - len); + size_t hash_offs1 = rand_offset(128); + size_t hash_offs2 = rand_offset(128); + + rand_bytes(&test_buf[data_offs1], len); + HASH(&test_buf[data_offs1], len, &hash1[hash_offs1]); + memmove(&test_buf[data_offs2], &test_buf[data_offs1], len); + HASH(&test_buf[data_offs2], len, &hash2[hash_offs2]); + KUNIT_ASSERT_MEMEQ_MSG( + test, &hash1[hash_offs1], &hash2[hash_offs2], HASH_SIZE, + "Alignment consistency test failed with len=%zu data_offs=(%zu,%zu) hash_offs=(%zu,%zu)\n", + len, data_offs1, data_offs2, hash_offs1, hash_offs2); + } +} + +/* Test that HASH_FINAL zeroizes the context. */ +static void test_hash_ctx_zeroization(struct kunit *test) +{ + static const u8 zeroes[sizeof(struct HASH_CTX)]; + struct HASH_CTX ctx; + + HASH_INIT(&ctx); + HASH_UPDATE(&ctx, test_buf, 128); + HASH_FINAL(&ctx, test_buf); + KUNIT_EXPECT_MEMEQ_MSG(test, &ctx, zeroes, sizeof(ctx), + "Hash context was not zeroized by finalization"); +} + +#define IRQ_TEST_DATA_LEN 256 +#define IRQ_TEST_NUM_BUFFERS 3 +#define IRQ_TEST_HRTIMER_INTERVAL us_to_ktime(5) + +struct irq_test_state { + struct hrtimer timer; + struct work_struct bh_work; + u8 expected_hashes[IRQ_TEST_NUM_BUFFERS][HASH_SIZE]; + atomic_t seqno; + atomic_t timer_func_calls; + atomic_t bh_func_calls; + bool task_wrong_result; + bool timer_func_wrong_result; + bool bh_func_wrong_result; +}; + +/* + * Compute a hash of one of the test messages and check for the expected result + * that was saved earlier in @state->expected_hashes. To increase the chance of + * detecting problems, this cycles through multiple messages. + */ +static bool irq_test_step(struct irq_test_state *state) +{ + u32 i = (u32)atomic_inc_return(&state->seqno) % IRQ_TEST_NUM_BUFFERS; + u8 actual_hash[HASH_SIZE]; + + HASH(&test_buf[i * IRQ_TEST_DATA_LEN], IRQ_TEST_DATA_LEN, actual_hash); + return memcmp(actual_hash, state->expected_hashes[i], HASH_SIZE) == 0; +} + +/* + * This is the timer function run by the IRQ test. It is called in hardirq + * context. It computes a hash, checks for the correct result, then reschedules + * the timer and also the BH work. 
+ */ +static enum hrtimer_restart irq_test_timer_func(struct hrtimer *timer) +{ + struct irq_test_state *state = + container_of(timer, typeof(*state), timer); + + WARN_ON_ONCE(!in_hardirq()); + atomic_inc(&state->timer_func_calls); + if (!irq_test_step(state)) + state->timer_func_wrong_result = true; + + hrtimer_forward_now(&state->timer, IRQ_TEST_HRTIMER_INTERVAL); + queue_work(system_bh_wq, &state->bh_work); + return HRTIMER_RESTART; +} + +/* Compute a hash in softirq context and check for the expected result. */ +static void irq_test_bh_func(struct work_struct *work) +{ + struct irq_test_state *state = + container_of(work, typeof(*state), bh_work); + + WARN_ON_ONCE(!in_serving_softirq()); + atomic_inc(&state->bh_func_calls); + if (!irq_test_step(state)) + state->bh_func_wrong_result = true; +} + +/* + * Test that if hashes are computed in parallel in task, softirq, and hardirq + * context, then all results are as expected. + * + * The primary purpose of this test is to verify the correctness of fallback + * code paths that runs in contexts where the normal code path cannot be used, + * e.g. !may_use_simd(). These code paths are not covered by any of the other + * tests, which are executed by the KUnit test runner thread in task context. + * + * In addition, this test may detect issues with the architecture's + * irq_fpu_usable() and kernel_fpu_begin/end() or equivalent functions. + */ +static void test_hash_interrupt_context(struct kunit *test) +{ + struct irq_test_state state = {}; + size_t i; + unsigned long end_jiffies; + + /* Prepare some test messages and compute the expected hash of each. */ + rand_bytes(test_buf, IRQ_TEST_NUM_BUFFERS * IRQ_TEST_DATA_LEN); + for (i = 0; i < IRQ_TEST_NUM_BUFFERS; i++) + HASH(&test_buf[i * IRQ_TEST_DATA_LEN], IRQ_TEST_DATA_LEN, + state.expected_hashes[i]); + + /* + * Set up a hrtimer (the way we access hardirq context) and a work + * struct for the BH workqueue (the way we access softirq context). + */ + hrtimer_setup_on_stack(&state.timer, irq_test_timer_func, + CLOCK_MONOTONIC, HRTIMER_MODE_REL); + INIT_WORK(&state.bh_work, irq_test_bh_func); + + /* Run for up to 100000 hashes or 1 second, whichever comes first. */ + end_jiffies = jiffies + HZ; + hrtimer_start(&state.timer, IRQ_TEST_HRTIMER_INTERVAL, + HRTIMER_MODE_REL); + for (i = 0; i < 100000 && !time_after(jiffies, end_jiffies); i++) { + if (!irq_test_step(&state)) + state.task_wrong_result = true; + } + + /* Cancel the timer and work. */ + hrtimer_cancel(&state.timer); + flush_work(&state.bh_work); + + /* Sanity check: the timer and BH functions should have been run. */ + KUNIT_EXPECT_GT_MSG(test, atomic_read(&state.timer_func_calls), 0, + "IRQ test failed; timer function was not called"); + KUNIT_EXPECT_GT_MSG(test, atomic_read(&state.bh_func_calls), 0, + "IRQ test failed; BH work function was not called"); + + /* Check the results. */ + KUNIT_EXPECT_FALSE_MSG(test, state.task_wrong_result, + "IRQ test failed; wrong result in task context"); + KUNIT_EXPECT_FALSE_MSG( + test, state.timer_func_wrong_result, + "IRQ test failed; wrong result in timer function (hardirq context)"); + KUNIT_EXPECT_FALSE_MSG( + test, state.bh_func_wrong_result, + "IRQ test failed; wrong result in BH work function (softirq context)"); +} + +#ifdef HMAC +/* + * Test the corresponding HMAC variant. This is a bit less thorough than the + * tests for the hash function, since HMAC is just a small C wrapper around the + * unkeyed hash function. 
+ */ +static void test_hmac(struct kunit *test) +{ + u8 *raw_key = kunit_kmalloc(test, TEST_BUF_LEN, GFP_KERNEL); + static const u8 zeroes[sizeof(struct HMAC_CTX)]; + + KUNIT_ASSERT_NOT_NULL(test, raw_key); + + for (size_t i = 0; i < ARRAY_SIZE(HMAC_TESTVECS); i++) { + size_t data_len = HMAC_TESTVECS[i].data_len; + size_t key_len = HMAC_TESTVECS[i].key_len; + struct HMAC_CTX ctx; + struct HMAC_KEY key; + u8 actual_mac[HASH_SIZE]; + + KUNIT_ASSERT_LE(test, data_len, TEST_BUF_LEN); + KUNIT_ASSERT_LE(test, key_len, TEST_BUF_LEN); + + rand_bytes_seeded_from_len(test_buf, data_len); + rand_bytes_seeded_from_len(raw_key, key_len); + + HMAC_USINGRAWKEY(raw_key, key_len, test_buf, data_len, + actual_mac); + KUNIT_ASSERT_MEMEQ_MSG( + test, actual_mac, HMAC_TESTVECS[i].mac, HASH_SIZE, + "Wrong result with HMAC test vector %zu using raw key; data_len=%zu key_len=%zu", + i, data_len, key_len); + + memset(actual_mac, 0xff, HASH_SIZE); + HMAC_SETKEY(&key, raw_key, key_len); + HMAC(&key, test_buf, data_len, actual_mac); + KUNIT_ASSERT_MEMEQ_MSG( + test, actual_mac, HMAC_TESTVECS[i].mac, HASH_SIZE, + "Wrong result with HMAC test vector %zu using key struct; data_len=%zu key_len=%zu", + i, data_len, key_len); + + memset(actual_mac, 0xff, HASH_SIZE); + HMAC_INIT(&ctx, &key); + HMAC_UPDATE(&ctx, test_buf, data_len); + HMAC_FINAL(&ctx, actual_mac); + KUNIT_ASSERT_MEMEQ_MSG( + test, actual_mac, HMAC_TESTVECS[i].mac, HASH_SIZE, + "Wrong result with HMAC test vector %zu on init+update+final; data_len=%zu key_len=%zu", + i, data_len, key_len); + KUNIT_ASSERT_MEMEQ_MSG( + test, &ctx, zeroes, sizeof(ctx), + "HMAC context was not zeroized by finalization"); + + memset(actual_mac, 0xff, HASH_SIZE); + HMAC_INIT(&ctx, &key); + HMAC_UPDATE(&ctx, test_buf, data_len / 2); + HMAC_UPDATE(&ctx, &test_buf[data_len / 2], (data_len + 1) / 2); + HMAC_FINAL(&ctx, actual_mac); + KUNIT_ASSERT_MEMEQ_MSG( + test, actual_mac, HMAC_TESTVECS[i].mac, HASH_SIZE, + "Wrong result with HMAC test vector %zu on init+update+update+final; data_len=%zu key_len=%zu", + i, data_len, key_len); + } +} +#endif /* HMAC */ + +/* Benchmark the hash function on various data lengths. 
*/ +static void benchmark_hash(struct kunit *test) +{ + static const size_t lens_to_test[] = { + 1, 16, 64, 127, 128, 200, 256, + 511, 512, 1024, 3173, 4096, 16384, + }; + size_t len, i, j, num_iters; + u8 hash[HASH_SIZE]; + u64 t; + + if (!IS_ENABLED(CONFIG_CRYPTO_LIB_BENCHMARK)) + kunit_skip(test, "not enabled"); + + /* warm-up */ + for (i = 0; i < 10000000; i += TEST_BUF_LEN) + HASH(test_buf, TEST_BUF_LEN, hash); + + for (i = 0; i < ARRAY_SIZE(lens_to_test); i++) { + len = lens_to_test[i]; + KUNIT_ASSERT_LE(test, len, TEST_BUF_LEN); + num_iters = 10000000 / (len + 128); + preempt_disable(); + t = ktime_get_ns(); + for (j = 0; j < num_iters; j++) + HASH(test_buf, len, hash); + t = ktime_get_ns() - t; + preempt_enable(); + kunit_info(test, "len=%zu: %llu MB/s\n", len, + div64_u64((u64)len * num_iters * 1000, t)); + } +} diff --git a/lib/crypto/tests/sha384-testvecs.h b/lib/crypto/tests/sha384-testvecs.h new file mode 100644 index 0000000000000..7ea330dfe54f3 --- /dev/null +++ b/lib/crypto/tests/sha384-testvecs.h @@ -0,0 +1,566 @@ +/* This file was generated by: ./scripts/crypto/gen-hash-testvecs.py sha384 */ + +static const struct { + size_t data_len; + u8 digest[SHA384_DIGEST_SIZE]; +} sha384_testvecs[] = { + { + .data_len = 0, + .digest = { + 0x38, 0xb0, 0x60, 0xa7, 0x51, 0xac, 0x96, 0x38, + 0x4c, 0xd9, 0x32, 0x7e, 0xb1, 0xb1, 0xe3, 0x6a, + 0x21, 0xfd, 0xb7, 0x11, 0x14, 0xbe, 0x07, 0x43, + 0x4c, 0x0c, 0xc7, 0xbf, 0x63, 0xf6, 0xe1, 0xda, + 0x27, 0x4e, 0xde, 0xbf, 0xe7, 0x6f, 0x65, 0xfb, + 0xd5, 0x1a, 0xd2, 0xf1, 0x48, 0x98, 0xb9, 0x5b, + }, + }, + { + .data_len = 1, + .digest = { + 0x07, 0x34, 0x9d, 0x74, 0x48, 0x76, 0xa5, 0x72, + 0x78, 0x02, 0xb8, 0x6e, 0x21, 0x59, 0xb0, 0x75, + 0x09, 0x68, 0x11, 0x39, 0x53, 0x61, 0xee, 0x8d, + 0xf2, 0x01, 0xf3, 0x90, 0x53, 0x7c, 0xd3, 0xde, + 0x13, 0x9f, 0xd2, 0x74, 0x28, 0xfe, 0xe1, 0xc8, + 0x2e, 0x95, 0xc6, 0x7d, 0x69, 0x4d, 0x04, 0xc6, + }, + }, + { + .data_len = 2, + .digest = { + 0xc4, 0xef, 0x6e, 0x8c, 0x19, 0x1c, 0xaa, 0x0e, + 0x86, 0xf2, 0x68, 0xa1, 0xa0, 0x2d, 0x2e, 0xb2, + 0x84, 0xbc, 0x5d, 0x53, 0x31, 0xf8, 0x03, 0x75, + 0x56, 0xf4, 0x8b, 0x23, 0x1a, 0x68, 0x15, 0x9a, + 0x60, 0xb2, 0xec, 0x05, 0xe1, 0xd4, 0x5e, 0x9e, + 0xe8, 0x7c, 0x9d, 0xe4, 0x0f, 0x9c, 0x3a, 0xdd, + }, + }, + { + .data_len = 3, + .digest = { + 0x29, 0xd2, 0x02, 0xa2, 0x77, 0x24, 0xc7, 0xa7, + 0x23, 0x0c, 0x3e, 0x30, 0x56, 0x47, 0xdb, 0x75, + 0xd4, 0x41, 0xf8, 0xb3, 0x8e, 0x26, 0xf6, 0x92, + 0xbc, 0x20, 0x2e, 0x96, 0xcc, 0x81, 0x5f, 0x32, + 0x82, 0x60, 0xe2, 0xcf, 0x23, 0xd7, 0x3c, 0x90, + 0xb2, 0x56, 0x8f, 0xb6, 0x0f, 0xf0, 0x6b, 0x80, + }, + }, + { + .data_len = 16, + .digest = { + 0x21, 0x4c, 0xac, 0xfe, 0xbd, 0x40, 0x74, 0x1f, + 0xa2, 0x2d, 0x2f, 0x35, 0x91, 0xfd, 0xc9, 0x97, + 0x88, 0x12, 0x6c, 0x0c, 0x6e, 0xd8, 0x50, 0x0b, + 0x4b, 0x2c, 0x89, 0xa6, 0xa6, 0x4a, 0xad, 0xd7, + 0x72, 0x62, 0x2c, 0x62, 0x81, 0xcd, 0x24, 0x74, + 0xf5, 0x44, 0x05, 0xa0, 0x97, 0xea, 0xf1, 0x78, + }, + }, + { + .data_len = 32, + .digest = { + 0x06, 0x8b, 0x92, 0x9f, 0x8b, 0x64, 0xb2, 0x80, + 0xde, 0xcc, 0xde, 0xc3, 0x2f, 0x22, 0x27, 0xe8, + 0x3b, 0x6e, 0x16, 0x21, 0x14, 0x81, 0xbe, 0x5b, + 0xa7, 0xa7, 0x14, 0x8a, 0x00, 0x8f, 0x0d, 0x38, + 0x11, 0x63, 0xe8, 0x3e, 0xb9, 0xf1, 0xcf, 0x87, + 0xb1, 0x28, 0xe5, 0xa1, 0x89, 0xa8, 0x7a, 0xde, + }, + }, + { + .data_len = 48, + .digest = { + 0x9e, 0x37, 0x76, 0x62, 0x98, 0x39, 0xbe, 0xfd, + 0x2b, 0x91, 0x20, 0x54, 0x8f, 0x21, 0xe7, 0x30, + 0x0a, 0x01, 0x7a, 0x65, 0x0b, 0xc9, 0xb3, 0x89, + 0x3c, 0xb6, 0xd3, 0xa8, 0xff, 0xc9, 0x1b, 0x5c, + 0xd4, 0xac, 
0xb4, 0x7e, 0xba, 0x94, 0xc3, 0x8a, + 0x26, 0x41, 0xf6, 0xd5, 0xed, 0x6f, 0x27, 0xa7, + }, + }, + { + .data_len = 49, + .digest = { + 0x03, 0x1f, 0xef, 0x5a, 0x16, 0x28, 0x78, 0x10, + 0x29, 0xe8, 0xe2, 0xe4, 0x84, 0x36, 0x19, 0x10, + 0xaa, 0xea, 0xde, 0x06, 0x39, 0x5f, 0xb2, 0x36, + 0xca, 0x24, 0x4f, 0x7b, 0x66, 0xf7, 0xe7, 0x31, + 0xf3, 0x9b, 0x74, 0x1e, 0x17, 0x20, 0x88, 0x62, + 0x50, 0xeb, 0x5f, 0x9a, 0xa7, 0x2c, 0xf4, 0xc9, + }, + }, + { + .data_len = 63, + .digest = { + 0x10, 0xce, 0xed, 0x26, 0xb8, 0xac, 0xc1, 0x1b, + 0xe6, 0xb9, 0xeb, 0x7c, 0xae, 0xcd, 0x55, 0x5a, + 0x20, 0x2a, 0x7b, 0x43, 0xe6, 0x3e, 0xf0, 0x3f, + 0xd9, 0x2f, 0x8c, 0x52, 0xe2, 0xf0, 0xb6, 0x24, + 0x2e, 0xa4, 0xac, 0x24, 0x3a, 0x54, 0x99, 0x71, + 0x65, 0xab, 0x97, 0x2d, 0xb6, 0xe6, 0x94, 0x20, + }, + }, + { + .data_len = 64, + .digest = { + 0x24, 0x6d, 0x9f, 0x59, 0x42, 0x36, 0xca, 0x34, + 0x36, 0x41, 0xa2, 0xcd, 0x69, 0xdf, 0x3d, 0xcb, + 0x64, 0x94, 0x54, 0xb2, 0xed, 0xc1, 0x1c, 0x31, + 0xe3, 0x26, 0xcb, 0x71, 0xe6, 0x98, 0xb2, 0x56, + 0x74, 0x30, 0xa9, 0x15, 0x98, 0x9d, 0xb3, 0x07, + 0xcc, 0xa8, 0xcc, 0x6f, 0x42, 0xb0, 0x9d, 0x2b, + }, + }, + { + .data_len = 65, + .digest = { + 0x85, 0x1f, 0xbc, 0x5e, 0x2a, 0x00, 0x7d, 0xc2, + 0x21, 0x4c, 0x28, 0x14, 0xc5, 0xd8, 0x0c, 0xe8, + 0x55, 0xa5, 0xa0, 0x77, 0xda, 0x8f, 0xce, 0xd4, + 0xf0, 0xcb, 0x30, 0xb8, 0x9c, 0x47, 0xe1, 0x33, + 0x92, 0x18, 0xc5, 0x1f, 0xf2, 0xef, 0xb5, 0xe5, + 0xbc, 0x63, 0xa6, 0xe5, 0x9a, 0xc9, 0xcc, 0xf1, + }, + }, + { + .data_len = 127, + .digest = { + 0x26, 0xd2, 0x4c, 0xb6, 0xce, 0xd8, 0x22, 0x2b, + 0x44, 0x10, 0x6f, 0x59, 0xf7, 0x0d, 0xb9, 0x3f, + 0x7d, 0x29, 0x75, 0xf1, 0x71, 0xb2, 0x71, 0x23, + 0xef, 0x68, 0xb7, 0x25, 0xae, 0xb8, 0x45, 0xf8, + 0xa3, 0xb2, 0x2d, 0x7a, 0x83, 0x0a, 0x05, 0x61, + 0xbc, 0x73, 0xf1, 0xf9, 0xba, 0xfb, 0x3d, 0xc2, + }, + }, + { + .data_len = 128, + .digest = { + 0x7c, 0xe5, 0x7f, 0x5e, 0xea, 0xd9, 0x7e, 0x54, + 0x14, 0x30, 0x6f, 0x37, 0x02, 0x71, 0x0f, 0xf1, + 0x14, 0x16, 0xfa, 0xeb, 0x6e, 0x1e, 0xf0, 0xbe, + 0x10, 0xed, 0x01, 0xbf, 0xa0, 0x9d, 0xcb, 0x07, + 0x5f, 0x8b, 0x7f, 0x44, 0xe1, 0xd9, 0x13, 0xf0, + 0x29, 0xa2, 0x54, 0x32, 0xd9, 0xb0, 0x69, 0x69, + }, + }, + { + .data_len = 129, + .digest = { + 0xc5, 0x54, 0x1f, 0xcb, 0x9d, 0x8f, 0xdf, 0xbf, + 0xab, 0x55, 0x92, 0x1d, 0x3b, 0x93, 0x79, 0x26, + 0xdf, 0xba, 0x9a, 0x28, 0xff, 0xa0, 0x6c, 0xae, + 0x7b, 0x53, 0x8d, 0xfa, 0xef, 0x35, 0x88, 0x19, + 0x16, 0xb8, 0x72, 0x86, 0x76, 0x2a, 0xf5, 0xe6, + 0xec, 0xb2, 0xd7, 0xd4, 0xbe, 0x1a, 0xe4, 0x9f, + }, + }, + { + .data_len = 256, + .digest = { + 0x74, 0x9d, 0x77, 0xfb, 0xe8, 0x0f, 0x0c, 0x2d, + 0x86, 0x0d, 0x49, 0xea, 0x2b, 0xd0, 0x13, 0xd1, + 0xe8, 0xb8, 0xe1, 0xa3, 0x7b, 0x48, 0xab, 0x6a, + 0x21, 0x2b, 0x4c, 0x48, 0x32, 0xb5, 0xdc, 0x31, + 0x7f, 0xd0, 0x32, 0x67, 0x9a, 0xc0, 0x85, 0x53, + 0xef, 0xe9, 0xfb, 0xe1, 0x8b, 0xd8, 0xcc, 0xc2, + }, + }, + { + .data_len = 511, + .digest = { + 0x7b, 0xa9, 0xde, 0xa3, 0x07, 0x5c, 0x4c, 0xaa, + 0x31, 0xc6, 0x9e, 0x55, 0xd4, 0x3f, 0x52, 0xdd, + 0xde, 0x36, 0x70, 0x96, 0x59, 0x6e, 0x90, 0x78, + 0x4c, 0x6a, 0x27, 0xde, 0x83, 0x84, 0xc3, 0x35, + 0x53, 0x76, 0x1d, 0xbf, 0x83, 0x64, 0xcf, 0xf2, + 0xb0, 0x3e, 0x07, 0x27, 0xe4, 0x25, 0x6c, 0x56, + }, + }, + { + .data_len = 513, + .digest = { + 0x53, 0x50, 0xf7, 0x3b, 0x86, 0x1d, 0x7a, 0xe2, + 0x5d, 0x9b, 0x71, 0xfa, 0x25, 0x23, 0x5a, 0xfe, + 0x8c, 0xb9, 0xac, 0x8a, 0x9d, 0x6c, 0x99, 0xbc, + 0x01, 0x9e, 0xa0, 0xd6, 0x3c, 0x03, 0x46, 0x21, + 0xb6, 0xd0, 0xb0, 0xb3, 0x23, 0x23, 0x58, 0xf1, + 0xea, 0x4e, 0xf2, 0x1a, 0x2f, 0x14, 0x2b, 0x5a, + 
}, + }, + { + .data_len = 1000, + .digest = { + 0x06, 0x03, 0xb3, 0xba, 0x14, 0xe0, 0x28, 0x07, + 0xd5, 0x15, 0x97, 0x1f, 0x87, 0xef, 0x80, 0xba, + 0x48, 0x03, 0xb6, 0xc5, 0x47, 0xca, 0x8c, 0x95, + 0xed, 0x95, 0xfd, 0x27, 0xb6, 0x83, 0xda, 0x6d, + 0xa7, 0xb2, 0x1a, 0xd2, 0xb5, 0x89, 0xbb, 0xb4, + 0x00, 0xbc, 0x86, 0x54, 0x7d, 0x5a, 0x91, 0x63, + }, + }, + { + .data_len = 3333, + .digest = { + 0xd3, 0xe0, 0x6e, 0x7d, 0x80, 0x08, 0x53, 0x07, + 0x8c, 0x0f, 0xc2, 0xce, 0x9f, 0x09, 0x86, 0x31, + 0x28, 0x24, 0x3c, 0x3e, 0x2d, 0x36, 0xb4, 0x28, + 0xc7, 0x1b, 0x70, 0xf9, 0x35, 0x9b, 0x10, 0xfa, + 0xc8, 0x5e, 0x2b, 0x32, 0x7f, 0x65, 0xd2, 0x68, + 0xb2, 0x84, 0x90, 0xf6, 0xc8, 0x6e, 0xb8, 0xdb, + }, + }, + { + .data_len = 4096, + .digest = { + 0x39, 0xeb, 0xc4, 0xb3, 0x08, 0xe2, 0xdd, 0xf3, + 0x9f, 0x5e, 0x44, 0x93, 0x63, 0x8b, 0x39, 0x57, + 0xd7, 0xe8, 0x7e, 0x3d, 0x74, 0xf8, 0xf6, 0xab, + 0xfe, 0x74, 0x51, 0xe4, 0x1b, 0x4a, 0x23, 0xbc, + 0x69, 0xfc, 0xbb, 0xa7, 0x71, 0xa7, 0x86, 0x24, + 0xcc, 0x85, 0x70, 0xf2, 0x31, 0x0d, 0x47, 0xc0, + }, + }, + { + .data_len = 4128, + .digest = { + 0x23, 0xc3, 0x97, 0x06, 0x79, 0xbe, 0x8a, 0xe9, + 0x1f, 0x1a, 0x43, 0xad, 0xe6, 0x76, 0x23, 0x13, + 0x64, 0xae, 0xda, 0xe7, 0x8b, 0x88, 0x96, 0xb6, + 0xa9, 0x1a, 0xb7, 0x80, 0x8e, 0x1c, 0x94, 0x98, + 0x09, 0x08, 0xdb, 0x8e, 0x4d, 0x0a, 0x09, 0x65, + 0xe5, 0x21, 0x1c, 0xd9, 0xab, 0x64, 0xbb, 0xea, + }, + }, + { + .data_len = 4160, + .digest = { + 0x4f, 0x4a, 0x88, 0x9f, 0x40, 0x89, 0xfe, 0xb6, + 0xda, 0x9d, 0xcd, 0xa5, 0x27, 0xd2, 0x29, 0x71, + 0x58, 0x60, 0xd4, 0x55, 0xfe, 0x92, 0xcd, 0x51, + 0x8b, 0xec, 0x3b, 0xd3, 0xd1, 0x3e, 0x8d, 0x36, + 0x7b, 0xb1, 0x41, 0xef, 0xec, 0x9d, 0xdf, 0xcd, + 0x4e, 0xde, 0x5a, 0xe5, 0xe5, 0x16, 0x14, 0x54, + }, + }, + { + .data_len = 4224, + .digest = { + 0xb5, 0xa5, 0x3e, 0x86, 0x39, 0x20, 0x49, 0x4c, + 0xcd, 0xb6, 0xdd, 0x03, 0xfe, 0x36, 0x6e, 0xa6, + 0xfc, 0xff, 0x19, 0x33, 0x0c, 0x52, 0xea, 0x37, + 0x94, 0xda, 0x5b, 0x27, 0xd1, 0x99, 0x5a, 0x89, + 0x40, 0x78, 0xfa, 0x96, 0xb9, 0x2f, 0xa0, 0x48, + 0xc9, 0xf8, 0x5c, 0xf0, 0x95, 0xf4, 0xea, 0x61, + }, + }, + { + .data_len = 16384, + .digest = { + 0x6f, 0x48, 0x6f, 0x21, 0xb9, 0xc1, 0xcc, 0x92, + 0x4e, 0xed, 0x6b, 0xef, 0x51, 0x88, 0xdf, 0xfd, + 0xcb, 0x3d, 0x44, 0x9c, 0x37, 0x85, 0xb4, 0xc5, + 0xeb, 0x60, 0x55, 0x58, 0x01, 0x47, 0xbf, 0x75, + 0x9b, 0xa8, 0x82, 0x8c, 0xec, 0xe8, 0x0e, 0x58, + 0xc1, 0x26, 0xa2, 0x45, 0x87, 0x3e, 0xfb, 0x8d, + }, + }, +}; + +static const struct { + size_t data_len; + size_t key_len; + u8 mac[SHA384_DIGEST_SIZE]; +} hmac_sha384_testvecs[] = { + { + .data_len = 0, + .key_len = 0, + .mac = { + 0x6c, 0x1f, 0x2e, 0xe9, 0x38, 0xfa, 0xd2, 0xe2, + 0x4b, 0xd9, 0x12, 0x98, 0x47, 0x43, 0x82, 0xca, + 0x21, 0x8c, 0x75, 0xdb, 0x3d, 0x83, 0xe1, 0x14, + 0xb3, 0xd4, 0x36, 0x77, 0x76, 0xd1, 0x4d, 0x35, + 0x51, 0x28, 0x9e, 0x75, 0xe8, 0x20, 0x9c, 0xd4, + 0xb7, 0x92, 0x30, 0x28, 0x40, 0x23, 0x4a, 0xdc, + }, + }, + { + .data_len = 1, + .key_len = 1, + .mac = { + 0xe5, 0x20, 0x5e, 0xd4, 0x0a, 0xd8, 0x37, 0xff, + 0xf9, 0x0e, 0x2b, 0xf2, 0xca, 0x15, 0x65, 0xec, + 0xb3, 0xfb, 0x14, 0xa1, 0xc3, 0xdc, 0x9e, 0xa0, + 0x96, 0x99, 0xeb, 0x18, 0x24, 0x90, 0x42, 0xef, + 0x63, 0x3c, 0x38, 0xd3, 0x19, 0x6b, 0x7f, 0xef, + 0x07, 0xb3, 0xb8, 0xb0, 0x43, 0x5f, 0xef, 0xed, + }, + }, + { + .data_len = 2, + .key_len = 31, + .mac = { + 0x47, 0x08, 0x60, 0x37, 0xe9, 0x47, 0x0d, 0x56, + 0xa9, 0x81, 0xa1, 0xdf, 0x05, 0x1e, 0x41, 0x4e, + 0x7f, 0xf9, 0x51, 0xb3, 0x47, 0x7e, 0x04, 0x4f, + 0x0a, 0x05, 0x13, 0x6e, 0xd8, 0x4e, 0x6d, 0x98, + 0x91, 0x89, 
0xe1, 0xdc, 0x7f, 0x23, 0x03, 0x2e, + 0x47, 0x9e, 0x7c, 0xe0, 0x68, 0x08, 0xd2, 0x57, + }, + }, + { + .data_len = 3, + .key_len = 32, + .mac = { + 0xb9, 0x83, 0xdc, 0x7c, 0xb2, 0x48, 0x04, 0xc7, + 0xdf, 0x9f, 0x8b, 0xbe, 0x17, 0x80, 0xc5, 0x13, + 0x24, 0x10, 0x1c, 0xf1, 0x38, 0x75, 0x87, 0xe6, + 0x3a, 0x2e, 0xa2, 0xed, 0x33, 0xdb, 0xfc, 0xd7, + 0x0c, 0x8e, 0x89, 0x92, 0x14, 0x19, 0xef, 0x43, + 0xef, 0x6b, 0xc7, 0xc5, 0x4d, 0x3f, 0xa4, 0x41, + }, + }, + { + .data_len = 16, + .key_len = 33, + .mac = { + 0xee, 0x3b, 0x6b, 0x7f, 0xec, 0xe1, 0xc4, 0x8f, + 0x01, 0x91, 0xc9, 0x1a, 0x18, 0xb3, 0x0f, 0x34, + 0xad, 0xe5, 0x1f, 0x51, 0x9a, 0x0b, 0xec, 0xa3, + 0xc1, 0x0e, 0xf7, 0x7e, 0xb5, 0xd5, 0xe6, 0x22, + 0x44, 0x23, 0x85, 0x2c, 0xe0, 0xb6, 0x81, 0xac, + 0x7b, 0x41, 0x49, 0x18, 0x92, 0x0a, 0xd5, 0xc1, + }, + }, + { + .data_len = 32, + .key_len = 64, + .mac = { + 0x4d, 0x1f, 0xd0, 0x53, 0x5e, 0x04, 0x2b, 0xd6, + 0xfd, 0xd6, 0xa2, 0xed, 0xa2, 0x49, 0x36, 0xbf, + 0x44, 0x8e, 0x42, 0x1c, 0x8a, 0xde, 0x44, 0xbb, + 0x43, 0x84, 0xec, 0x70, 0xbb, 0x2d, 0xd3, 0x66, + 0x51, 0xc0, 0xed, 0xd1, 0xcd, 0x5b, 0x11, 0xf2, + 0x1c, 0xe6, 0x7d, 0xe9, 0xcd, 0xab, 0xff, 0x02, + }, + }, + { + .data_len = 48, + .key_len = 65, + .mac = { + 0x12, 0xfc, 0x9f, 0x95, 0xcd, 0x88, 0xed, 0x8a, + 0x6a, 0x87, 0x36, 0x45, 0x63, 0xb9, 0xc8, 0x46, + 0xf5, 0x06, 0x97, 0x24, 0x19, 0xc8, 0xfa, 0xfd, + 0xcf, 0x2b, 0x78, 0x5d, 0x44, 0xf1, 0x82, 0xbf, + 0x93, 0xea, 0x9c, 0x84, 0xe5, 0xba, 0x27, 0x21, + 0x2f, 0x3b, 0xd0, 0xfe, 0x2c, 0x53, 0x72, 0x31, + }, + }, + { + .data_len = 49, + .key_len = 66, + .mac = { + 0x8e, 0x8f, 0x3d, 0xd7, 0xe6, 0x14, 0x0c, 0xf2, + 0xf6, 0x9a, 0x19, 0xda, 0x4c, 0xb2, 0xc4, 0x84, + 0x63, 0x76, 0x5b, 0xae, 0x17, 0xe0, 0xdf, 0x92, + 0x82, 0xcf, 0x85, 0xbd, 0xce, 0xde, 0x3b, 0x49, + 0xfe, 0x0a, 0xfb, 0xdc, 0x9a, 0xc0, 0x9e, 0xc7, + 0x4f, 0x2c, 0x0f, 0xd3, 0xb9, 0x82, 0x1a, 0xaa, + }, + }, + { + .data_len = 63, + .key_len = 127, + .mac = { + 0xc9, 0x17, 0xbb, 0x8f, 0x4f, 0x13, 0xba, 0x99, + 0x4e, 0x48, 0x6a, 0x23, 0x12, 0x61, 0x7b, 0xa0, + 0x63, 0xcb, 0x47, 0xfd, 0xbd, 0xd3, 0xfd, 0x94, + 0xe7, 0x0b, 0xec, 0x04, 0x44, 0x5a, 0xfe, 0xb0, + 0x97, 0x5b, 0x80, 0x4c, 0x02, 0x5c, 0x92, 0x05, + 0x45, 0xe6, 0xe3, 0x0d, 0x21, 0xa5, 0x9a, 0x11, + }, + }, + { + .data_len = 64, + .key_len = 128, + .mac = { + 0xbf, 0x20, 0x44, 0xe1, 0x91, 0xcf, 0x2b, 0x53, + 0xcb, 0xcb, 0x89, 0xc2, 0x1b, 0x8e, 0xcb, 0xb0, + 0x12, 0xd2, 0x77, 0x21, 0x7e, 0x8f, 0x40, 0x0f, + 0x1e, 0xa4, 0xe7, 0x38, 0x69, 0x0f, 0x58, 0xba, + 0x42, 0x78, 0x57, 0x4e, 0x7a, 0xf0, 0xb0, 0xf2, + 0xe0, 0x17, 0x17, 0xcf, 0xee, 0x26, 0x53, 0x81, + }, + }, + { + .data_len = 65, + .key_len = 129, + .mac = { + 0x44, 0xe7, 0x53, 0x94, 0xaa, 0x33, 0xb0, 0xde, + 0x8e, 0xef, 0x85, 0x19, 0x69, 0x1e, 0xba, 0x69, + 0x7f, 0xe1, 0x17, 0xc3, 0x91, 0xd6, 0x7b, 0x07, + 0x61, 0xed, 0x81, 0x4c, 0x01, 0x65, 0x36, 0xbd, + 0x7d, 0x4f, 0x70, 0xd7, 0x0d, 0xb8, 0xfc, 0xaf, + 0x48, 0x1c, 0x96, 0x37, 0xf9, 0xc8, 0x72, 0x00, + }, + }, + { + .data_len = 127, + .key_len = 1000, + .mac = { + 0x98, 0x11, 0x57, 0xfe, 0xa5, 0xd0, 0xed, 0x5e, + 0xc5, 0x7e, 0xb3, 0x53, 0x9d, 0x12, 0x38, 0x41, + 0x0a, 0x78, 0x75, 0xd4, 0x0f, 0xa6, 0x9f, 0x05, + 0xd3, 0x2e, 0xcd, 0xad, 0x78, 0xea, 0x09, 0xdc, + 0xdc, 0x2b, 0x56, 0x41, 0xb1, 0x5a, 0x6b, 0xd8, + 0x3e, 0xe7, 0xac, 0x01, 0x4b, 0xb8, 0x52, 0x42, + }, + }, + { + .data_len = 128, + .key_len = 1024, + .mac = { + 0xaa, 0x48, 0xa9, 0x1a, 0x47, 0xbf, 0x87, 0xec, + 0x9e, 0xe6, 0x0f, 0x98, 0x2a, 0xb0, 0xa7, 0x84, + 0x9a, 0x87, 0x5c, 0x75, 0x7e, 0xb5, 0xf1, 0x0a, + 0x01, 
0x20, 0x75, 0xfd, 0xbf, 0xb8, 0x59, 0xad, + 0x1d, 0xa6, 0x59, 0x2c, 0xf2, 0x5e, 0xfd, 0xdc, + 0x3c, 0x39, 0x4c, 0xcd, 0x0a, 0x5f, 0xb0, 0x1f, + }, + }, + { + .data_len = 129, + .key_len = 0, + .mac = { + 0x0a, 0xa9, 0x08, 0xdf, 0x29, 0xc4, 0x9e, 0xb3, + 0x80, 0x32, 0xab, 0xf5, 0x61, 0xb2, 0xdf, 0x31, + 0xc7, 0x7b, 0xb6, 0xb6, 0x30, 0x45, 0x85, 0x6b, + 0x76, 0xbc, 0x83, 0xfd, 0x94, 0xe5, 0x91, 0x33, + 0x40, 0x01, 0x2d, 0xcf, 0x22, 0x27, 0x35, 0x5e, + 0x25, 0xac, 0xfe, 0x14, 0xb4, 0xec, 0x13, 0xa7, + }, + }, + { + .data_len = 256, + .key_len = 1, + .mac = { + 0x0d, 0x94, 0xcc, 0xcd, 0xbd, 0x89, 0xdc, 0xb4, + 0xcf, 0x93, 0x02, 0x8c, 0x1d, 0x37, 0xbd, 0x00, + 0xa4, 0x9c, 0x24, 0xb2, 0xf7, 0xa5, 0xbf, 0x97, + 0x5a, 0x9b, 0x27, 0xc2, 0x28, 0xcf, 0xce, 0x3e, + 0x8d, 0xa0, 0x14, 0x03, 0x64, 0xc2, 0x80, 0xec, + 0x09, 0xcb, 0x57, 0x81, 0x2f, 0x70, 0x15, 0x1f, + }, + }, + { + .data_len = 511, + .key_len = 31, + .mac = { + 0xb7, 0x4b, 0x98, 0x94, 0x29, 0xfd, 0x21, 0xba, + 0x99, 0xc4, 0x36, 0x2b, 0x8d, 0x71, 0xa5, 0x15, + 0xd0, 0x2f, 0xc2, 0x4d, 0x15, 0x33, 0xa2, 0x52, + 0x58, 0x74, 0xe7, 0x40, 0x5e, 0x75, 0x32, 0x70, + 0x64, 0x7d, 0xce, 0x13, 0xf2, 0x01, 0x38, 0x71, + 0x0e, 0x8d, 0xea, 0x8b, 0x78, 0x23, 0x65, 0x28, + }, + }, + { + .data_len = 513, + .key_len = 32, + .mac = { + 0xba, 0xc4, 0x7a, 0xf3, 0x62, 0xfc, 0x56, 0x8b, + 0x77, 0xde, 0x56, 0x64, 0x51, 0x0d, 0xaa, 0x50, + 0x7c, 0x77, 0xbc, 0xd4, 0x44, 0x78, 0x0e, 0xae, + 0x8a, 0xa2, 0x52, 0x7b, 0x3b, 0x87, 0x5f, 0x66, + 0xf9, 0x28, 0x6a, 0x0a, 0xd6, 0xbf, 0x40, 0xcf, + 0xa3, 0xb0, 0x70, 0x60, 0xb1, 0x36, 0x4d, 0x3b, + }, + }, + { + .data_len = 1000, + .key_len = 33, + .mac = { + 0xf7, 0x54, 0x6b, 0x0d, 0x52, 0x46, 0x95, 0x1b, + 0xb2, 0xc1, 0xd9, 0x89, 0x8c, 0xdf, 0x56, 0xfc, + 0x05, 0xd5, 0x1c, 0x0a, 0x60, 0xef, 0x06, 0xfa, + 0x40, 0x18, 0xf7, 0xa3, 0xdb, 0x5e, 0xcb, 0x94, + 0xa7, 0x8f, 0x6f, 0x01, 0x8d, 0x40, 0x83, 0x1e, + 0x32, 0x22, 0xa5, 0xa6, 0x83, 0xb7, 0x57, 0x9e, + }, + }, + { + .data_len = 3333, + .key_len = 64, + .mac = { + 0x46, 0x1f, 0x32, 0xf7, 0x8e, 0x21, 0x52, 0x70, + 0xe6, 0x45, 0xa4, 0xb5, 0x13, 0x92, 0xbe, 0x5e, + 0x5b, 0x9e, 0xa8, 0x9a, 0x22, 0x3a, 0x49, 0xbb, + 0xc5, 0xab, 0xfb, 0x05, 0xed, 0x13, 0xcb, 0xe2, + 0x71, 0xc1, 0x22, 0xca, 0x3b, 0x65, 0x08, 0xb3, + 0x9c, 0x1a, 0x03, 0xd0, 0x8e, 0xf8, 0xf0, 0x8b, + }, + }, + { + .data_len = 4096, + .key_len = 65, + .mac = { + 0xb4, 0x0d, 0x90, 0x01, 0x69, 0x2b, 0xc8, 0xab, + 0x6b, 0xe9, 0x8c, 0xa5, 0xa9, 0x46, 0xc0, 0x90, + 0x84, 0xec, 0x6d, 0x6b, 0x64, 0x88, 0x66, 0x55, + 0x61, 0x04, 0x2d, 0xd8, 0x30, 0x9a, 0x2f, 0x3b, + 0x3d, 0xa3, 0x11, 0x50, 0xcc, 0x6a, 0xe4, 0xb2, + 0x41, 0x18, 0xd7, 0x70, 0x57, 0x01, 0x67, 0x1c, + }, + }, + { + .data_len = 4128, + .key_len = 66, + .mac = { + 0xbd, 0x68, 0x84, 0xea, 0x22, 0xbf, 0xe2, 0x0e, + 0x86, 0x61, 0x5e, 0x58, 0x38, 0xfd, 0xce, 0x91, + 0x3a, 0x67, 0xda, 0x2b, 0xce, 0x71, 0xaf, 0xbc, + 0xf2, 0x75, 0xa5, 0xa8, 0xa2, 0xe2, 0x45, 0x12, + 0xab, 0x67, 0x3d, 0x4e, 0x1c, 0x42, 0xe1, 0x5d, + 0x6c, 0xb1, 0xd2, 0xb0, 0x16, 0xd5, 0x5c, 0xaf, + }, + }, + { + .data_len = 4160, + .key_len = 127, + .mac = { + 0x91, 0xe1, 0x89, 0x46, 0x28, 0x01, 0xe1, 0xd3, + 0x21, 0x12, 0xda, 0x6e, 0xe0, 0x17, 0x14, 0xd0, + 0x07, 0x5a, 0x9f, 0xca, 0xad, 0x6a, 0x6b, 0x89, + 0xf3, 0x6e, 0x21, 0x92, 0x52, 0x18, 0x21, 0x9d, + 0xc6, 0xe6, 0x5d, 0xca, 0xc3, 0x4d, 0xed, 0xe7, + 0xb9, 0x51, 0x51, 0x13, 0x12, 0xff, 0x73, 0x91, + }, + }, + { + .data_len = 4224, + .key_len = 128, + .mac = { + 0x15, 0x8f, 0xae, 0x57, 0xa2, 0x69, 0xe0, 0xb7, + 0x15, 0xb2, 0xd9, 0x33, 0xfd, 0x62, 0x5d, 
0xc9, + 0x38, 0xad, 0xc0, 0xbc, 0x9c, 0xd4, 0x8f, 0xed, + 0x93, 0x2d, 0x66, 0x6b, 0x57, 0x26, 0xda, 0xdc, + 0x4b, 0x14, 0x00, 0x82, 0x0d, 0x1a, 0x27, 0x37, + 0xa6, 0x91, 0x61, 0x04, 0x20, 0xc9, 0x6b, 0x61, + }, + }, + { + .data_len = 16384, + .key_len = 129, + .mac = { + 0x65, 0x25, 0x4f, 0xfc, 0x9b, 0x4d, 0xe5, 0xd7, + 0x2c, 0xb7, 0xb1, 0x2f, 0xf9, 0xb7, 0x7b, 0x98, + 0x80, 0x45, 0x23, 0xdc, 0x0b, 0xd1, 0x76, 0xc1, + 0x81, 0xfd, 0x89, 0x08, 0x96, 0x9a, 0x35, 0xbd, + 0x0c, 0x7c, 0x0e, 0x26, 0xab, 0xa4, 0x03, 0x55, + 0x4d, 0x3a, 0xc0, 0x0a, 0x10, 0x45, 0x1a, 0x46, + }, + }, +}; diff --git a/lib/crypto/tests/sha384_kunit.c b/lib/crypto/tests/sha384_kunit.c new file mode 100644 index 0000000000000..54c33dd3430d9 --- /dev/null +++ b/lib/crypto/tests/sha384_kunit.c @@ -0,0 +1,48 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Copyright 2025 Google LLC + */ +#include <crypto/sha2.h> +#include "sha384-testvecs.h" + +#define HASH sha384 +#define HASH_CTX sha384_ctx +#define HASH_SIZE SHA384_DIGEST_SIZE +#define HASH_INIT sha384_init +#define HASH_UPDATE sha384_update +#define HASH_FINAL sha384_final +#define HASH_TESTVECS sha384_testvecs +#define HMAC_KEY hmac_sha384_key +#define HMAC_CTX hmac_sha384_ctx +#define HMAC_SETKEY hmac_sha384_preparekey +#define HMAC_INIT hmac_sha384_init +#define HMAC_UPDATE hmac_sha384_update +#define HMAC_FINAL hmac_sha384_final +#define HMAC hmac_sha384 +#define HMAC_USINGRAWKEY hmac_sha384_usingrawkey +#define HMAC_TESTVECS hmac_sha384_testvecs +#include "hash-test-template.h" + +static struct kunit_case hash_test_cases[] = { + KUNIT_CASE(test_hash_test_vectors), + KUNIT_CASE(test_hash_incremental_updates), + KUNIT_CASE(test_hash_buffer_overruns), + KUNIT_CASE(test_hash_overlaps), + KUNIT_CASE(test_hash_alignment_consistency), + KUNIT_CASE(test_hash_interrupt_context), + KUNIT_CASE(test_hash_ctx_zeroization), + KUNIT_CASE(test_hmac), + KUNIT_CASE(benchmark_hash), + {}, +}; + +static struct kunit_suite hash_test_suite = { + .name = "sha384", + .test_cases = hash_test_cases, + .suite_init = hash_suite_init, + .suite_exit = hash_suite_exit, +}; +kunit_test_suite(hash_test_suite); + +MODULE_DESCRIPTION("KUnit tests and benchmark for SHA-384 and HMAC-SHA384"); +MODULE_LICENSE("GPL"); diff --git a/lib/crypto/tests/sha512-testvecs.h b/lib/crypto/tests/sha512-testvecs.h new file mode 100644 index 0000000000000..d91e11fd6879e --- /dev/null +++ b/lib/crypto/tests/sha512-testvecs.h @@ -0,0 +1,662 @@ +/* This file was generated by: ./scripts/crypto/gen-hash-testvecs.py sha512 */ + +static const struct { + size_t data_len; + u8 digest[SHA512_DIGEST_SIZE]; +} sha512_testvecs[] = { + { + .data_len = 0, + .digest = { + 0xcf, 0x83, 0xe1, 0x35, 0x7e, 0xef, 0xb8, 0xbd, + 0xf1, 0x54, 0x28, 0x50, 0xd6, 0x6d, 0x80, 0x07, + 0xd6, 0x20, 0xe4, 0x05, 0x0b, 0x57, 0x15, 0xdc, + 0x83, 0xf4, 0xa9, 0x21, 0xd3, 0x6c, 0xe9, 0xce, + 0x47, 0xd0, 0xd1, 0x3c, 0x5d, 0x85, 0xf2, 0xb0, + 0xff, 0x83, 0x18, 0xd2, 0x87, 0x7e, 0xec, 0x2f, + 0x63, 0xb9, 0x31, 0xbd, 0x47, 0x41, 0x7a, 0x81, + 0xa5, 0x38, 0x32, 0x7a, 0xf9, 0x27, 0xda, 0x3e, + }, + }, + { + .data_len = 1, + .digest = { + 0x12, 0xf2, 0xb6, 0xec, 0x84, 0xa0, 0x8e, 0xcf, + 0x0d, 0xec, 0x17, 0xfd, 0x1f, 0x91, 0x15, 0x69, + 0xd2, 0xb9, 0x89, 0xff, 0x89, 0x9d, 0xd9, 0x0b, + 0x7a, 0x0f, 0x82, 0x94, 0x57, 0x5b, 0xf3, 0x08, + 0x42, 0x45, 0x23, 0x08, 0x44, 0x54, 0x35, 0x36, + 0xed, 0x4b, 0xb3, 0xa5, 0xbf, 0x17, 0xc1, 0x3c, + 0xdd, 0x25, 0x4a, 0x30, 0x64, 0xed, 0x66, 0x06, + 0x72, 0x05, 0xc2, 0x71, 0x5a, 0x6c, 0xd0, 0x75, + }, + }, + { + .data_len 
= 2, + .digest = { + 0x01, 0x37, 0x97, 0xcc, 0x0a, 0xcb, 0x61, 0xa4, + 0x93, 0x26, 0x36, 0x4b, 0xd2, 0x27, 0xea, 0xaf, + 0xda, 0xfa, 0x8f, 0x86, 0x12, 0x99, 0x7b, 0xc8, + 0x94, 0xa9, 0x1c, 0x70, 0x3f, 0x43, 0xf3, 0x9a, + 0x02, 0xc5, 0x0d, 0x8e, 0x01, 0xf8, 0x3a, 0xa6, + 0xec, 0xaa, 0xc5, 0xc7, 0x9d, 0x3d, 0x7f, 0x9d, + 0x47, 0x0e, 0x58, 0x2d, 0x9a, 0x2d, 0x51, 0x1d, + 0xc3, 0x77, 0xb2, 0x7f, 0x69, 0x9a, 0xc3, 0x50, + }, + }, + { + .data_len = 3, + .digest = { + 0xd4, 0xa3, 0xc2, 0x50, 0xa0, 0x33, 0xc6, 0xe4, + 0x50, 0x08, 0xea, 0xc9, 0xb8, 0x35, 0x55, 0x34, + 0x61, 0xb8, 0x2e, 0xa2, 0xe5, 0xdc, 0x04, 0x70, + 0x99, 0x86, 0x5f, 0xee, 0x2e, 0x1e, 0xd4, 0x40, + 0xf8, 0x88, 0x01, 0x84, 0x97, 0x16, 0x6a, 0xd3, + 0x0a, 0xb4, 0xe2, 0x34, 0xca, 0x1f, 0xfc, 0x6b, + 0x61, 0xf3, 0x7f, 0x72, 0xfa, 0x8d, 0x22, 0xcf, + 0x7f, 0x2d, 0x87, 0x9d, 0xbb, 0x59, 0xac, 0x53, + }, + }, + { + .data_len = 16, + .digest = { + 0x3b, 0xae, 0x66, 0x48, 0x0f, 0x35, 0x3c, 0xcd, + 0x92, 0xa9, 0xb6, 0xb9, 0xfe, 0x19, 0x90, 0x92, + 0x52, 0xa5, 0x02, 0xd1, 0x89, 0xcf, 0x98, 0x86, + 0x29, 0x28, 0xab, 0xc4, 0x9e, 0xcc, 0x75, 0x38, + 0x95, 0xa7, 0x59, 0x43, 0xef, 0x8c, 0x3a, 0xeb, + 0x40, 0xa3, 0xbe, 0x2b, 0x75, 0x0d, 0xfd, 0xc3, + 0xaf, 0x69, 0x08, 0xad, 0x9f, 0xc9, 0xf4, 0x96, + 0xa9, 0xc2, 0x2b, 0x1b, 0x66, 0x6f, 0x1d, 0x28, + }, + }, + { + .data_len = 32, + .digest = { + 0x9c, 0xfb, 0x3c, 0x40, 0xd5, 0x3b, 0xc4, 0xff, + 0x07, 0xa7, 0xf0, 0x24, 0xb7, 0xd6, 0x5e, 0x12, + 0x5b, 0x85, 0xb5, 0xa5, 0xe0, 0x82, 0xa6, 0xda, + 0x30, 0x13, 0x2f, 0x1a, 0xe3, 0xd0, 0x55, 0xcb, + 0x14, 0x19, 0xe2, 0x09, 0x91, 0x96, 0x26, 0xf9, + 0x38, 0xd7, 0xfa, 0x4a, 0xfb, 0x2f, 0x6f, 0xc0, + 0xf4, 0x95, 0xc3, 0x40, 0xf6, 0xdb, 0xe7, 0xc2, + 0x79, 0x23, 0xa4, 0x20, 0x96, 0x3a, 0x00, 0xbb, + }, + }, + { + .data_len = 48, + .digest = { + 0x92, 0x1a, 0x21, 0x06, 0x6e, 0x08, 0x84, 0x09, + 0x23, 0x8d, 0x63, 0xec, 0xd6, 0x72, 0xd3, 0x21, + 0x51, 0xe8, 0x65, 0x94, 0xf8, 0x1f, 0x5f, 0xa7, + 0xab, 0x6b, 0xae, 0x1c, 0x2c, 0xaf, 0xf9, 0x0c, + 0x7c, 0x5a, 0x74, 0x1d, 0x90, 0x26, 0x4a, 0xc3, + 0xa1, 0x60, 0xf4, 0x1d, 0xd5, 0x3c, 0x86, 0xe8, + 0x00, 0xb3, 0x99, 0x27, 0xb8, 0x9d, 0x3e, 0x17, + 0x32, 0x5a, 0x34, 0x3e, 0xc2, 0xb2, 0x6e, 0xbd, + }, + }, + { + .data_len = 49, + .digest = { + 0x5a, 0x1f, 0x40, 0x5f, 0xee, 0xf2, 0xdd, 0x67, + 0x01, 0xcb, 0x26, 0x58, 0xf5, 0x1b, 0xe8, 0x7e, + 0xeb, 0x7d, 0x9d, 0xef, 0xd3, 0x55, 0xd6, 0x89, + 0x2e, 0xfc, 0x14, 0xe2, 0x98, 0x4c, 0x31, 0xaa, + 0x69, 0x00, 0xf9, 0x4e, 0xb0, 0x75, 0x1b, 0x71, + 0x93, 0x60, 0xdf, 0xa1, 0xaf, 0xba, 0xc2, 0xd3, + 0x6a, 0x22, 0xa0, 0xff, 0xb5, 0x66, 0x15, 0x66, + 0xd2, 0x24, 0x9a, 0x7e, 0xe4, 0xe5, 0x84, 0xdb, + }, + }, + { + .data_len = 63, + .digest = { + 0x32, 0xbd, 0xcf, 0x72, 0xa9, 0x74, 0x87, 0xe6, + 0x2a, 0x53, 0x7e, 0x6d, 0xac, 0xc2, 0xdd, 0x2c, + 0x87, 0xb3, 0xf7, 0x90, 0x29, 0xc9, 0x16, 0x59, + 0xd2, 0x7e, 0x6e, 0x84, 0x1d, 0xa6, 0xaf, 0x3c, + 0xca, 0xd6, 0x1a, 0x24, 0xa4, 0xcd, 0x59, 0x44, + 0x20, 0xd7, 0xd2, 0x5b, 0x97, 0xda, 0xd5, 0xa9, + 0x23, 0xb1, 0xa4, 0x60, 0xb8, 0x05, 0x98, 0xdc, + 0xef, 0x89, 0x81, 0xe3, 0x3a, 0xf9, 0x24, 0x37, + }, + }, + { + .data_len = 64, + .digest = { + 0x96, 0x3a, 0x1a, 0xdd, 0x1b, 0xeb, 0x1a, 0x55, + 0x24, 0x52, 0x3d, 0xec, 0x9d, 0x52, 0x2e, 0xa6, + 0xfe, 0x81, 0xd6, 0x98, 0xac, 0xcc, 0x60, 0x56, + 0x04, 0x9d, 0xa3, 0xf3, 0x56, 0x05, 0xe4, 0x8a, + 0x61, 0xaf, 0x6f, 0x6e, 0x8e, 0x75, 0x67, 0x3a, + 0xd2, 0xb0, 0x85, 0x2d, 0x17, 0xd2, 0x86, 0x8c, + 0x50, 0x4b, 0xdd, 0xef, 0x35, 0x00, 0xde, 0x29, + 0x3d, 0x4b, 0x04, 0x12, 0x8a, 0x81, 0xe2, 0xcc, + }, + }, + { 
+ .data_len = 65, + .digest = { + 0x9c, 0x6e, 0xf0, 0x6f, 0x71, 0x77, 0xd5, 0xd0, + 0xbb, 0x70, 0x1f, 0xcb, 0xbd, 0xd3, 0xfe, 0x23, + 0x71, 0x78, 0xad, 0x3a, 0xd2, 0x1e, 0x34, 0xf4, + 0x6d, 0xb4, 0xa2, 0x0a, 0x24, 0xcb, 0xe1, 0x99, + 0x07, 0xd0, 0x79, 0x8f, 0x7e, 0x69, 0x31, 0x68, + 0x29, 0xb5, 0x85, 0x82, 0x67, 0xdc, 0x4a, 0x8d, + 0x44, 0x04, 0x02, 0xc0, 0xfb, 0xd2, 0x19, 0x66, + 0x1e, 0x25, 0x8b, 0xd2, 0x5a, 0x59, 0x68, 0xc0, + }, + }, + { + .data_len = 127, + .digest = { + 0xb8, 0x8f, 0xa8, 0x29, 0x4d, 0xcf, 0x5f, 0x73, + 0x3c, 0x55, 0x43, 0xd9, 0x1c, 0xbc, 0x0c, 0x17, + 0x75, 0x0b, 0xc7, 0xb1, 0x1d, 0x9f, 0x7b, 0x2f, + 0x4c, 0x3d, 0x2a, 0x71, 0xfb, 0x1b, 0x0e, 0xca, + 0x4e, 0x96, 0xa0, 0x95, 0xee, 0xf4, 0x93, 0x76, + 0x36, 0xfb, 0x5d, 0xd3, 0x46, 0xc4, 0x1d, 0x41, + 0x32, 0x92, 0x9d, 0xed, 0xdb, 0x7f, 0xfa, 0xb3, + 0x91, 0x61, 0x3e, 0xd6, 0xb2, 0xca, 0x8d, 0x81, + }, + }, + { + .data_len = 128, + .digest = { + 0x54, 0xac, 0x1a, 0xa1, 0xa6, 0xa3, 0x47, 0x2a, + 0x5a, 0xac, 0x1a, 0x3a, 0x4b, 0xa1, 0x11, 0x08, + 0xa7, 0x90, 0xec, 0x52, 0xcb, 0xaf, 0x68, 0x41, + 0x38, 0x44, 0x53, 0x94, 0x93, 0x30, 0xaf, 0x3a, + 0xec, 0x99, 0x3a, 0x7d, 0x2a, 0xd5, 0xb6, 0x05, + 0xf5, 0xa6, 0xbb, 0x9b, 0x82, 0xc2, 0xbd, 0x98, + 0x28, 0x62, 0x98, 0x3e, 0xe4, 0x27, 0x9b, 0xaa, + 0xce, 0x0a, 0x6f, 0xab, 0x1b, 0x16, 0xf3, 0xdd, + }, + }, + { + .data_len = 129, + .digest = { + 0x04, 0x37, 0x60, 0xbc, 0xb3, 0xb1, 0xc6, 0x2d, + 0x42, 0xc5, 0xd7, 0x7e, 0xd9, 0x86, 0x82, 0xe0, + 0xf4, 0x62, 0xad, 0x75, 0x68, 0x0b, 0xc7, 0xa8, + 0xd6, 0x9a, 0x76, 0xe5, 0x29, 0xb8, 0x37, 0x30, + 0x0f, 0xc0, 0xbc, 0x81, 0x94, 0x7c, 0x13, 0xf4, + 0x9c, 0x27, 0xbc, 0x59, 0xa1, 0x70, 0x6a, 0x87, + 0x20, 0x12, 0x0a, 0x2a, 0x62, 0x5e, 0x6f, 0xca, + 0x91, 0x6b, 0x34, 0x7e, 0x4c, 0x0d, 0xf0, 0x6c, + }, + }, + { + .data_len = 256, + .digest = { + 0x4b, 0x7c, 0x1f, 0x53, 0x52, 0xcc, 0x30, 0xed, + 0x91, 0x44, 0x6f, 0x0d, 0xb5, 0x41, 0x79, 0x99, + 0xaf, 0x82, 0x65, 0x52, 0x03, 0xf8, 0x55, 0x74, + 0x7c, 0xd9, 0x41, 0xd6, 0xe8, 0x91, 0xa4, 0x85, + 0xcb, 0x0a, 0x60, 0x08, 0x76, 0x07, 0x60, 0x99, + 0x89, 0x76, 0xba, 0x84, 0xbd, 0x0b, 0xf2, 0xb3, + 0xdc, 0xf3, 0x33, 0xd1, 0x9b, 0x0b, 0x2e, 0x5d, + 0xf6, 0x9d, 0x0f, 0x67, 0xf4, 0x86, 0xb3, 0xd5, + }, + }, + { + .data_len = 511, + .digest = { + 0x7d, 0x83, 0x78, 0x6a, 0x5d, 0x52, 0x42, 0x2a, + 0xb1, 0x97, 0xc6, 0x62, 0xa2, 0x2a, 0x7c, 0x8b, + 0xcd, 0x4f, 0xa4, 0x86, 0x19, 0xa4, 0x5b, 0x1d, + 0xc7, 0x6f, 0x2f, 0x9c, 0x03, 0xc3, 0x45, 0x2e, + 0xa7, 0x8e, 0x38, 0xf2, 0x57, 0x55, 0x89, 0x47, + 0xed, 0xeb, 0x81, 0xe2, 0xe0, 0x55, 0x9f, 0xe6, + 0xca, 0x03, 0x59, 0xd3, 0xd4, 0xba, 0xc9, 0x2d, + 0xaf, 0xbb, 0x62, 0x2e, 0xe6, 0x89, 0xe4, 0x11, + }, + }, + { + .data_len = 513, + .digest = { + 0xe9, 0x14, 0xe7, 0x01, 0xd0, 0x81, 0x09, 0x51, + 0x78, 0x1c, 0x8e, 0x6c, 0x00, 0xd3, 0x28, 0xa0, + 0x2a, 0x7b, 0xd6, 0x25, 0xca, 0xd0, 0xf9, 0xb8, + 0xd8, 0xcf, 0xd0, 0xb7, 0x48, 0x25, 0xb7, 0x6a, + 0x53, 0x8e, 0xf8, 0x52, 0x9c, 0x1f, 0x7d, 0xae, + 0x4c, 0x22, 0xd5, 0x9d, 0xf0, 0xaf, 0x98, 0x91, + 0x19, 0x1f, 0x99, 0xbd, 0xa6, 0xc2, 0x0f, 0x05, + 0xa5, 0x9f, 0x3e, 0x87, 0xed, 0xc3, 0xab, 0x92, + }, + }, + { + .data_len = 1000, + .digest = { + 0x2e, 0xf4, 0x72, 0xd2, 0xd9, 0x4a, 0xd5, 0xf9, + 0x20, 0x03, 0x4a, 0xad, 0xed, 0xa9, 0x1b, 0x64, + 0x73, 0x38, 0xc6, 0x30, 0xa8, 0x7f, 0xf9, 0x3b, + 0x8c, 0xbc, 0xa1, 0x2d, 0x22, 0x7b, 0x84, 0x37, + 0xf5, 0xba, 0xee, 0xf0, 0x80, 0x9d, 0xe3, 0x82, + 0xbd, 0x07, 0x68, 0x15, 0x01, 0x22, 0xf6, 0x88, + 0x07, 0x0b, 0xfd, 0xb7, 0xb1, 0xc0, 0x68, 0x4b, + 0x8d, 0x05, 0xec, 0xfb, 0xcd, 0xde, 
0xa4, 0x2a, + }, + }, + { + .data_len = 3333, + .digest = { + 0x73, 0xe3, 0xe5, 0x87, 0x01, 0x0a, 0x29, 0x4d, + 0xad, 0x92, 0x67, 0x64, 0xc7, 0x71, 0x0b, 0x22, + 0x80, 0x8a, 0x6e, 0x8b, 0x20, 0x73, 0xb2, 0xd7, + 0x98, 0x20, 0x35, 0x42, 0x42, 0x5d, 0x85, 0x12, + 0xb0, 0x06, 0x69, 0x63, 0x5f, 0x5b, 0xe7, 0x63, + 0x6f, 0xe6, 0x18, 0xa6, 0xc1, 0xa6, 0xae, 0x27, + 0xa7, 0x6a, 0x73, 0x6b, 0x27, 0xd5, 0x47, 0xe1, + 0xa2, 0x7d, 0xe4, 0x0d, 0xbd, 0x23, 0x7b, 0x7a, + }, + }, + { + .data_len = 4096, + .digest = { + 0x11, 0x5b, 0x77, 0x36, 0x6b, 0x3b, 0xe4, 0x42, + 0xe4, 0x92, 0x23, 0xcb, 0x0c, 0x06, 0xff, 0xb7, + 0x0c, 0x71, 0x64, 0xd9, 0x8a, 0x57, 0x75, 0x7b, + 0xa2, 0xd2, 0x17, 0x19, 0xbb, 0xb5, 0x3c, 0xb3, + 0x5f, 0xae, 0x35, 0x75, 0x8e, 0xa8, 0x97, 0x43, + 0xce, 0xfe, 0x41, 0x84, 0xfe, 0xcb, 0x18, 0x70, + 0x68, 0x2e, 0x16, 0x19, 0xd5, 0x10, 0x0d, 0x2f, + 0x61, 0x87, 0x79, 0xee, 0x5f, 0x24, 0xdd, 0x76, + }, + }, + { + .data_len = 4128, + .digest = { + 0x9e, 0x96, 0xe1, 0x0a, 0xb2, 0xd5, 0xba, 0xcf, + 0x27, 0xba, 0x6f, 0x85, 0xe9, 0xbf, 0x96, 0xb9, + 0x5a, 0x00, 0x00, 0x06, 0xdc, 0xb7, 0xaf, 0x0a, + 0x8f, 0x1d, 0x31, 0xf6, 0xce, 0xc3, 0x50, 0x2e, + 0x61, 0x3a, 0x8b, 0x28, 0xaf, 0xb2, 0x50, 0x0d, + 0x00, 0x98, 0x02, 0x11, 0x6b, 0xfa, 0x51, 0xc1, + 0xde, 0xe1, 0x34, 0x9f, 0xda, 0x11, 0x63, 0xfa, + 0x0a, 0xa0, 0xa2, 0x67, 0x39, 0xeb, 0x9b, 0xf1, + }, + }, + { + .data_len = 4160, + .digest = { + 0x46, 0x4e, 0x81, 0xd1, 0x08, 0x2a, 0x46, 0x12, + 0x4e, 0xae, 0x1f, 0x5d, 0x57, 0xe5, 0x19, 0xbc, + 0x76, 0x38, 0xb6, 0xa7, 0xe3, 0x72, 0x6d, 0xaf, + 0x80, 0x3b, 0xd0, 0xbc, 0x06, 0xe8, 0xab, 0xab, + 0x86, 0x4b, 0x0b, 0x7a, 0x61, 0xa6, 0x13, 0xff, + 0x64, 0x47, 0x89, 0xb7, 0x63, 0x8a, 0xa5, 0x4c, + 0x9f, 0x52, 0x70, 0xeb, 0x21, 0xe5, 0x2d, 0xe9, + 0xe7, 0xab, 0x1c, 0x0e, 0x74, 0xf5, 0x72, 0xec, + }, + }, + { + .data_len = 4224, + .digest = { + 0xfa, 0x6e, 0xff, 0x3c, 0xc1, 0x98, 0x49, 0x42, + 0x34, 0x67, 0xd4, 0xd3, 0xfa, 0xae, 0x27, 0xe4, + 0x77, 0x11, 0x84, 0xd2, 0x57, 0x99, 0xf8, 0xfd, + 0x41, 0x50, 0x84, 0x80, 0x7f, 0xf7, 0xb2, 0xd3, + 0x88, 0x21, 0x9c, 0xe8, 0xb9, 0x05, 0xd3, 0x48, + 0x64, 0xc5, 0xb7, 0x29, 0xd9, 0x21, 0x17, 0xad, + 0x89, 0x9c, 0x79, 0x55, 0x51, 0x0b, 0x96, 0x3e, + 0x10, 0x40, 0xe1, 0xdd, 0x7b, 0x39, 0x40, 0x86, + }, + }, + { + .data_len = 16384, + .digest = { + 0x41, 0xb3, 0xd2, 0x93, 0xcd, 0x79, 0x84, 0xc2, + 0xf5, 0xea, 0xf3, 0xb3, 0x94, 0x23, 0xaa, 0x76, + 0x87, 0x5f, 0xe3, 0xd2, 0x03, 0xd8, 0x00, 0xbb, + 0xa1, 0x55, 0xe4, 0xcb, 0x16, 0x04, 0x5b, 0xdf, + 0xf8, 0xd2, 0x63, 0x51, 0x02, 0x22, 0xc6, 0x0f, + 0x98, 0x2b, 0x12, 0x52, 0x25, 0x64, 0x93, 0xd9, + 0xab, 0xe9, 0x4d, 0x16, 0x4b, 0xf6, 0x09, 0x83, + 0x5c, 0x63, 0x1c, 0x41, 0x19, 0xf6, 0x76, 0xe3, + }, + }, +}; + +static const struct { + size_t data_len; + size_t key_len; + u8 mac[SHA512_DIGEST_SIZE]; +} hmac_sha512_testvecs[] = { + { + .data_len = 0, + .key_len = 0, + .mac = { + 0xb9, 0x36, 0xce, 0xe8, 0x6c, 0x9f, 0x87, 0xaa, + 0x5d, 0x3c, 0x6f, 0x2e, 0x84, 0xcb, 0x5a, 0x42, + 0x39, 0xa5, 0xfe, 0x50, 0x48, 0x0a, 0x6e, 0xc6, + 0x6b, 0x70, 0xab, 0x5b, 0x1f, 0x4a, 0xc6, 0x73, + 0x0c, 0x6c, 0x51, 0x54, 0x21, 0xb3, 0x27, 0xec, + 0x1d, 0x69, 0x40, 0x2e, 0x53, 0xdf, 0xb4, 0x9a, + 0xd7, 0x38, 0x1e, 0xb0, 0x67, 0xb3, 0x38, 0xfd, + 0x7b, 0x0c, 0xb2, 0x22, 0x47, 0x22, 0x5d, 0x47, + }, + }, + { + .data_len = 1, + .key_len = 1, + .mac = { + 0x1b, 0xee, 0x4e, 0x7f, 0x67, 0xb3, 0x40, 0x78, + 0xd6, 0x68, 0x16, 0xf9, 0x84, 0xf1, 0xcf, 0x0e, + 0xca, 0x51, 0x26, 0xb4, 0x50, 0x6d, 0xa9, 0xe4, + 0x78, 0x1b, 0xac, 0x7c, 0xe2, 0xe1, 0x5b, 0x75, + 0x85, 
0xb8, 0xe3, 0xfa, 0x9a, 0x8d, 0x30, 0x46, + 0x78, 0x0c, 0x86, 0xc6, 0x72, 0x2b, 0xb9, 0xac, + 0xda, 0xe4, 0x37, 0x00, 0x8f, 0xad, 0x69, 0xf7, + 0xb7, 0x3a, 0x25, 0xcb, 0x3a, 0x24, 0xe3, 0x8e, + }, + }, + { + .data_len = 2, + .key_len = 31, + .mac = { + 0x39, 0xb9, 0xad, 0xca, 0xe2, 0xf1, 0x2f, 0x29, + 0x4c, 0x52, 0x55, 0x7f, 0x0c, 0xe0, 0x6f, 0x90, + 0x10, 0x4e, 0xe1, 0x2b, 0x08, 0x9c, 0x13, 0x9a, + 0x09, 0xb2, 0x0f, 0x26, 0x26, 0xa9, 0x41, 0x54, + 0x06, 0xa0, 0x8f, 0x81, 0xba, 0x22, 0xae, 0x01, + 0xf8, 0x3b, 0x50, 0x46, 0x46, 0xf6, 0x8a, 0xc4, + 0x17, 0x84, 0xf4, 0x8c, 0x4e, 0x40, 0xa2, 0x26, + 0x3e, 0x5b, 0x81, 0x42, 0xef, 0xee, 0xb9, 0xdb, + }, + }, + { + .data_len = 3, + .key_len = 32, + .mac = { + 0x90, 0x82, 0xb5, 0x56, 0xb1, 0x0b, 0x23, 0x38, + 0xb4, 0x26, 0x99, 0x7a, 0x4e, 0x3e, 0x3a, 0x0b, + 0x36, 0x0a, 0x03, 0xc9, 0x79, 0xba, 0x37, 0x8f, + 0xab, 0x42, 0x6f, 0x51, 0x5f, 0x8e, 0x75, 0x0d, + 0x7e, 0xd5, 0x2b, 0xa7, 0x0b, 0x53, 0xe7, 0xab, + 0x95, 0x8a, 0x01, 0x80, 0x8a, 0x55, 0x28, 0x30, + 0x2f, 0x4f, 0xef, 0x7e, 0x60, 0xe2, 0xe5, 0xf2, + 0x52, 0xc7, 0xae, 0xf2, 0xe4, 0x96, 0x16, 0xf2, + }, + }, + { + .data_len = 16, + .key_len = 33, + .mac = { + 0x51, 0x34, 0xa4, 0xe8, 0x53, 0xab, 0xf3, 0xa4, + 0x78, 0x78, 0xff, 0xbe, 0xaa, 0x8f, 0xf0, 0xad, + 0xb9, 0x6d, 0x87, 0x7d, 0x43, 0x76, 0x60, 0x71, + 0x10, 0xbc, 0x7b, 0x99, 0x48, 0xc8, 0xf4, 0x78, + 0xe5, 0xd0, 0x28, 0x06, 0x75, 0x60, 0xa0, 0xbd, + 0xd5, 0xdc, 0xb6, 0x74, 0x7f, 0x2e, 0x4b, 0xd8, + 0xc9, 0x58, 0x64, 0xe2, 0x40, 0xf0, 0xe8, 0xaf, + 0x2e, 0x4b, 0x2f, 0xe8, 0xa6, 0x29, 0xc4, 0xcf, + }, + }, + { + .data_len = 32, + .key_len = 64, + .mac = { + 0x08, 0x7e, 0xc1, 0x64, 0xe4, 0xa7, 0xe2, 0xb7, + 0x32, 0x86, 0xd8, 0x68, 0x96, 0x20, 0x6c, 0x88, + 0x62, 0x8f, 0xe4, 0x93, 0xd4, 0x18, 0x11, 0xce, + 0x2d, 0x58, 0xaa, 0x3b, 0xa0, 0xd7, 0x19, 0x67, + 0xb4, 0x5d, 0x43, 0x0d, 0x98, 0x09, 0x75, 0x73, + 0xbf, 0xb3, 0xa4, 0x68, 0x84, 0x47, 0x14, 0x65, + 0x11, 0xa8, 0xc6, 0x65, 0x19, 0x53, 0x31, 0x96, + 0x4e, 0x51, 0x42, 0x50, 0x76, 0x3a, 0xa3, 0x03, + }, + }, + { + .data_len = 48, + .key_len = 65, + .mac = { + 0x29, 0xd3, 0x6d, 0x5f, 0x4f, 0x3e, 0x99, 0xa7, + 0x70, 0x9e, 0xe8, 0xfb, 0x26, 0xbd, 0xcb, 0xff, + 0x45, 0xa2, 0x77, 0x4a, 0x5d, 0xaa, 0xd0, 0xa6, + 0xc5, 0xaf, 0xca, 0xda, 0xbc, 0x93, 0x5f, 0xd2, + 0x5d, 0x9d, 0x71, 0xb1, 0x5f, 0x92, 0x66, 0xc0, + 0xe8, 0x62, 0x69, 0x86, 0x1d, 0xb4, 0xcd, 0x53, + 0x3e, 0xf4, 0x51, 0xbc, 0x32, 0x65, 0x06, 0x6c, + 0x71, 0x2a, 0x12, 0xcc, 0x04, 0x10, 0x44, 0x1d, + }, + }, + { + .data_len = 49, + .key_len = 66, + .mac = { + 0x2f, 0xb9, 0x24, 0x49, 0x5c, 0x68, 0x92, 0x3e, + 0xfd, 0x3a, 0x47, 0x96, 0xb0, 0x76, 0xab, 0x3e, + 0x19, 0xb4, 0x64, 0xcf, 0x3a, 0x83, 0x18, 0xf7, + 0xb4, 0xa1, 0xb7, 0xcb, 0xd4, 0xea, 0x4b, 0x33, + 0x68, 0x5f, 0x7a, 0x29, 0xa8, 0x08, 0x3d, 0x64, + 0x09, 0x7a, 0x8e, 0xe1, 0x6f, 0xbb, 0x22, 0xba, + 0xd9, 0xec, 0xd8, 0x46, 0xd2, 0x8e, 0xd8, 0xf6, + 0xf8, 0x39, 0x55, 0x36, 0xe4, 0x8d, 0x7a, 0xb6, + }, + }, + { + .data_len = 63, + .key_len = 127, + .mac = { + 0xf7, 0x10, 0xdc, 0xc9, 0x08, 0xca, 0x65, 0x98, + 0xb5, 0xa5, 0x04, 0x1b, 0xce, 0x0f, 0xe6, 0x13, + 0x55, 0x93, 0x3b, 0x73, 0xc9, 0x83, 0xb2, 0x99, + 0x0a, 0xd6, 0xbb, 0x75, 0x92, 0x46, 0x96, 0xa0, + 0x28, 0x8f, 0xf0, 0xb0, 0x0c, 0x43, 0xcc, 0x45, + 0x77, 0xc9, 0xda, 0x0a, 0x63, 0x45, 0x4e, 0xc0, + 0x59, 0x53, 0xba, 0xbe, 0xbc, 0x56, 0x6c, 0xee, + 0x7a, 0x1e, 0x54, 0xd1, 0x6b, 0xc0, 0xe8, 0x58, + }, + }, + { + .data_len = 64, + .key_len = 128, + .mac = { + 0x3b, 0x7d, 0x49, 0x7b, 0x8e, 0x67, 0x2f, 0xe1, + 0x71, 0xd9, 0x3f, 0xbd, 0x61, 
0xd1, 0x51, 0x4b, + 0xd7, 0xa8, 0x6d, 0x27, 0x94, 0x9c, 0x55, 0x87, + 0x51, 0xaa, 0xce, 0xbc, 0x0e, 0x13, 0x38, 0x85, + 0x80, 0x20, 0x9a, 0x86, 0x7c, 0x6f, 0x6d, 0x40, + 0xf9, 0xff, 0xde, 0x17, 0x38, 0x40, 0xe3, 0xc3, + 0xf2, 0x58, 0xd4, 0xf8, 0x0d, 0x2f, 0x8c, 0x1e, + 0xcd, 0x27, 0xac, 0x87, 0xd9, 0x47, 0x25, 0x52, + }, + }, + { + .data_len = 65, + .key_len = 129, + .mac = { + 0x74, 0xac, 0x10, 0x6b, 0x4d, 0x68, 0xbb, 0x6c, + 0xc8, 0x14, 0x23, 0xd9, 0xfa, 0xd1, 0xbe, 0x40, + 0xac, 0x85, 0x8c, 0xcd, 0x75, 0xbf, 0x4e, 0x51, + 0xe7, 0x72, 0x6e, 0x64, 0xb1, 0x36, 0x97, 0xee, + 0xc3, 0x1c, 0xdc, 0x8a, 0x07, 0x79, 0xc6, 0xac, + 0x4d, 0x2b, 0x53, 0xca, 0x91, 0xac, 0xa4, 0x85, + 0x7f, 0x08, 0x6c, 0x2c, 0x7a, 0xa8, 0x5c, 0xb3, + 0x28, 0x5f, 0x3c, 0xf1, 0x26, 0x6a, 0x2a, 0xaf, + }, + }, + { + .data_len = 127, + .key_len = 1000, + .mac = { + 0x68, 0xb9, 0x3b, 0x1c, 0x35, 0x75, 0x84, 0xe7, + 0x00, 0xcb, 0x23, 0xa6, 0x40, 0xb2, 0x4b, 0x2c, + 0x39, 0x63, 0x61, 0xf1, 0x71, 0x57, 0xd4, 0xd8, + 0xa3, 0xdd, 0xcb, 0xca, 0x7e, 0x7d, 0x14, 0xf7, + 0x85, 0xbe, 0xc6, 0xce, 0x51, 0x55, 0x60, 0xe0, + 0x84, 0x3e, 0xda, 0xec, 0x39, 0x19, 0x82, 0xc1, + 0x3e, 0xac, 0x0c, 0x5c, 0x9a, 0x40, 0x5e, 0xa2, + 0xfa, 0x4e, 0xe2, 0x65, 0xc3, 0x17, 0x7d, 0x60, + }, + }, + { + .data_len = 128, + .key_len = 1024, + .mac = { + 0x76, 0xe0, 0x17, 0x27, 0x0a, 0xed, 0xfa, 0xfb, + 0x51, 0xc7, 0x52, 0x19, 0xbe, 0xbe, 0xe3, 0x1a, + 0x28, 0xc0, 0x28, 0xe0, 0x0c, 0x94, 0xb6, 0x3a, + 0x50, 0x06, 0x78, 0x5f, 0x04, 0x2a, 0x98, 0x19, + 0x96, 0xb6, 0x98, 0xc7, 0x26, 0x50, 0x60, 0xd0, + 0x52, 0x3f, 0x48, 0xc0, 0x06, 0x2d, 0xf4, 0xcc, + 0xe9, 0x62, 0x5e, 0x12, 0xff, 0x4e, 0x8f, 0x41, + 0x48, 0xe6, 0x92, 0xac, 0x84, 0x82, 0x12, 0x92, + }, + }, + { + .data_len = 129, + .key_len = 0, + .mac = { + 0x9e, 0xe3, 0x94, 0xcb, 0x6d, 0x88, 0x3a, 0x47, + 0xc4, 0xdd, 0xdb, 0xf0, 0x38, 0x01, 0x22, 0x4c, + 0xcc, 0x5f, 0x2f, 0x73, 0xf6, 0x0d, 0xa9, 0xf2, + 0x29, 0xbe, 0xc9, 0x37, 0xeb, 0x5c, 0xb0, 0x90, + 0x86, 0x0a, 0x86, 0x48, 0xff, 0xf7, 0xd7, 0xd8, + 0x4d, 0x6e, 0xbf, 0x72, 0xa6, 0x67, 0xee, 0xf7, + 0x9d, 0x29, 0x96, 0x02, 0x4e, 0x17, 0x8a, 0x32, + 0x1e, 0x59, 0xeb, 0xfb, 0xd6, 0xd7, 0xaa, 0x7d, + }, + }, + { + .data_len = 256, + .key_len = 1, + .mac = { + 0x07, 0x2e, 0xcc, 0x0e, 0xd3, 0xd4, 0xf2, 0xbc, + 0xb1, 0xd1, 0x57, 0x66, 0x06, 0xce, 0x64, 0xd4, + 0x0a, 0x62, 0xd4, 0x84, 0x5c, 0x88, 0x27, 0xa1, + 0x5c, 0x0d, 0xb5, 0x1e, 0xf4, 0x3e, 0x79, 0x6a, + 0x6e, 0x50, 0x8f, 0x39, 0xe6, 0x8b, 0xed, 0x9b, + 0x0d, 0xe4, 0x32, 0xd6, 0x72, 0xfd, 0x17, 0x33, + 0x92, 0xb6, 0x88, 0xfd, 0xe0, 0xfb, 0x85, 0x39, + 0x27, 0xc7, 0x96, 0xad, 0x8a, 0x68, 0xf7, 0xde, + }, + }, + { + .data_len = 511, + .key_len = 31, + .mac = { + 0xc2, 0xef, 0x28, 0xbd, 0xf6, 0x30, 0x74, 0xed, + 0xfd, 0x2e, 0x52, 0x30, 0xf3, 0xcb, 0x42, 0x75, + 0x58, 0x35, 0x2c, 0xad, 0x2a, 0x5b, 0x73, 0xa3, + 0xe0, 0x18, 0x0b, 0x96, 0x7e, 0x07, 0xce, 0x1e, + 0xf1, 0xe3, 0x08, 0x31, 0x6f, 0x18, 0x79, 0xa0, + 0x5e, 0xbc, 0xad, 0x15, 0xce, 0x32, 0xef, 0x78, + 0x1c, 0x3e, 0x83, 0xb6, 0xa0, 0x41, 0xf0, 0x26, + 0xdd, 0xe2, 0x7d, 0xec, 0x99, 0x4a, 0x73, 0xe2, + }, + }, + { + .data_len = 513, + .key_len = 32, + .mac = { + 0x11, 0x35, 0x64, 0x72, 0x68, 0x9b, 0xd3, 0xd8, + 0x09, 0x54, 0x99, 0x32, 0x63, 0x6c, 0x45, 0x13, + 0x70, 0x71, 0x34, 0xd3, 0x56, 0x9c, 0xbd, 0x10, + 0x8f, 0x33, 0xbe, 0xe8, 0x60, 0x14, 0x59, 0x8b, + 0x23, 0xee, 0xeb, 0xc8, 0x72, 0xfe, 0x1b, 0x88, + 0x9e, 0xd7, 0xf3, 0x6c, 0xd8, 0xe9, 0x73, 0xd0, + 0xfe, 0xa2, 0x9a, 0xc8, 0xb1, 0xf7, 0x65, 0x48, + 0xd0, 0x53, 0x31, 0x82, 0x04, 0xd5, 0x9d, 0x44, + }, + 
}, + { + .data_len = 1000, + .key_len = 33, + .mac = { + 0x02, 0x25, 0xf7, 0x45, 0x56, 0xd1, 0x99, 0xf2, + 0xfb, 0x9b, 0x8c, 0x64, 0xac, 0x85, 0x6c, 0x6c, + 0x4b, 0x2e, 0x03, 0x0d, 0x78, 0x2d, 0xa4, 0x89, + 0x7e, 0x2e, 0x32, 0x7a, 0xce, 0x4f, 0x0d, 0xdb, + 0x54, 0xf2, 0xb3, 0x01, 0x9f, 0xc4, 0x61, 0x9e, + 0xa8, 0xb3, 0x72, 0xa9, 0x65, 0x37, 0xfa, 0xb3, + 0x57, 0xce, 0x41, 0xb1, 0x7c, 0xb9, 0x08, 0xab, + 0xfd, 0x0a, 0xdf, 0xc0, 0x07, 0xd9, 0xaa, 0x19, + }, + }, + { + .data_len = 3333, + .key_len = 64, + .mac = { + 0x22, 0xfe, 0x0d, 0xae, 0x67, 0x4f, 0xfb, 0x5d, + 0xa9, 0x89, 0xf7, 0xa4, 0xc6, 0xf2, 0xb2, 0xf0, + 0x7f, 0xfd, 0xa5, 0x69, 0xb2, 0x7f, 0xa4, 0xd7, + 0x9b, 0xbf, 0xca, 0xd9, 0x22, 0xd3, 0xca, 0x9f, + 0x22, 0x6e, 0x49, 0xbe, 0xf3, 0x38, 0xad, 0x47, + 0xc9, 0xfb, 0x58, 0x72, 0xc2, 0x0e, 0xc8, 0xca, + 0xcf, 0xc5, 0x49, 0xdb, 0xa7, 0xbe, 0x80, 0x69, + 0x58, 0xd8, 0x35, 0x7a, 0xf8, 0x33, 0x13, 0x29, + }, + }, + { + .data_len = 4096, + .key_len = 65, + .mac = { + 0xc4, 0x3b, 0x53, 0xe6, 0x98, 0x8c, 0xed, 0xca, + 0x5e, 0xb3, 0xac, 0xa1, 0x6e, 0xda, 0xb7, 0x25, + 0x94, 0x53, 0xad, 0xf5, 0x72, 0xa5, 0xd6, 0xd3, + 0x35, 0xce, 0x4a, 0xd9, 0x86, 0x9c, 0x8d, 0x28, + 0x4a, 0xfb, 0x2b, 0x04, 0x23, 0xd1, 0xe9, 0x03, + 0xfa, 0xf6, 0x60, 0x31, 0x85, 0x62, 0x48, 0x54, + 0x1e, 0x2d, 0xd2, 0x9f, 0xfb, 0xeb, 0xf6, 0x1c, + 0xc7, 0x72, 0x8d, 0x2a, 0x95, 0x38, 0xf0, 0x12, + }, + }, + { + .data_len = 4128, + .key_len = 66, + .mac = { + 0xf1, 0x1c, 0x03, 0xbb, 0x96, 0x3e, 0xe3, 0x42, + 0x6e, 0xcf, 0x6b, 0xed, 0x72, 0xc3, 0xf8, 0x9d, + 0xb9, 0x65, 0x03, 0xc0, 0xaf, 0x97, 0x3e, 0x8e, + 0x0f, 0x6b, 0x85, 0x59, 0xb7, 0x2b, 0x03, 0x9d, + 0x0c, 0x6b, 0xa6, 0x36, 0x84, 0xb1, 0x79, 0x02, + 0x12, 0x31, 0x66, 0x75, 0xb9, 0xa4, 0x7c, 0x61, + 0xce, 0xbf, 0x6e, 0x13, 0x40, 0xd2, 0x52, 0xc2, + 0xe1, 0x3c, 0xee, 0x58, 0xf6, 0xb4, 0x7e, 0x51, + }, + }, + { + .data_len = 4160, + .key_len = 127, + .mac = { + 0xb9, 0x05, 0x0a, 0xf6, 0x43, 0x3a, 0xc1, 0xf5, + 0xd7, 0x37, 0x8a, 0xaf, 0x05, 0x5f, 0x7e, 0x9f, + 0x59, 0xaf, 0xa7, 0x2b, 0x91, 0x47, 0x91, 0x5b, + 0xeb, 0xca, 0xab, 0x25, 0x66, 0x0c, 0x06, 0x13, + 0x23, 0x29, 0x18, 0x5c, 0x34, 0x66, 0x8b, 0x40, + 0x0e, 0x05, 0x0f, 0x31, 0x97, 0x24, 0x53, 0x60, + 0xba, 0x98, 0x7b, 0x85, 0xd6, 0x8b, 0xef, 0x99, + 0xb6, 0x49, 0x00, 0x1a, 0xb9, 0xef, 0x1b, 0xeb, + }, + }, + { + .data_len = 4224, + .key_len = 128, + .mac = { + 0x66, 0x52, 0x8d, 0xd1, 0x45, 0x3e, 0xde, 0x65, + 0x57, 0xc2, 0x45, 0x08, 0x2c, 0xe2, 0xcd, 0xb0, + 0xe5, 0x26, 0x04, 0x4c, 0x42, 0xac, 0x44, 0x25, + 0x59, 0x63, 0x42, 0x3c, 0xfe, 0x9b, 0xcd, 0xe0, + 0xf6, 0x8d, 0x59, 0x1b, 0x1c, 0xa0, 0xde, 0x67, + 0xb2, 0x3b, 0x4b, 0xec, 0x7b, 0x00, 0x67, 0xa9, + 0x3a, 0x69, 0x2e, 0x6d, 0x59, 0x4e, 0x2b, 0x87, + 0x9a, 0x90, 0x66, 0xab, 0x12, 0xc4, 0x90, 0x23, + }, + }, + { + .data_len = 16384, + .key_len = 129, + .mac = { + 0x9e, 0x4e, 0xad, 0x30, 0xc4, 0xf1, 0x48, 0xf8, + 0x9d, 0x1d, 0x01, 0x4f, 0xa2, 0xf2, 0x12, 0x08, + 0x8f, 0xca, 0xc3, 0x31, 0x6c, 0x51, 0xe1, 0xc4, + 0x46, 0x75, 0x78, 0x83, 0xbe, 0x29, 0x66, 0xb7, + 0x7b, 0x91, 0x09, 0xc5, 0xb3, 0xd7, 0xc7, 0x78, + 0xc3, 0x48, 0x63, 0x2f, 0x15, 0x7b, 0xe3, 0x7c, + 0xe5, 0x45, 0x7b, 0xd3, 0x8f, 0xf6, 0x2b, 0x4b, + 0x93, 0x72, 0xe9, 0x01, 0xf8, 0xe3, 0xfa, 0x2b, + }, + }, +}; diff --git a/lib/crypto/tests/sha512_kunit.c b/lib/crypto/tests/sha512_kunit.c new file mode 100644 index 0000000000000..8a93b86c36657 --- /dev/null +++ b/lib/crypto/tests/sha512_kunit.c @@ -0,0 +1,48 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Copyright 2025 Google LLC + */ +#include 
<crypto/sha2.h> +#include "sha512-testvecs.h" + +#define HASH sha512 +#define HASH_CTX sha512_ctx +#define HASH_SIZE SHA512_DIGEST_SIZE +#define HASH_INIT sha512_init +#define HASH_UPDATE sha512_update +#define HASH_FINAL sha512_final +#define HASH_TESTVECS sha512_testvecs +#define HMAC_KEY hmac_sha512_key +#define HMAC_CTX hmac_sha512_ctx +#define HMAC_SETKEY hmac_sha512_preparekey +#define HMAC_INIT hmac_sha512_init +#define HMAC_UPDATE hmac_sha512_update +#define HMAC_FINAL hmac_sha512_final +#define HMAC hmac_sha512 +#define HMAC_USINGRAWKEY hmac_sha512_usingrawkey +#define HMAC_TESTVECS hmac_sha512_testvecs +#include "hash-test-template.h" + +static struct kunit_case hash_test_cases[] = { + KUNIT_CASE(test_hash_test_vectors), + KUNIT_CASE(test_hash_incremental_updates), + KUNIT_CASE(test_hash_buffer_overruns), + KUNIT_CASE(test_hash_overlaps), + KUNIT_CASE(test_hash_alignment_consistency), + KUNIT_CASE(test_hash_interrupt_context), + KUNIT_CASE(test_hash_ctx_zeroization), + KUNIT_CASE(test_hmac), + KUNIT_CASE(benchmark_hash), + {}, +}; + +static struct kunit_suite hash_test_suite = { + .name = "sha512", + .test_cases = hash_test_cases, + .suite_init = hash_suite_init, + .suite_exit = hash_suite_exit, +}; +kunit_test_suite(hash_test_suite); + +MODULE_DESCRIPTION("KUnit tests and benchmark for SHA-512 and HMAC-SHA512"); +MODULE_LICENSE("GPL"); diff --git a/scripts/crypto/gen-hash-testvecs.py b/scripts/crypto/gen-hash-testvecs.py new file mode 100755 index 0000000000000..9e4baa9201579 --- /dev/null +++ b/scripts/crypto/gen-hash-testvecs.py @@ -0,0 +1,83 @@ +#!/usr/bin/env python3 +# SPDX-License-Identifier: GPL-2.0-or-later +# +# Script that generates test vectors for the given cryptographic hash function. +# +# Copyright 2025 Google LLC + +import hashlib +import hmac +import sys + +DATA_LENS = [0, 1, 2, 3, 16, 32, 48, 49, 63, 64, 65, 127, 128, 129, 256, 511, 513, + 1000, 3333, 4096, 4128, 4160, 4224, 16384] +KEY_LENS = [0, 1, 31, 32, 33, 64, 65, 66, 127, 128, 129, 1000, 1024] + +# Generate the given number of random bytes, using the length itself as the seed +# for a simple random number generator. The test uses the same seed and random +# number generator to reconstruct the data, so it doesn't have to be explicitly +# included in the test vector (as long as all test vectors use random data). 
+def rand_bytes(length): + seed = length + out = [] + for _ in range(length): + seed = (seed * 25214903917 + 11) % 2**48 + out.append((seed >> 16) % 256) + return bytes(out) + +def gen_unkeyed_testvecs(alg): + print('') + print('static const struct {') + print('\tsize_t data_len;') + print(f'\tu8 digest[{alg.upper()}_DIGEST_SIZE];') + print(f'}} {alg}_testvecs[] = {{') + for data_len in DATA_LENS: + data = rand_bytes(data_len) + h = hashlib.new(alg) + h.update(data) + digest = h.digest() + + print('\t{') + print(f'\t\t.data_len = {data_len},') + print('\t\t.digest = {') + for i in range(0, len(digest), 8): + line = '\t\t\t' + ''.join(f'0x{b:02x}, ' for b in digest[i:i+8]) + print(f'{line.rstrip()}') + print('\t\t},') + print('\t},') + print('};') + +def gen_hmac_testvecs(alg): + print('') + print('static const struct {') + print('\tsize_t data_len;') + print('\tsize_t key_len;') + print(f'\tu8 mac[{alg.upper()}_DIGEST_SIZE];') + print(f'}} hmac_{alg}_testvecs[] = {{') + for (i, data_len) in enumerate(DATA_LENS): + key_len = KEY_LENS[i % len(KEY_LENS)] + data = rand_bytes(data_len) + key = rand_bytes(key_len) + mac = hmac.digest(key, data, alg) + + print('\t{') + print(f'\t\t.data_len = {data_len},') + print(f'\t\t.key_len = {key_len},') + print('\t\t.mac = {') + for i in range(0, len(mac), 8): + line = '\t\t\t' + ''.join(f'0x{b:02x}, ' for b in mac[i:i+8]) + print(f'{line.rstrip()}') + print('\t\t},') + print('\t},') + print('};') + +if len(sys.argv) != 2: + sys.stderr.write('Usage: gen-hash-testvecs.py ALGORITHM\n') + sys.stderr.write('ALGORITHM may be any supported by Python hashlib.\n') + sys.stderr.write('Example: gen-hash-testvecs.py sha512\n') + sys.exit(1) + +alg = sys.argv[1] +print(f'/* This file was generated by: {sys.argv[0]} {" ".join(sys.argv[1:])} */') +gen_unkeyed_testvecs(alg) +gen_hmac_testvecs(alg) -- 2.49.0 _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply related [flat|nested] 34+ messages in thread
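A note on the generated test vectors above: they are reproducible outside the kernel, because gen-hash-testvecs.py derives each input message from its length using the same 48-bit linear congruential generator that rand_bytes_seeded_from_len() uses in hash-test-template.h, so only the digest needs to be stored. Below is a minimal sketch of that cross-check in plain Python (standard library only; the hard-coded digest is the data_len = 0 entry copied from sha384-testvecs.h above, and any other entry can be re-derived the same way). Once CONFIG_CRYPTO_LIB_SHA512_KUNIT_TEST is enabled, the in-kernel tests themselves run through the usual KUnit tooling, e.g. tools/testing/kunit/kunit.py.

    #!/usr/bin/env python3
    # Sketch: re-derive an entry of lib/crypto/tests/sha384-testvecs.h.
    # Mirrors rand_bytes() in scripts/crypto/gen-hash-testvecs.py: the input
    # message is reconstructed from its length, so only the digest is stored.
    import hashlib

    def rand_bytes(length):
        # Same 48-bit LCG as gen-hash-testvecs.py / hash-test-template.h,
        # seeded with the message length.
        seed = length
        out = []
        for _ in range(length):
            seed = (seed * 25214903917 + 11) % 2**48
            out.append((seed >> 16) % 256)
        return bytes(out)

    # The data_len = 0 entry is simply SHA-384 of the empty string; the
    # expected digest below is copied from sha384-testvecs.h above.
    expected = bytes.fromhex(
        '38b060a751ac96384cd9327eb1b1e36a'
        '21fdb71114be07434c0cc7bf63f6e1da'
        '274edebfe76f65fbd51ad2f14898b95b')
    assert hashlib.sha384(rand_bytes(0)).digest() == expected

    # Any other entry can be checked the same way, e.g. data_len = 4096;
    # compare the printed digest against the corresponding table entry.
    print(hashlib.sha384(rand_bytes(4096)).hexdigest())
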
* [PATCH 05/16] lib/crypto/sha256: add KUnit tests for SHA-224 and SHA-256 2025-06-11 2:09 [PATCH 00/16] SHA-512 library functions Eric Biggers ` (3 preceding siblings ...) 2025-06-11 2:09 ` [PATCH 04/16] lib/crypto/sha512: add KUnit tests for SHA-384 and SHA-512 Eric Biggers @ 2025-06-11 2:09 ` Eric Biggers 2025-06-11 2:09 ` [PATCH 06/16] crypto: riscv/sha512 - stop depending on sha512_generic_block_fn Eric Biggers ` (10 subsequent siblings) 15 siblings, 0 replies; 34+ messages in thread From: Eric Biggers @ 2025-06-11 2:09 UTC (permalink / raw) To: linux-crypto Cc: linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, Ard Biesheuvel, Jason A . Donenfeld , Linus Torvalds From: Eric Biggers <ebiggers@google.com> Add KUnit tests for the SHA-224 and SHA-256 library functions, using the test template that was added by the previous commit. Signed-off-by: Eric Biggers <ebiggers@google.com> --- lib/crypto/tests/Kconfig | 10 +- lib/crypto/tests/Makefile | 2 + lib/crypto/tests/sha224-testvecs.h | 223 +++++++++++++++++++++++++++++ lib/crypto/tests/sha224_kunit.c | 50 +++++++ lib/crypto/tests/sha256-testvecs.h | 223 +++++++++++++++++++++++++++++ lib/crypto/tests/sha256_kunit.c | 39 +++++ 6 files changed, 546 insertions(+), 1 deletion(-) create mode 100644 lib/crypto/tests/sha224-testvecs.h create mode 100644 lib/crypto/tests/sha224_kunit.c create mode 100644 lib/crypto/tests/sha256-testvecs.h create mode 100644 lib/crypto/tests/sha256_kunit.c diff --git a/lib/crypto/tests/Kconfig b/lib/crypto/tests/Kconfig index 90be320c25bd2..c056238423221 100644 --- a/lib/crypto/tests/Kconfig +++ b/lib/crypto/tests/Kconfig @@ -1,7 +1,15 @@ # SPDX-License-Identifier: GPL-2.0-only +config CRYPTO_LIB_SHA256_KUNIT_TEST + tristate "KUnit tests for SHA-224 and SHA-256" if !KUNIT_ALL_TESTS + depends on KUNIT + default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS + select CRYPTO_LIB_SHA256 + help + KUnit tests for the SHA-224 and SHA-256 cryptographic hash functions. + config CRYPTO_LIB_SHA512_KUNIT_TEST tristate "KUnit tests for SHA-384 and SHA-512" if !KUNIT_ALL_TESTS depends on KUNIT default KUNIT_ALL_TESTS || CRYPTO_SELFTESTS select CRYPTO_LIB_SHA512 @@ -9,8 +17,8 @@ config CRYPTO_LIB_SHA512_KUNIT_TEST KUnit tests for the SHA-384 and SHA-512 cryptographic hash functions and their corresponding HMACs. config CRYPTO_LIB_BENCHMARK bool "Include benchmarks in KUnit tests for cryptographic functions" - depends on CRYPTO_LIB_SHA512_KUNIT_TEST + depends on CRYPTO_LIB_SHA256_KUNIT_TEST || CRYPTO_LIB_SHA512_KUNIT_TEST help Include benchmarks in the KUnit tests for cryptographic functions. 
diff --git a/lib/crypto/tests/Makefile b/lib/crypto/tests/Makefile index 3925dcb6513d8..95bb919aff6b4 100644 --- a/lib/crypto/tests/Makefile +++ b/lib/crypto/tests/Makefile @@ -1,4 +1,6 @@ # SPDX-License-Identifier: GPL-2.0-only +obj-$(CONFIG_CRYPTO_LIB_SHA256_KUNIT_TEST) += sha224_kunit.o +obj-$(CONFIG_CRYPTO_LIB_SHA256_KUNIT_TEST) += sha256_kunit.o obj-$(CONFIG_CRYPTO_LIB_SHA512_KUNIT_TEST) += sha384_kunit.o obj-$(CONFIG_CRYPTO_LIB_SHA512_KUNIT_TEST) += sha512_kunit.o diff --git a/lib/crypto/tests/sha224-testvecs.h b/lib/crypto/tests/sha224-testvecs.h new file mode 100644 index 0000000000000..bbab439490682 --- /dev/null +++ b/lib/crypto/tests/sha224-testvecs.h @@ -0,0 +1,223 @@ +/* This file was generated by: ./scripts/crypto/gen-hash-testvecs.py sha224 */ + +static const struct { + size_t data_len; + u8 digest[SHA224_DIGEST_SIZE]; +} sha224_testvecs[] = { + { + .data_len = 0, + .digest = { + 0xd1, 0x4a, 0x02, 0x8c, 0x2a, 0x3a, 0x2b, 0xc9, + 0x47, 0x61, 0x02, 0xbb, 0x28, 0x82, 0x34, 0xc4, + 0x15, 0xa2, 0xb0, 0x1f, 0x82, 0x8e, 0xa6, 0x2a, + 0xc5, 0xb3, 0xe4, 0x2f, + }, + }, + { + .data_len = 1, + .digest = { + 0xe3, 0x4d, 0x79, 0x17, 0x75, 0x35, 0xdc, 0xd2, + 0x27, 0xc9, 0x9d, 0x0b, 0x90, 0x0f, 0x21, 0x5d, + 0x95, 0xfb, 0x9c, 0x6d, 0xa8, 0xec, 0x19, 0x15, + 0x12, 0xef, 0xf5, 0x0f, + }, + }, + { + .data_len = 2, + .digest = { + 0x81, 0xc7, 0x60, 0x0d, 0x6d, 0x13, 0x75, 0x70, + 0x4b, 0xc0, 0xab, 0xea, 0x04, 0xe3, 0x78, 0x7e, + 0x73, 0xb9, 0x0f, 0xb6, 0xae, 0x90, 0xf3, 0x94, + 0xb2, 0x56, 0xda, 0xc8, + }, + }, + { + .data_len = 3, + .digest = { + 0x24, 0xf0, 0x8c, 0x6e, 0x9d, 0xd6, 0x06, 0x80, + 0x0a, 0x03, 0xee, 0x9b, 0x33, 0xec, 0x83, 0x42, + 0x2c, 0x8b, 0xe7, 0xc7, 0xc6, 0x04, 0xfb, 0xc6, + 0xa3, 0x3a, 0x4d, 0xc9, + }, + }, + { + .data_len = 16, + .digest = { + 0x1c, 0x08, 0xa8, 0x55, 0x8f, 0xc6, 0x0a, 0xea, + 0x2f, 0x1b, 0x54, 0xff, 0x8d, 0xd2, 0xa3, 0xc7, + 0x42, 0xc2, 0x93, 0x3d, 0x73, 0x18, 0x84, 0xba, + 0x75, 0x49, 0x34, 0xfd, + }, + }, + { + .data_len = 32, + .digest = { + 0x45, 0xdd, 0xb5, 0xf0, 0x3c, 0xda, 0xe6, 0xd4, + 0x6c, 0x86, 0x91, 0x29, 0x11, 0x2f, 0x88, 0x7d, + 0xd8, 0x3c, 0xa3, 0xd6, 0xdd, 0x1e, 0xac, 0x98, + 0xff, 0xf0, 0x14, 0x69, + }, + }, + { + .data_len = 48, + .digest = { + 0x0b, 0xfb, 0x71, 0x4c, 0x06, 0x7a, 0xd5, 0x89, + 0x76, 0x0a, 0x43, 0x8b, 0x2b, 0x47, 0x12, 0x56, + 0xa7, 0x64, 0x33, 0x1d, 0xd3, 0x44, 0x17, 0x95, + 0x23, 0xe7, 0x53, 0x01, + }, + }, + { + .data_len = 49, + .digest = { + 0xc4, 0xae, 0x9c, 0x33, 0xd5, 0x1d, 0xf4, 0xa7, + 0xfd, 0xb7, 0xd4, 0x6b, 0xc3, 0xeb, 0xa8, 0xbf, + 0xfb, 0x07, 0x89, 0x4b, 0x07, 0x15, 0x22, 0xec, + 0xe1, 0x45, 0x84, 0xba, + }, + }, + { + .data_len = 63, + .digest = { + 0xad, 0x01, 0x34, 0x2a, 0xe2, 0x3b, 0x58, 0x06, + 0x9f, 0x20, 0xc8, 0xfb, 0xf3, 0x20, 0x82, 0xa6, + 0x9f, 0xee, 0x7a, 0xbe, 0xdf, 0xf3, 0x5d, 0x57, + 0x9b, 0xce, 0x79, 0x96, + }, + }, + { + .data_len = 64, + .digest = { + 0xa7, 0xa6, 0x47, 0xf7, 0xed, 0x2a, 0xe5, 0xe3, + 0xc0, 0x1e, 0x7b, 0x40, 0xe4, 0xf7, 0x40, 0x65, + 0x42, 0xc1, 0x6f, 0x7d, 0x8d, 0x0d, 0x17, 0x4f, + 0xd3, 0xbc, 0x0d, 0x85, + }, + }, + { + .data_len = 65, + .digest = { + 0xc4, 0x9c, 0xb5, 0x6a, 0x01, 0x2d, 0x10, 0xa9, + 0x5f, 0xa4, 0x5a, 0xe1, 0xba, 0x40, 0x12, 0x09, + 0x7b, 0xea, 0xdb, 0xa6, 0x7b, 0xcb, 0x56, 0xf0, + 0xfd, 0x5b, 0xe2, 0xe7, + }, + }, + { + .data_len = 127, + .digest = { + 0x14, 0xda, 0x0e, 0x01, 0xca, 0x78, 0x7d, 0x2d, + 0x85, 0xa3, 0xca, 0x0e, 0x80, 0xf9, 0x95, 0x10, + 0xa1, 0x7b, 0xa5, 0xaa, 0xfc, 0x95, 0x05, 0x08, + 0x53, 0xda, 0x52, 0xee, + }, + }, + { + .data_len = 128, 
+ .digest = { + 0xa5, 0x24, 0xc4, 0x54, 0xe1, 0x50, 0xab, 0xee, + 0x22, 0xc1, 0xa7, 0x27, 0x15, 0x2c, 0x6f, 0xf7, + 0x4c, 0x31, 0xe5, 0x15, 0x25, 0x4e, 0x71, 0xc6, + 0x7e, 0xa0, 0x11, 0x5d, + }, + }, + { + .data_len = 129, + .digest = { + 0x73, 0xd0, 0x8c, 0xce, 0xed, 0xed, 0x9f, 0xaa, + 0x21, 0xaf, 0xa2, 0x08, 0x80, 0x16, 0x15, 0x59, + 0x3f, 0x1d, 0x7f, 0x0a, 0x79, 0x3d, 0x7b, 0x58, + 0xf8, 0xc8, 0x5c, 0x27, + }, + }, + { + .data_len = 256, + .digest = { + 0x31, 0xa7, 0xa1, 0xca, 0x49, 0x72, 0x75, 0xcc, + 0x6e, 0x02, 0x9e, 0xad, 0xea, 0x86, 0x5c, 0x91, + 0x02, 0xe4, 0xc9, 0xf9, 0xd3, 0x9e, 0x74, 0x50, + 0xd8, 0x43, 0x6b, 0x85, + }, + }, + { + .data_len = 511, + .digest = { + 0x40, 0x60, 0x8b, 0xb0, 0x03, 0xa9, 0x75, 0xab, + 0x2d, 0x5b, 0x20, 0x9a, 0x05, 0x72, 0xb7, 0xa8, + 0xce, 0xf2, 0x4f, 0x66, 0x62, 0xe3, 0x7e, 0x24, + 0xd6, 0xe2, 0xea, 0xfa, + }, + }, + { + .data_len = 513, + .digest = { + 0x4f, 0x5f, 0x9f, 0x1e, 0xb3, 0x66, 0x81, 0xdb, + 0x41, 0x5d, 0x65, 0x97, 0x00, 0x8d, 0xdc, 0x62, + 0x03, 0xb0, 0x4d, 0x6b, 0x5c, 0x7f, 0x1e, 0xa0, + 0xfe, 0xfc, 0x0e, 0xb8, + }, + }, + { + .data_len = 1000, + .digest = { + 0x08, 0xa8, 0xa1, 0xc0, 0xd8, 0xf9, 0xb4, 0xaa, + 0x53, 0x22, 0xa1, 0x73, 0x0b, 0x45, 0xa0, 0x20, + 0x72, 0xf3, 0xa9, 0xbc, 0x51, 0xd0, 0x20, 0x79, + 0x69, 0x97, 0xf7, 0xe3, + }, + }, + { + .data_len = 3333, + .digest = { + 0xe8, 0x60, 0x5f, 0xb9, 0x12, 0xe1, 0x6b, 0x24, + 0xc5, 0xe8, 0x43, 0xa9, 0x5c, 0x3f, 0x65, 0xed, + 0xbe, 0xfd, 0x77, 0xf5, 0x47, 0xf2, 0x75, 0x21, + 0xc2, 0x8f, 0x54, 0x8f, + }, + }, + { + .data_len = 4096, + .digest = { + 0xc7, 0xdf, 0x50, 0x16, 0x10, 0x01, 0xb7, 0xdf, + 0x34, 0x1d, 0x18, 0xa2, 0xd5, 0xad, 0x1f, 0x50, + 0xf7, 0xa8, 0x9a, 0x72, 0xfb, 0xfd, 0xd9, 0x1c, + 0x57, 0xac, 0x08, 0x97, + }, + }, + { + .data_len = 4128, + .digest = { + 0xdf, 0x16, 0x76, 0x7f, 0xc0, 0x16, 0x84, 0x63, + 0xac, 0xcf, 0xd0, 0x78, 0x1e, 0x96, 0x67, 0xc5, + 0x3c, 0x06, 0xe9, 0xdb, 0x6e, 0x7d, 0xd0, 0x07, + 0xaa, 0xb1, 0x56, 0xc9, + }, + }, + { + .data_len = 4160, + .digest = { + 0x49, 0xec, 0x5c, 0x18, 0xd7, 0x5b, 0xda, 0xed, + 0x5b, 0x59, 0xde, 0x09, 0x34, 0xb2, 0x49, 0x43, + 0x62, 0x6a, 0x0a, 0x63, 0x6a, 0x51, 0x08, 0x37, + 0x8c, 0xb6, 0x29, 0x84, + }, + }, + { + .data_len = 4224, + .digest = { + 0x3d, 0xc2, 0xc8, 0x43, 0xcf, 0xb7, 0x33, 0x14, + 0x04, 0x93, 0xed, 0xe2, 0xcd, 0x8a, 0x69, 0x5c, + 0x5a, 0xd5, 0x9b, 0x52, 0xdf, 0x48, 0xa7, 0xaa, + 0x28, 0x2b, 0x5d, 0x27, + }, + }, + { + .data_len = 16384, + .digest = { + 0xa7, 0xaf, 0xda, 0x92, 0xe2, 0xe7, 0x61, 0xdc, + 0xa1, 0x32, 0x53, 0x2a, 0x3f, 0x41, 0x5c, 0x7e, + 0xc9, 0x89, 0xda, 0x1c, 0xf7, 0x8d, 0x00, 0xbd, + 0x21, 0x73, 0xb1, 0x69, + }, + }, +}; diff --git a/lib/crypto/tests/sha224_kunit.c b/lib/crypto/tests/sha224_kunit.c new file mode 100644 index 0000000000000..5015861a55112 --- /dev/null +++ b/lib/crypto/tests/sha224_kunit.c @@ -0,0 +1,50 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Copyright 2025 Google LLC + */ +#include <crypto/sha2.h> +#include "sha224-testvecs.h" + +/* TODO: add sha224() to the library itself */ +static inline void sha224(const u8 *data, size_t len, + u8 out[SHA224_DIGEST_SIZE]) +{ + struct sha256_state state; + + sha224_init(&state); + sha256_update(&state, data, len); + sha224_final(&state, out); +} + +#define HASH sha224 +#define HASH_CTX sha256_state +#define HASH_SIZE SHA224_DIGEST_SIZE +#define HASH_INIT sha224_init +#define HASH_UPDATE sha256_update +#define HASH_FINAL sha224_final +#define HASH_TESTVECS sha224_testvecs +/* TODO: add HMAC-SHA224 support to the library, then 
enable the tests for it */ +#include "hash-test-template.h" + +static struct kunit_case hash_test_cases[] = { + KUNIT_CASE(test_hash_test_vectors), + KUNIT_CASE(test_hash_incremental_updates), + KUNIT_CASE(test_hash_buffer_overruns), + KUNIT_CASE(test_hash_overlaps), + KUNIT_CASE(test_hash_alignment_consistency), + KUNIT_CASE(test_hash_interrupt_context), + KUNIT_CASE(test_hash_ctx_zeroization), + KUNIT_CASE(benchmark_hash), + {}, +}; + +static struct kunit_suite hash_test_suite = { + .name = "sha224", + .test_cases = hash_test_cases, + .suite_init = hash_suite_init, + .suite_exit = hash_suite_exit, +}; +kunit_test_suite(hash_test_suite); + +MODULE_DESCRIPTION("KUnit tests and benchmark for SHA-224"); +MODULE_LICENSE("GPL"); diff --git a/lib/crypto/tests/sha256-testvecs.h b/lib/crypto/tests/sha256-testvecs.h new file mode 100644 index 0000000000000..2b0912a101833 --- /dev/null +++ b/lib/crypto/tests/sha256-testvecs.h @@ -0,0 +1,223 @@ +/* This file was generated by: ./scripts/crypto/gen-hash-testvecs.py sha256 */ + +static const struct { + size_t data_len; + u8 digest[SHA256_DIGEST_SIZE]; +} sha256_testvecs[] = { + { + .data_len = 0, + .digest = { + 0xe3, 0xb0, 0xc4, 0x42, 0x98, 0xfc, 0x1c, 0x14, + 0x9a, 0xfb, 0xf4, 0xc8, 0x99, 0x6f, 0xb9, 0x24, + 0x27, 0xae, 0x41, 0xe4, 0x64, 0x9b, 0x93, 0x4c, + 0xa4, 0x95, 0x99, 0x1b, 0x78, 0x52, 0xb8, 0x55, + }, + }, + { + .data_len = 1, + .digest = { + 0x45, 0xf8, 0x3d, 0x17, 0xe1, 0x0b, 0x34, 0xfc, + 0xa0, 0x1e, 0xb8, 0xf4, 0x45, 0x4d, 0xac, 0x34, + 0xa7, 0x77, 0xd9, 0x40, 0x4a, 0x46, 0x4e, 0x73, + 0x2c, 0xf4, 0xab, 0xf2, 0xc0, 0xda, 0x94, 0xc4, + }, + }, + { + .data_len = 2, + .digest = { + 0xf9, 0xd3, 0x52, 0x2f, 0xd5, 0xe0, 0x99, 0x15, + 0x1c, 0xd6, 0xa9, 0x24, 0x4f, 0x40, 0xba, 0x25, + 0x33, 0x43, 0x3e, 0xe1, 0x78, 0x6a, 0xfe, 0x7d, + 0x07, 0xe2, 0x29, 0x7b, 0x6d, 0xc5, 0x73, 0xf5, + }, + }, + { + .data_len = 3, + .digest = { + 0x71, 0xf7, 0xa1, 0xef, 0x69, 0x86, 0x0e, 0xe4, + 0x87, 0x25, 0x58, 0x4c, 0x07, 0x2c, 0xfc, 0x60, + 0xc5, 0xf6, 0xe2, 0x44, 0xaa, 0xfb, 0x41, 0xc7, + 0x2b, 0xc5, 0x01, 0x8c, 0x39, 0x98, 0x30, 0x37, + }, + }, + { + .data_len = 16, + .digest = { + 0x09, 0x95, 0x9a, 0xfa, 0x25, 0x18, 0x86, 0x06, + 0xfe, 0x65, 0xc9, 0x2f, 0x91, 0x15, 0x74, 0x06, + 0x6c, 0xbf, 0xef, 0x7b, 0x0b, 0xc7, 0x2c, 0x05, + 0xdd, 0x17, 0x5d, 0x6f, 0x8a, 0xa5, 0xde, 0x3c, + }, + }, + { + .data_len = 32, + .digest = { + 0xe5, 0x52, 0x3c, 0x85, 0xea, 0x1b, 0xe1, 0x6c, + 0xe0, 0xdb, 0xc3, 0xef, 0xf0, 0xca, 0xc2, 0xe1, + 0xb9, 0x36, 0xa1, 0x28, 0xb6, 0x9e, 0xf5, 0x6e, + 0x70, 0xf7, 0xf9, 0xa7, 0x1c, 0xd3, 0x22, 0xd0, + }, + }, + { + .data_len = 48, + .digest = { + 0x5f, 0x84, 0xd4, 0xd7, 0x2e, 0x80, 0x09, 0xef, + 0x1c, 0x77, 0x7c, 0x25, 0x59, 0x63, 0x88, 0x64, + 0xfd, 0x56, 0xea, 0x23, 0xf4, 0x4f, 0x2e, 0x49, + 0xcd, 0xb4, 0xaa, 0xc7, 0x5c, 0x8b, 0x75, 0x84, + }, + }, + { + .data_len = 49, + .digest = { + 0x22, 0x6e, 0xca, 0xda, 0x00, 0x2d, 0x90, 0x96, + 0x24, 0xf8, 0x55, 0x17, 0x11, 0xda, 0x42, 0x1c, + 0x78, 0x4e, 0xbf, 0xd9, 0xc5, 0xcf, 0xf3, 0xe3, + 0xaf, 0xd3, 0x60, 0xcd, 0xaa, 0xe2, 0xc7, 0x22, + }, + }, + { + .data_len = 63, + .digest = { + 0x97, 0xe2, 0x74, 0xdc, 0x6b, 0xa4, 0xaf, 0x32, + 0x3b, 0x50, 0x6d, 0x80, 0xb5, 0xd3, 0x0c, 0x36, + 0xea, 0x3f, 0x5d, 0x36, 0xa7, 0x49, 0x51, 0xf3, + 0xbd, 0x69, 0x68, 0x60, 0x9b, 0xde, 0x73, 0xf5, + }, + }, + { + .data_len = 64, + .digest = { + 0x13, 0x74, 0xb1, 0x72, 0xd6, 0x53, 0x48, 0x28, + 0x42, 0xd8, 0xba, 0x64, 0x20, 0x60, 0xb6, 0x4c, + 0xc3, 0xac, 0x5d, 0x93, 0x8c, 0xb9, 0xd4, 0xcc, + 0xb4, 0x9f, 0x31, 0x1f, 
0xeb, 0x68, 0x35, 0x58, + }, + }, + { + .data_len = 65, + .digest = { + 0xda, 0xbe, 0xd7, 0xbc, 0x6e, 0xe6, 0x5a, 0x57, + 0xeb, 0x9a, 0x93, 0xaa, 0x66, 0xd0, 0xe0, 0xc4, + 0x29, 0x7f, 0xe9, 0x3b, 0x8e, 0xdf, 0x81, 0x82, + 0x8d, 0x15, 0x11, 0x59, 0x4e, 0x13, 0xa5, 0x58, + }, + }, + { + .data_len = 127, + .digest = { + 0x8c, 0x1a, 0xba, 0x40, 0x66, 0x94, 0x19, 0xf4, + 0x2e, 0xa2, 0xae, 0x94, 0x53, 0x18, 0xb6, 0xfd, + 0xa0, 0x12, 0xc5, 0xef, 0xd5, 0xd6, 0x1b, 0xa1, + 0x37, 0xea, 0x19, 0x44, 0x35, 0x54, 0x85, 0x74, + }, + }, + { + .data_len = 128, + .digest = { + 0xfd, 0x07, 0xd8, 0x77, 0x7d, 0x8b, 0x4f, 0xee, + 0x60, 0x60, 0x26, 0xef, 0x2a, 0x86, 0xfb, 0x67, + 0xeb, 0x31, 0x27, 0x03, 0x99, 0x3c, 0xde, 0xe5, + 0x84, 0x72, 0x71, 0x4c, 0x33, 0x7b, 0x87, 0x13, + }, + }, + { + .data_len = 129, + .digest = { + 0x97, 0xc5, 0x58, 0x38, 0x20, 0xc7, 0xde, 0xfa, + 0xdd, 0x9b, 0x10, 0xc6, 0xc2, 0x2f, 0x94, 0xb5, + 0xc0, 0x33, 0xc0, 0x20, 0x1c, 0x2f, 0xb4, 0x28, + 0x5e, 0x36, 0xfa, 0x8c, 0x24, 0x1c, 0x18, 0x27, + }, + }, + { + .data_len = 256, + .digest = { + 0x62, 0x17, 0x84, 0x26, 0x98, 0x30, 0x57, 0xca, + 0x4f, 0x32, 0xd9, 0x09, 0x09, 0x34, 0xe2, 0xcb, + 0x92, 0x45, 0xd5, 0xeb, 0x8b, 0x9b, 0x3c, 0xd8, + 0xaa, 0xc7, 0xd2, 0x2b, 0x04, 0xab, 0xb3, 0x35, + }, + }, + { + .data_len = 511, + .digest = { + 0x7f, 0xe1, 0x09, 0x78, 0x5d, 0x61, 0xfa, 0x5e, + 0x9b, 0x8c, 0xb1, 0xa9, 0x09, 0x69, 0xb4, 0x24, + 0x54, 0xf2, 0x1c, 0xc9, 0x5f, 0xfb, 0x59, 0x9d, + 0x36, 0x1b, 0x37, 0x44, 0xfc, 0x64, 0x79, 0xb6, + }, + }, + { + .data_len = 513, + .digest = { + 0xd2, 0x3b, 0x3a, 0xe7, 0x13, 0x4f, 0xbd, 0x29, + 0x6b, 0xd2, 0x79, 0x26, 0x6c, 0xd2, 0x22, 0x43, + 0x25, 0x34, 0x9b, 0x9b, 0x22, 0xb0, 0x9f, 0x61, + 0x1d, 0xf4, 0xe2, 0x65, 0x68, 0x95, 0x02, 0x6c, + }, + }, + { + .data_len = 1000, + .digest = { + 0x0c, 0x34, 0x53, 0x3f, 0x0f, 0x8a, 0x39, 0x8d, + 0x63, 0xe4, 0x83, 0x6e, 0x11, 0x7d, 0x14, 0x8e, + 0x5b, 0xf0, 0x4d, 0xca, 0x23, 0x24, 0xb5, 0xd2, + 0x13, 0x3f, 0xd9, 0xde, 0x84, 0x74, 0x26, 0x59, + }, + }, + { + .data_len = 3333, + .digest = { + 0xa8, 0xb8, 0x83, 0x01, 0x1b, 0x38, 0x7a, 0xca, + 0x59, 0xe9, 0x5b, 0x37, 0x6a, 0xab, 0xb4, 0x85, + 0x94, 0x73, 0x26, 0x04, 0xef, 0xed, 0xf4, 0x0d, + 0xd6, 0x09, 0x21, 0x09, 0x96, 0x78, 0xe3, 0xcf, + }, + }, + { + .data_len = 4096, + .digest = { + 0x0b, 0x12, 0x66, 0x96, 0x78, 0x4f, 0x2c, 0x35, + 0xa4, 0xed, 0xbc, 0xb8, 0x30, 0xa6, 0x37, 0x9b, + 0x94, 0x13, 0xae, 0x86, 0xf0, 0x20, 0xfb, 0x49, + 0x8f, 0x5d, 0x20, 0x70, 0x60, 0x2b, 0x02, 0x70, + }, + }, + { + .data_len = 4128, + .digest = { + 0xe4, 0xbd, 0xe4, 0x3b, 0x85, 0xf4, 0x6f, 0x11, + 0xad, 0xc4, 0x79, 0xcc, 0x8e, 0x6d, 0x8b, 0x15, + 0xbb, 0xf9, 0xd3, 0x65, 0xe1, 0xf8, 0x8d, 0x22, + 0x65, 0x66, 0x66, 0xb3, 0xf5, 0xd0, 0x9c, 0xaf, + }, + }, + { + .data_len = 4160, + .digest = { + 0x90, 0x5f, 0xe0, 0xfc, 0xb1, 0xdc, 0x38, 0x1b, + 0xe5, 0x37, 0x3f, 0xd2, 0xcc, 0x48, 0xc4, 0xbc, + 0xb4, 0xfd, 0xf7, 0x71, 0x5f, 0x6b, 0xf4, 0xc4, + 0xa6, 0x08, 0x7e, 0xfc, 0x4e, 0x96, 0xf7, 0xc2, + }, + }, + { + .data_len = 4224, + .digest = { + 0x1f, 0x34, 0x0a, 0x3b, 0xdb, 0xf7, 0x7a, 0xdb, + 0x3d, 0x89, 0x85, 0x0c, 0xd2, 0xf0, 0x0c, 0xbd, + 0x25, 0x39, 0x14, 0x06, 0x28, 0x0f, 0x6b, 0x5f, + 0xe3, 0x1f, 0x2a, 0xb6, 0xca, 0x56, 0x41, 0xa1, + }, + }, + { + .data_len = 16384, + .digest = { + 0x7b, 0x01, 0x2d, 0x84, 0x70, 0xee, 0xe0, 0x77, + 0x3c, 0x17, 0x63, 0xfe, 0x40, 0xd7, 0xfd, 0xa1, + 0x75, 0x90, 0xb8, 0x3e, 0x50, 0xcd, 0x06, 0xb7, + 0xb9, 0xb9, 0x2b, 0x91, 0x4f, 0xba, 0xe4, 0x4c, + }, + }, +}; diff --git a/lib/crypto/tests/sha256_kunit.c 
b/lib/crypto/tests/sha256_kunit.c new file mode 100644 index 0000000000000..4002acfbe66b0 --- /dev/null +++ b/lib/crypto/tests/sha256_kunit.c @@ -0,0 +1,39 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Copyright 2025 Google LLC + */ +#include <crypto/sha2.h> +#include "sha256-testvecs.h" + +#define HASH sha256 +#define HASH_CTX sha256_state +#define HASH_SIZE SHA256_DIGEST_SIZE +#define HASH_INIT sha256_init +#define HASH_UPDATE sha256_update +#define HASH_FINAL sha256_final +#define HASH_TESTVECS sha256_testvecs +/* TODO: add HMAC-SHA256 support to the library, then enable the tests for it */ +#include "hash-test-template.h" + +static struct kunit_case hash_test_cases[] = { + KUNIT_CASE(test_hash_test_vectors), + KUNIT_CASE(test_hash_incremental_updates), + KUNIT_CASE(test_hash_buffer_overruns), + KUNIT_CASE(test_hash_overlaps), + KUNIT_CASE(test_hash_alignment_consistency), + KUNIT_CASE(test_hash_interrupt_context), + KUNIT_CASE(test_hash_ctx_zeroization), + KUNIT_CASE(benchmark_hash), + {}, +}; + +static struct kunit_suite hash_test_suite = { + .name = "sha256", + .test_cases = hash_test_cases, + .suite_init = hash_suite_init, + .suite_exit = hash_suite_exit, +}; +kunit_test_suite(hash_test_suite); + +MODULE_DESCRIPTION("KUnit tests and benchmark for SHA-256"); +MODULE_LICENSE("GPL"); -- 2.49.0 _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply related [flat|nested] 34+ messages in thread
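The sha224_kunit.c and sha256_kunit.c files above define only a handful of HASH_* macros; every test case comes from hash-test-template.h. As a rough sketch, the vector-checking case presumably reduces to something like the following for sha256. This is an illustration, not the real template: "test_buf" stands for a suite-level buffer that hash_suite_init() is assumed to fill with the same deterministic data the generator script used, sized at least as large as the largest data_len, and the includes come from the template context.

	/* Sketch only; assumes the includes and test_buf provided by the template. */
	static void test_hash_test_vectors(struct kunit *test)
	{
		size_t i;

		for (i = 0; i < ARRAY_SIZE(HASH_TESTVECS); i++) {
			u8 actual[HASH_SIZE];

			/* HASH() expands to sha256(); compare against the stored digest. */
			HASH(test_buf, HASH_TESTVECS[i].data_len, actual);
			KUNIT_EXPECT_MEMEQ(test, actual, HASH_TESTVECS[i].digest,
					   HASH_SIZE);
		}
	}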
* [PATCH 06/16] crypto: riscv/sha512 - stop depending on sha512_generic_block_fn 2025-06-11 2:09 [PATCH 00/16] SHA-512 library functions Eric Biggers ` (4 preceding siblings ...) 2025-06-11 2:09 ` [PATCH 05/16] lib/crypto/sha256: add KUnit tests for SHA-224 and SHA-256 Eric Biggers @ 2025-06-11 2:09 ` Eric Biggers 2025-06-11 2:09 ` [PATCH 07/16] crypto: sha512 - replace sha512_generic with wrapper around SHA-512 library Eric Biggers ` (9 subsequent siblings) 15 siblings, 0 replies; 34+ messages in thread From: Eric Biggers @ 2025-06-11 2:09 UTC (permalink / raw) To: linux-crypto Cc: linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, Ard Biesheuvel, Jason A . Donenfeld , Linus Torvalds From: Eric Biggers <ebiggers@google.com> sha512_generic_block_fn() will no longer be available when the SHA-512 support in the old-school crypto API is changed to just wrap the SHA-512 library. Replace the use of sha512_generic_block_fn() in sha512-riscv64-glue.c with temporary code that uses the library's __sha512_update(). This is just a temporary workaround to keep the kernel building and functional at each commit; this code gets superseded when the RISC-V optimized SHA-512 is migrated to lib/crypto/ anyway. Signed-off-by: Eric Biggers <ebiggers@google.com> --- arch/riscv/crypto/Kconfig | 1 + arch/riscv/crypto/sha512-riscv64-glue.c | 8 +++++++- 2 files changed, 8 insertions(+), 1 deletion(-) diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig index cd9b776602f89..53e4e1eacf554 100644 --- a/arch/riscv/crypto/Kconfig +++ b/arch/riscv/crypto/Kconfig @@ -29,10 +29,11 @@ config CRYPTO_GHASH_RISCV64 - Zvkg vector crypto extension config CRYPTO_SHA512_RISCV64 tristate "Hash functions: SHA-384 and SHA-512" depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO + select CRYPTO_LIB_SHA512 select CRYPTO_SHA512 help SHA-384 and SHA-512 secure hash algorithm (FIPS 180) Architecture: riscv64 using: diff --git a/arch/riscv/crypto/sha512-riscv64-glue.c b/arch/riscv/crypto/sha512-riscv64-glue.c index 4634fca78ae24..b3dbc71de07b0 100644 --- a/arch/riscv/crypto/sha512-riscv64-glue.c +++ b/arch/riscv/crypto/sha512-riscv64-glue.c @@ -36,11 +36,17 @@ static void sha512_block(struct sha512_state *state, const u8 *data, if (crypto_simd_usable()) { kernel_vector_begin(); sha512_transform_zvknhb_zvkb(state, data, num_blocks); kernel_vector_end(); } else { - sha512_generic_block_fn(state, data, num_blocks); + struct __sha512_ctx ctx = {}; + + static_assert(sizeof(ctx.state) == sizeof(state->state)); + memcpy(&ctx.state, state->state, sizeof(ctx.state)); + __sha512_update(&ctx, data, + (size_t)num_blocks * SHA512_BLOCK_SIZE); + memcpy(state->state, &ctx.state, sizeof(state->state)); } } static int riscv64_sha512_update(struct shash_desc *desc, const u8 *data, unsigned int len) -- 2.49.0 _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH 07/16] crypto: sha512 - replace sha512_generic with wrapper around SHA-512 library 2025-06-11 2:09 [PATCH 00/16] SHA-512 library functions Eric Biggers ` (5 preceding siblings ...) 2025-06-11 2:09 ` [PATCH 06/16] crypto: riscv/sha512 - stop depending on sha512_generic_block_fn Eric Biggers @ 2025-06-11 2:09 ` Eric Biggers 2025-06-11 2:24 ` Herbert Xu 2025-06-11 2:09 ` [PATCH 08/16] lib/crypto/sha512: migrate arm-optimized SHA-512 code to library Eric Biggers ` (8 subsequent siblings) 15 siblings, 1 reply; 34+ messages in thread From: Eric Biggers @ 2025-06-11 2:09 UTC (permalink / raw) To: linux-crypto Cc: linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, Ard Biesheuvel, Jason A . Donenfeld , Linus Torvalds From: Eric Biggers <ebiggers@google.com> Delete crypto/sha512_generic.c, which provided "generic" SHA-384 and SHA-512 crypto_shash algorithms. Replace it with crypto/sha512.c which provides SHA-384, SHA-512, HMAC-SHA384, and HMAC-SHA512 crypto_shash algorithms using the corresponding library functions. This is a prerequisite for migrating all the arch-optimized SHA-512 code (which is almost 3000 lines) to lib/crypto/ rather than duplicating it. Since the replacement crypto_shash algorithms are implemented using the (potentially arch-optimized) library functions, give them cra_driver_names ending with "-lib" rather than "-generic". Update crypto/testmgr.c and one odd driver to take this change in driver name into account. Besides these cases which are accounted for, there are no known cases where the cra_driver_name was being depended on. This change does mean that the abstract partial block handling code in crypto/shash.c, which got added in 6.16, no longer gets used. But that's fine; the library has to implement the partial block handling anyway, and it's better to do it in the library since the block size and other properties of the algorithm are all fixed at compile time there, resulting in more streamlined code. Signed-off-by: Eric Biggers <ebiggers@google.com> --- crypto/Kconfig | 4 +- crypto/Makefile | 2 +- crypto/sha512.c | 254 ++++++++++++++++++++++++++ crypto/sha512_generic.c | 217 ---------------------- crypto/testmgr.c | 16 ++ drivers/crypto/starfive/jh7110-hash.c | 8 +- include/crypto/sha512_base.h | 3 - 7 files changed, 278 insertions(+), 226 deletions(-) create mode 100644 crypto/sha512.c delete mode 100644 crypto/sha512_generic.c diff --git a/crypto/Kconfig b/crypto/Kconfig index e9fee7818e270..509641ed30ce1 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -983,12 +983,14 @@ config CRYPTO_SHA256 Used by the btrfs filesystem, Ceph, NFS, and SMB. config CRYPTO_SHA512 tristate "SHA-384 and SHA-512" select CRYPTO_HASH + select CRYPTO_LIB_SHA512 help - SHA-384 and SHA-512 secure hash algorithms (FIPS 180, ISO/IEC 10118-3) + SHA-384 and SHA-512 secure hash algorithms (FIPS 180, ISO/IEC + 10118-3), including HMAC support. 
config CRYPTO_SHA3 tristate "SHA-3" select CRYPTO_HASH help diff --git a/crypto/Makefile b/crypto/Makefile index 017df3a2e4bb3..271c77462cec9 100644 --- a/crypto/Makefile +++ b/crypto/Makefile @@ -76,11 +76,11 @@ obj-$(CONFIG_CRYPTO_MD4) += md4.o obj-$(CONFIG_CRYPTO_MD5) += md5.o obj-$(CONFIG_CRYPTO_RMD160) += rmd160.o obj-$(CONFIG_CRYPTO_SHA1) += sha1_generic.o obj-$(CONFIG_CRYPTO_SHA256) += sha256.o CFLAGS_sha256.o += -DARCH=$(ARCH) -obj-$(CONFIG_CRYPTO_SHA512) += sha512_generic.o +obj-$(CONFIG_CRYPTO_SHA512) += sha512.o obj-$(CONFIG_CRYPTO_SHA3) += sha3_generic.o obj-$(CONFIG_CRYPTO_SM3_GENERIC) += sm3_generic.o obj-$(CONFIG_CRYPTO_STREEBOG) += streebog_generic.o obj-$(CONFIG_CRYPTO_WP512) += wp512.o CFLAGS_wp512.o := $(call cc-option,-fno-schedule-insns) # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79149 diff --git a/crypto/sha512.c b/crypto/sha512.c new file mode 100644 index 0000000000000..ad9c8b2ddb129 --- /dev/null +++ b/crypto/sha512.c @@ -0,0 +1,254 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * Crypto API support for SHA-384, SHA-512, HMAC-SHA384, and HMAC-SHA512 + * + * Copyright (c) Jean-Luc Cooke <jlcooke@certainkey.com> + * Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk> + * Copyright (c) 2003 Kyle McMartin <kyle@debian.org> + * Copyright 2025 Google LLC + */ +#include <crypto/internal/hash.h> +#include <crypto/sha2.h> +#include <linux/kernel.h> +#include <linux/module.h> + +/* SHA-384 */ + +const u8 sha384_zero_message_hash[SHA384_DIGEST_SIZE] = { + 0x38, 0xb0, 0x60, 0xa7, 0x51, 0xac, 0x96, 0x38, + 0x4c, 0xd9, 0x32, 0x7e, 0xb1, 0xb1, 0xe3, 0x6a, + 0x21, 0xfd, 0xb7, 0x11, 0x14, 0xbe, 0x07, 0x43, + 0x4c, 0x0c, 0xc7, 0xbf, 0x63, 0xf6, 0xe1, 0xda, + 0x27, 0x4e, 0xde, 0xbf, 0xe7, 0x6f, 0x65, 0xfb, + 0xd5, 0x1a, 0xd2, 0xf1, 0x48, 0x98, 0xb9, 0x5b +}; +EXPORT_SYMBOL_GPL(sha384_zero_message_hash); + +#define SHA384_CTX(desc) ((struct sha384_ctx *)shash_desc_ctx(desc)) + +static int crypto_sha384_init(struct shash_desc *desc) +{ + sha384_init(SHA384_CTX(desc)); + return 0; +} + +static int crypto_sha384_update(struct shash_desc *desc, + const u8 *data, unsigned int len) +{ + sha384_update(SHA384_CTX(desc), data, len); + return 0; +} + +static int crypto_sha384_final(struct shash_desc *desc, u8 *out) +{ + sha384_final(SHA384_CTX(desc), out); + return 0; +} + +static int crypto_sha384_digest(struct shash_desc *desc, + const u8 *data, unsigned int len, u8 *out) +{ + sha384(data, len, out); + return 0; +} + +/* SHA-512 */ + +const u8 sha512_zero_message_hash[SHA512_DIGEST_SIZE] = { + 0xcf, 0x83, 0xe1, 0x35, 0x7e, 0xef, 0xb8, 0xbd, + 0xf1, 0x54, 0x28, 0x50, 0xd6, 0x6d, 0x80, 0x07, + 0xd6, 0x20, 0xe4, 0x05, 0x0b, 0x57, 0x15, 0xdc, + 0x83, 0xf4, 0xa9, 0x21, 0xd3, 0x6c, 0xe9, 0xce, + 0x47, 0xd0, 0xd1, 0x3c, 0x5d, 0x85, 0xf2, 0xb0, + 0xff, 0x83, 0x18, 0xd2, 0x87, 0x7e, 0xec, 0x2f, + 0x63, 0xb9, 0x31, 0xbd, 0x47, 0x41, 0x7a, 0x81, + 0xa5, 0x38, 0x32, 0x7a, 0xf9, 0x27, 0xda, 0x3e +}; +EXPORT_SYMBOL_GPL(sha512_zero_message_hash); + +#define SHA512_CTX(desc) ((struct sha512_ctx *)shash_desc_ctx(desc)) + +static int crypto_sha512_init(struct shash_desc *desc) +{ + sha512_init(SHA512_CTX(desc)); + return 0; +} + +static int crypto_sha512_update(struct shash_desc *desc, + const u8 *data, unsigned int len) +{ + sha512_update(SHA512_CTX(desc), data, len); + return 0; +} + +static int crypto_sha512_final(struct shash_desc *desc, u8 *out) +{ + sha512_final(SHA512_CTX(desc), out); + return 0; +} + +static int crypto_sha512_digest(struct shash_desc *desc, + const u8 *data, unsigned int 
len, u8 *out) +{ + sha512(data, len, out); + return 0; +} + +/* HMAC-SHA384 */ + +#define HMAC_SHA384_KEY(tfm) ((struct hmac_sha384_key *)crypto_shash_ctx(tfm)) +#define HMAC_SHA384_CTX(desc) ((struct hmac_sha384_ctx *)shash_desc_ctx(desc)) + +static int crypto_hmac_sha384_setkey(struct crypto_shash *tfm, + const u8 *raw_key, unsigned int keylen) +{ + hmac_sha384_preparekey(HMAC_SHA384_KEY(tfm), raw_key, keylen); + return 0; +} + +static int crypto_hmac_sha384_init(struct shash_desc *desc) +{ + hmac_sha384_init(HMAC_SHA384_CTX(desc), HMAC_SHA384_KEY(desc->tfm)); + return 0; +} + +static int crypto_hmac_sha384_update(struct shash_desc *desc, + const u8 *data, unsigned int len) +{ + hmac_sha384_update(HMAC_SHA384_CTX(desc), data, len); + return 0; +} + +static int crypto_hmac_sha384_final(struct shash_desc *desc, u8 *out) +{ + hmac_sha384_final(HMAC_SHA384_CTX(desc), out); + return 0; +} + +static int crypto_hmac_sha384_digest(struct shash_desc *desc, + const u8 *data, unsigned int len, + u8 *out) +{ + hmac_sha384(HMAC_SHA384_KEY(desc->tfm), data, len, out); + return 0; +} + +/* HMAC-SHA512 */ + +#define HMAC_SHA512_KEY(tfm) ((struct hmac_sha512_key *)crypto_shash_ctx(tfm)) +#define HMAC_SHA512_CTX(desc) ((struct hmac_sha512_ctx *)shash_desc_ctx(desc)) + +static int crypto_hmac_sha512_setkey(struct crypto_shash *tfm, + const u8 *raw_key, unsigned int keylen) +{ + hmac_sha512_preparekey(HMAC_SHA512_KEY(tfm), raw_key, keylen); + return 0; +} + +static int crypto_hmac_sha512_init(struct shash_desc *desc) +{ + hmac_sha512_init(HMAC_SHA512_CTX(desc), HMAC_SHA512_KEY(desc->tfm)); + return 0; +} + +static int crypto_hmac_sha512_update(struct shash_desc *desc, + const u8 *data, unsigned int len) +{ + hmac_sha512_update(HMAC_SHA512_CTX(desc), data, len); + return 0; +} + +static int crypto_hmac_sha512_final(struct shash_desc *desc, u8 *out) +{ + hmac_sha512_final(HMAC_SHA512_CTX(desc), out); + return 0; +} + +static int crypto_hmac_sha512_digest(struct shash_desc *desc, + const u8 *data, unsigned int len, + u8 *out) +{ + hmac_sha512(HMAC_SHA512_KEY(desc->tfm), data, len, out); + return 0; +} + +/* Algorithm definitions */ + +static struct shash_alg algs[] = { + { + .base.cra_name = "sha384", + .base.cra_driver_name = "sha384-lib", + .base.cra_priority = 100, + .base.cra_blocksize = SHA384_BLOCK_SIZE, + .base.cra_module = THIS_MODULE, + .digestsize = SHA384_DIGEST_SIZE, + .init = crypto_sha384_init, + .update = crypto_sha384_update, + .final = crypto_sha384_final, + .digest = crypto_sha384_digest, + .descsize = sizeof(struct sha384_ctx), + }, + { + .base.cra_name = "sha512", + .base.cra_driver_name = "sha512-lib", + .base.cra_priority = 100, + .base.cra_blocksize = SHA512_BLOCK_SIZE, + .base.cra_module = THIS_MODULE, + .digestsize = SHA512_DIGEST_SIZE, + .init = crypto_sha512_init, + .update = crypto_sha512_update, + .final = crypto_sha512_final, + .digest = crypto_sha512_digest, + .descsize = sizeof(struct sha512_ctx), + }, + { + .base.cra_name = "hmac(sha384)", + .base.cra_driver_name = "hmac-sha384-lib", + .base.cra_priority = 100, + .base.cra_blocksize = SHA384_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct hmac_sha384_key), + .base.cra_module = THIS_MODULE, + .digestsize = SHA384_DIGEST_SIZE, + .setkey = crypto_hmac_sha384_setkey, + .init = crypto_hmac_sha384_init, + .update = crypto_hmac_sha384_update, + .final = crypto_hmac_sha384_final, + .digest = crypto_hmac_sha384_digest, + .descsize = sizeof(struct hmac_sha384_ctx), + }, + { + .base.cra_name = "hmac(sha512)", + .base.cra_driver_name = 
"hmac-sha512-lib", + .base.cra_priority = 100, + .base.cra_blocksize = SHA512_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct hmac_sha512_key), + .base.cra_module = THIS_MODULE, + .digestsize = SHA512_DIGEST_SIZE, + .setkey = crypto_hmac_sha512_setkey, + .init = crypto_hmac_sha512_init, + .update = crypto_hmac_sha512_update, + .final = crypto_hmac_sha512_final, + .digest = crypto_hmac_sha512_digest, + .descsize = sizeof(struct hmac_sha512_ctx), + }, +}; + +static int __init crypto_sha512_mod_init(void) +{ + return crypto_register_shashes(algs, ARRAY_SIZE(algs)); +} +module_init(crypto_sha512_mod_init); + +static void __exit crypto_sha512_mod_exit(void) +{ + crypto_unregister_shashes(algs, ARRAY_SIZE(algs)); +} +module_exit(crypto_sha512_mod_exit); + +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("Crypto API support for SHA-384, SHA-512, HMAC-SHA384, and HMAC-SHA512"); + +MODULE_ALIAS_CRYPTO("sha384"); +MODULE_ALIAS_CRYPTO("sha512"); +MODULE_ALIAS_CRYPTO("hmac(sha384)"); +MODULE_ALIAS_CRYPTO("hmac(sha512)"); diff --git a/crypto/sha512_generic.c b/crypto/sha512_generic.c deleted file mode 100644 index 7368173f545eb..0000000000000 --- a/crypto/sha512_generic.c +++ /dev/null @@ -1,217 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* SHA-512 code by Jean-Luc Cooke <jlcooke@certainkey.com> - * - * Copyright (c) Jean-Luc Cooke <jlcooke@certainkey.com> - * Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk> - * Copyright (c) 2003 Kyle McMartin <kyle@debian.org> - */ -#include <crypto/internal/hash.h> -#include <crypto/sha2.h> -#include <crypto/sha512_base.h> -#include <linux/kernel.h> -#include <linux/module.h> -#include <linux/unaligned.h> - -const u8 sha384_zero_message_hash[SHA384_DIGEST_SIZE] = { - 0x38, 0xb0, 0x60, 0xa7, 0x51, 0xac, 0x96, 0x38, - 0x4c, 0xd9, 0x32, 0x7e, 0xb1, 0xb1, 0xe3, 0x6a, - 0x21, 0xfd, 0xb7, 0x11, 0x14, 0xbe, 0x07, 0x43, - 0x4c, 0x0c, 0xc7, 0xbf, 0x63, 0xf6, 0xe1, 0xda, - 0x27, 0x4e, 0xde, 0xbf, 0xe7, 0x6f, 0x65, 0xfb, - 0xd5, 0x1a, 0xd2, 0xf1, 0x48, 0x98, 0xb9, 0x5b -}; -EXPORT_SYMBOL_GPL(sha384_zero_message_hash); - -const u8 sha512_zero_message_hash[SHA512_DIGEST_SIZE] = { - 0xcf, 0x83, 0xe1, 0x35, 0x7e, 0xef, 0xb8, 0xbd, - 0xf1, 0x54, 0x28, 0x50, 0xd6, 0x6d, 0x80, 0x07, - 0xd6, 0x20, 0xe4, 0x05, 0x0b, 0x57, 0x15, 0xdc, - 0x83, 0xf4, 0xa9, 0x21, 0xd3, 0x6c, 0xe9, 0xce, - 0x47, 0xd0, 0xd1, 0x3c, 0x5d, 0x85, 0xf2, 0xb0, - 0xff, 0x83, 0x18, 0xd2, 0x87, 0x7e, 0xec, 0x2f, - 0x63, 0xb9, 0x31, 0xbd, 0x47, 0x41, 0x7a, 0x81, - 0xa5, 0x38, 0x32, 0x7a, 0xf9, 0x27, 0xda, 0x3e -}; -EXPORT_SYMBOL_GPL(sha512_zero_message_hash); - -static inline u64 Ch(u64 x, u64 y, u64 z) -{ - return z ^ (x & (y ^ z)); -} - -static inline u64 Maj(u64 x, u64 y, u64 z) -{ - return (x & y) | (z & (x | y)); -} - -static const u64 sha512_K[80] = { - 0x428a2f98d728ae22ULL, 0x7137449123ef65cdULL, 0xb5c0fbcfec4d3b2fULL, - 0xe9b5dba58189dbbcULL, 0x3956c25bf348b538ULL, 0x59f111f1b605d019ULL, - 0x923f82a4af194f9bULL, 0xab1c5ed5da6d8118ULL, 0xd807aa98a3030242ULL, - 0x12835b0145706fbeULL, 0x243185be4ee4b28cULL, 0x550c7dc3d5ffb4e2ULL, - 0x72be5d74f27b896fULL, 0x80deb1fe3b1696b1ULL, 0x9bdc06a725c71235ULL, - 0xc19bf174cf692694ULL, 0xe49b69c19ef14ad2ULL, 0xefbe4786384f25e3ULL, - 0x0fc19dc68b8cd5b5ULL, 0x240ca1cc77ac9c65ULL, 0x2de92c6f592b0275ULL, - 0x4a7484aa6ea6e483ULL, 0x5cb0a9dcbd41fbd4ULL, 0x76f988da831153b5ULL, - 0x983e5152ee66dfabULL, 0xa831c66d2db43210ULL, 0xb00327c898fb213fULL, - 0xbf597fc7beef0ee4ULL, 0xc6e00bf33da88fc2ULL, 0xd5a79147930aa725ULL, - 0x06ca6351e003826fULL, 0x142929670a0e6e70ULL, 
0x27b70a8546d22ffcULL, - 0x2e1b21385c26c926ULL, 0x4d2c6dfc5ac42aedULL, 0x53380d139d95b3dfULL, - 0x650a73548baf63deULL, 0x766a0abb3c77b2a8ULL, 0x81c2c92e47edaee6ULL, - 0x92722c851482353bULL, 0xa2bfe8a14cf10364ULL, 0xa81a664bbc423001ULL, - 0xc24b8b70d0f89791ULL, 0xc76c51a30654be30ULL, 0xd192e819d6ef5218ULL, - 0xd69906245565a910ULL, 0xf40e35855771202aULL, 0x106aa07032bbd1b8ULL, - 0x19a4c116b8d2d0c8ULL, 0x1e376c085141ab53ULL, 0x2748774cdf8eeb99ULL, - 0x34b0bcb5e19b48a8ULL, 0x391c0cb3c5c95a63ULL, 0x4ed8aa4ae3418acbULL, - 0x5b9cca4f7763e373ULL, 0x682e6ff3d6b2b8a3ULL, 0x748f82ee5defb2fcULL, - 0x78a5636f43172f60ULL, 0x84c87814a1f0ab72ULL, 0x8cc702081a6439ecULL, - 0x90befffa23631e28ULL, 0xa4506cebde82bde9ULL, 0xbef9a3f7b2c67915ULL, - 0xc67178f2e372532bULL, 0xca273eceea26619cULL, 0xd186b8c721c0c207ULL, - 0xeada7dd6cde0eb1eULL, 0xf57d4f7fee6ed178ULL, 0x06f067aa72176fbaULL, - 0x0a637dc5a2c898a6ULL, 0x113f9804bef90daeULL, 0x1b710b35131c471bULL, - 0x28db77f523047d84ULL, 0x32caab7b40c72493ULL, 0x3c9ebe0a15c9bebcULL, - 0x431d67c49c100d4cULL, 0x4cc5d4becb3e42b6ULL, 0x597f299cfc657e2aULL, - 0x5fcb6fab3ad6faecULL, 0x6c44198c4a475817ULL, -}; - -#define e0(x) (ror64(x,28) ^ ror64(x,34) ^ ror64(x,39)) -#define e1(x) (ror64(x,14) ^ ror64(x,18) ^ ror64(x,41)) -#define s0(x) (ror64(x, 1) ^ ror64(x, 8) ^ (x >> 7)) -#define s1(x) (ror64(x,19) ^ ror64(x,61) ^ (x >> 6)) - -static inline void LOAD_OP(int I, u64 *W, const u8 *input) -{ - W[I] = get_unaligned_be64((__u64 *)input + I); -} - -static inline void BLEND_OP(int I, u64 *W) -{ - W[I & 15] += s1(W[(I-2) & 15]) + W[(I-7) & 15] + s0(W[(I-15) & 15]); -} - -static void -sha512_transform(u64 *state, const u8 *input) -{ - u64 a, b, c, d, e, f, g, h, t1, t2; - - int i; - u64 W[16]; - - /* load the state into our registers */ - a=state[0]; b=state[1]; c=state[2]; d=state[3]; - e=state[4]; f=state[5]; g=state[6]; h=state[7]; - - /* now iterate */ - for (i=0; i<80; i+=8) { - if (!(i & 8)) { - int j; - - if (i < 16) { - /* load the input */ - for (j = 0; j < 16; j++) - LOAD_OP(i + j, W, input); - } else { - for (j = 0; j < 16; j++) { - BLEND_OP(i + j, W); - } - } - } - - t1 = h + e1(e) + Ch(e,f,g) + sha512_K[i ] + W[(i & 15)]; - t2 = e0(a) + Maj(a,b,c); d+=t1; h=t1+t2; - t1 = g + e1(d) + Ch(d,e,f) + sha512_K[i+1] + W[(i & 15) + 1]; - t2 = e0(h) + Maj(h,a,b); c+=t1; g=t1+t2; - t1 = f + e1(c) + Ch(c,d,e) + sha512_K[i+2] + W[(i & 15) + 2]; - t2 = e0(g) + Maj(g,h,a); b+=t1; f=t1+t2; - t1 = e + e1(b) + Ch(b,c,d) + sha512_K[i+3] + W[(i & 15) + 3]; - t2 = e0(f) + Maj(f,g,h); a+=t1; e=t1+t2; - t1 = d + e1(a) + Ch(a,b,c) + sha512_K[i+4] + W[(i & 15) + 4]; - t2 = e0(e) + Maj(e,f,g); h+=t1; d=t1+t2; - t1 = c + e1(h) + Ch(h,a,b) + sha512_K[i+5] + W[(i & 15) + 5]; - t2 = e0(d) + Maj(d,e,f); g+=t1; c=t1+t2; - t1 = b + e1(g) + Ch(g,h,a) + sha512_K[i+6] + W[(i & 15) + 6]; - t2 = e0(c) + Maj(c,d,e); f+=t1; b=t1+t2; - t1 = a + e1(f) + Ch(f,g,h) + sha512_K[i+7] + W[(i & 15) + 7]; - t2 = e0(b) + Maj(b,c,d); e+=t1; a=t1+t2; - } - - state[0] += a; state[1] += b; state[2] += c; state[3] += d; - state[4] += e; state[5] += f; state[6] += g; state[7] += h; -} - -void sha512_generic_block_fn(struct sha512_state *sst, u8 const *src, - int blocks) -{ - do { - sha512_transform(sst->state, src); - src += SHA512_BLOCK_SIZE; - } while (--blocks); -} -EXPORT_SYMBOL_GPL(sha512_generic_block_fn); - -static int crypto_sha512_update(struct shash_desc *desc, const u8 *data, - unsigned int len) -{ - return sha512_base_do_update_blocks(desc, data, len, - sha512_generic_block_fn); -} - -static int 
crypto_sha512_finup(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *hash) -{ - sha512_base_do_finup(desc, data, len, sha512_generic_block_fn); - return sha512_base_finish(desc, hash); -} - -static struct shash_alg sha512_algs[2] = { { - .digestsize = SHA512_DIGEST_SIZE, - .init = sha512_base_init, - .update = crypto_sha512_update, - .finup = crypto_sha512_finup, - .descsize = SHA512_STATE_SIZE, - .base = { - .cra_name = "sha512", - .cra_driver_name = "sha512-generic", - .cra_priority = 100, - .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize = SHA512_BLOCK_SIZE, - .cra_module = THIS_MODULE, - } -}, { - .digestsize = SHA384_DIGEST_SIZE, - .init = sha384_base_init, - .update = crypto_sha512_update, - .finup = crypto_sha512_finup, - .descsize = SHA512_STATE_SIZE, - .base = { - .cra_name = "sha384", - .cra_driver_name = "sha384-generic", - .cra_priority = 100, - .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize = SHA384_BLOCK_SIZE, - .cra_module = THIS_MODULE, - } -} }; - -static int __init sha512_generic_mod_init(void) -{ - return crypto_register_shashes(sha512_algs, ARRAY_SIZE(sha512_algs)); -} - -static void __exit sha512_generic_mod_fini(void) -{ - crypto_unregister_shashes(sha512_algs, ARRAY_SIZE(sha512_algs)); -} - -module_init(sha512_generic_mod_init); -module_exit(sha512_generic_mod_fini); - -MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("SHA-512 and SHA-384 Secure Hash Algorithms"); - -MODULE_ALIAS_CRYPTO("sha384"); -MODULE_ALIAS_CRYPTO("sha384-generic"); -MODULE_ALIAS_CRYPTO("sha512"); -MODULE_ALIAS_CRYPTO("sha512-generic"); diff --git a/crypto/testmgr.c b/crypto/testmgr.c index 72005074a5c26..9b4235adcb036 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -4304,59 +4304,69 @@ static const struct alg_test_desc alg_test_descs[] = { .alg = "authenc(hmac(sha256),rfc3686(ctr(aes)))", .test = alg_test_null, .fips_allowed = 1, }, { .alg = "authenc(hmac(sha384),cbc(des))", + .generic_driver = "authenc(hmac-sha384-lib,cbc(des-generic))", .test = alg_test_aead, .suite = { .aead = __VECS(hmac_sha384_des_cbc_tv_temp) } }, { .alg = "authenc(hmac(sha384),cbc(des3_ede))", + .generic_driver = "authenc(hmac-sha384-lib,cbc(des3_ede-generic))", .test = alg_test_aead, .suite = { .aead = __VECS(hmac_sha384_des3_ede_cbc_tv_temp) } }, { .alg = "authenc(hmac(sha384),ctr(aes))", + .generic_driver = "authenc(hmac-sha384-lib,ctr(aes-generic))", .test = alg_test_null, .fips_allowed = 1, }, { .alg = "authenc(hmac(sha384),cts(cbc(aes)))", + .generic_driver = "authenc(hmac-sha384-lib,cts(cbc(aes-generic)))", .test = alg_test_aead, .suite = { .aead = __VECS(krb5_test_aes256_cts_hmac_sha384_192) } }, { .alg = "authenc(hmac(sha384),rfc3686(ctr(aes)))", + .generic_driver = "authenc(hmac-sha384-lib,rfc3686(ctr(aes-generic)))", .test = alg_test_null, .fips_allowed = 1, }, { .alg = "authenc(hmac(sha512),cbc(aes))", + .generic_driver = "authenc(hmac-sha512-lib,cbc(aes-generic))", .fips_allowed = 1, .test = alg_test_aead, .suite = { .aead = __VECS(hmac_sha512_aes_cbc_tv_temp) } }, { .alg = "authenc(hmac(sha512),cbc(des))", + .generic_driver = "authenc(hmac-sha512-lib,cbc(des-generic))", .test = alg_test_aead, .suite = { .aead = __VECS(hmac_sha512_des_cbc_tv_temp) } }, { .alg = "authenc(hmac(sha512),cbc(des3_ede))", + .generic_driver = "authenc(hmac-sha512-lib,cbc(des3_ede-generic))", .test = alg_test_aead, .suite = { .aead = __VECS(hmac_sha512_des3_ede_cbc_tv_temp) } }, { .alg = "authenc(hmac(sha512),ctr(aes))", + 
.generic_driver = "authenc(hmac-sha512-lib,ctr(aes-generic))", .test = alg_test_null, .fips_allowed = 1, }, { .alg = "authenc(hmac(sha512),rfc3686(ctr(aes)))", + .generic_driver = "authenc(hmac-sha512-lib,rfc3686(ctr(aes-generic)))", .test = alg_test_null, .fips_allowed = 1, }, { .alg = "blake2b-160", .test = alg_test_hash, @@ -5146,17 +5156,19 @@ static const struct alg_test_desc alg_test_descs[] = { .suite = { .hash = __VECS(hmac_sha3_512_tv_template) } }, { .alg = "hmac(sha384)", + .generic_driver = "hmac-sha384-lib", .test = alg_test_hash, .fips_allowed = 1, .suite = { .hash = __VECS(hmac_sha384_tv_template) } }, { .alg = "hmac(sha512)", + .generic_driver = "hmac-sha512-lib", .test = alg_test_hash, .fips_allowed = 1, .suite = { .hash = __VECS(hmac_sha512_tv_template) } @@ -5338,14 +5350,16 @@ static const struct alg_test_desc alg_test_descs[] = { .alg = "pkcs1(rsa,sha3-512)", .test = alg_test_null, .fips_allowed = 1, }, { .alg = "pkcs1(rsa,sha384)", + .generic_driver = "pkcs1(rsa,sha384-lib)", .test = alg_test_null, .fips_allowed = 1, }, { .alg = "pkcs1(rsa,sha512)", + .generic_driver = "pkcs1(rsa,sha512-lib)", .test = alg_test_null, .fips_allowed = 1, }, { .alg = "pkcs1pad(rsa)", .test = alg_test_null, @@ -5482,17 +5496,19 @@ static const struct alg_test_desc alg_test_descs[] = { .suite = { .hash = __VECS(sha3_512_tv_template) } }, { .alg = "sha384", + .generic_driver = "sha384-lib", .test = alg_test_hash, .fips_allowed = 1, .suite = { .hash = __VECS(sha384_tv_template) } }, { .alg = "sha512", + .generic_driver = "sha512-lib", .test = alg_test_hash, .fips_allowed = 1, .suite = { .hash = __VECS(sha512_tv_template) } diff --git a/drivers/crypto/starfive/jh7110-hash.c b/drivers/crypto/starfive/jh7110-hash.c index 2c60a1047bc39..4abbff07412ff 100644 --- a/drivers/crypto/starfive/jh7110-hash.c +++ b/drivers/crypto/starfive/jh7110-hash.c @@ -503,17 +503,17 @@ static int starfive_sha256_init_tfm(struct crypto_ahash *hash) STARFIVE_HASH_SHA256, 0); } static int starfive_sha384_init_tfm(struct crypto_ahash *hash) { - return starfive_hash_init_tfm(hash, "sha384-generic", + return starfive_hash_init_tfm(hash, "sha384-lib", STARFIVE_HASH_SHA384, 0); } static int starfive_sha512_init_tfm(struct crypto_ahash *hash) { - return starfive_hash_init_tfm(hash, "sha512-generic", + return starfive_hash_init_tfm(hash, "sha512-lib", STARFIVE_HASH_SHA512, 0); } static int starfive_sm3_init_tfm(struct crypto_ahash *hash) { @@ -533,17 +533,17 @@ static int starfive_hmac_sha256_init_tfm(struct crypto_ahash *hash) STARFIVE_HASH_SHA256, 1); } static int starfive_hmac_sha384_init_tfm(struct crypto_ahash *hash) { - return starfive_hash_init_tfm(hash, "hmac(sha384-generic)", + return starfive_hash_init_tfm(hash, "hmac-sha384-lib", STARFIVE_HASH_SHA384, 1); } static int starfive_hmac_sha512_init_tfm(struct crypto_ahash *hash) { - return starfive_hash_init_tfm(hash, "hmac(sha512-generic)", + return starfive_hash_init_tfm(hash, "hmac-sha512-lib", STARFIVE_HASH_SHA512, 1); } static int starfive_hmac_sm3_init_tfm(struct crypto_ahash *hash) { diff --git a/include/crypto/sha512_base.h b/include/crypto/sha512_base.h index aa814bab442d4..d1361b3eb70b0 100644 --- a/include/crypto/sha512_base.h +++ b/include/crypto/sha512_base.h @@ -112,9 +112,6 @@ static inline int sha512_base_finish(struct shash_desc *desc, u8 *out) for (i = 0; digest_size > 0; i++, digest_size -= sizeof(__be64)) put_unaligned_be64(sctx->state[i], digest++); return 0; } -void sha512_generic_block_fn(struct sha512_state *sst, u8 const *src, - int blocks); - 
#endif /* _CRYPTO_SHA512_BASE_H */ -- 2.49.0 _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply related [flat|nested] 34+ messages in thread
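For contrast with the shash glue in this patch, code that wants SHA-512 or HMAC-SHA512 directly can now call the library without going through the crypto_shash layer at all. A hedged usage sketch follows, built only from the <crypto/sha2.h> functions the wrapper above already calls; the surrounding function and variable names are invented.

	#include <crypto/sha2.h>

	/* Usage sketch; only the <crypto/sha2.h> calls are real API. */
	static void demo_sha512_lib(const u8 *msg, size_t msg_len,
				    const u8 *raw_key, size_t raw_key_len)
	{
		struct hmac_sha512_key key;
		u8 digest[SHA512_DIGEST_SIZE];
		u8 mac[SHA512_DIGEST_SIZE];

		/* One-shot hash, as used by crypto_sha512_digest() above. */
		sha512(msg, msg_len, digest);

		/* One-shot HMAC, as used by crypto_hmac_sha512_setkey()/_digest() above. */
		hmac_sha512_preparekey(&key, raw_key, raw_key_len);
		hmac_sha512(&key, msg, msg_len, mac);
	}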
* Re: [PATCH 07/16] crypto: sha512 - replace sha512_generic with wrapper around SHA-512 library 2025-06-11 2:09 ` [PATCH 07/16] crypto: sha512 - replace sha512_generic with wrapper around SHA-512 library Eric Biggers @ 2025-06-11 2:24 ` Herbert Xu 2025-06-11 3:39 ` Eric Biggers 0 siblings, 1 reply; 34+ messages in thread From: Herbert Xu @ 2025-06-11 2:24 UTC (permalink / raw) To: Eric Biggers Cc: linux-crypto, linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, ardb, Jason, torvalds Eric Biggers <ebiggers@kernel.org> wrote: > > + { > + .base.cra_name = "sha512", > + .base.cra_driver_name = "sha512-lib", > + .base.cra_priority = 100, > + .base.cra_blocksize = SHA512_BLOCK_SIZE, > + .base.cra_module = THIS_MODULE, > + .digestsize = SHA512_DIGEST_SIZE, > + .init = crypto_sha512_init, > + .update = crypto_sha512_update, > + .final = crypto_sha512_final, > + .digest = crypto_sha512_digest, > + .descsize = sizeof(struct sha512_ctx), > + }, This changes the export format which breaks fallback support for ahash drivers. You need to retain the existing export format. Cheers, -- Email: Herbert Xu <herbert@gondor.apana.org.au> Home Page: http://gondor.apana.org.au/~herbert/ PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply [flat|nested] 34+ messages in thread
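The fallback pattern referred to here is exporting partial state out of a hardware queue and importing it into a software algorithm to finish the computation, which only works if both sides agree on the layout of the exported blob. A rough sketch of that pattern is below; the helper and its arguments are hypothetical, while crypto_ahash_export(), crypto_shash_import(), crypto_shash_finup(), and HASH_MAX_STATESIZE are real API from <crypto/hash.h>.

	#include <crypto/hash.h>

	/*
	 * Hypothetical helper illustrating the export/import fallback: hand off a
	 * partially hashed request from a hardware ahash to a software shash.
	 * This breaks if the two implementations use different export formats.
	 */
	static int finish_hash_in_software(struct ahash_request *hw_req,
					   struct shash_desc *sw_desc,
					   const u8 *tail, unsigned int tail_len,
					   u8 *out)
	{
		u8 state[HASH_MAX_STATESIZE];
		int err;

		err = crypto_ahash_export(hw_req, state);
		if (err)
			return err;
		err = crypto_shash_import(sw_desc, state);
		if (err)
			return err;
		return crypto_shash_finup(sw_desc, tail, tail_len, out);
	}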
* Re: [PATCH 07/16] crypto: sha512 - replace sha512_generic with wrapper around SHA-512 library 2025-06-11 2:24 ` Herbert Xu @ 2025-06-11 3:39 ` Eric Biggers 2025-06-11 3:46 ` Herbert Xu 0 siblings, 1 reply; 34+ messages in thread From: Eric Biggers @ 2025-06-11 3:39 UTC (permalink / raw) To: Herbert Xu Cc: linux-crypto, linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, ardb, Jason, torvalds On Wed, Jun 11, 2025 at 10:24:41AM +0800, Herbert Xu wrote: > Eric Biggers <ebiggers@kernel.org> wrote: > > > > + { > > + .base.cra_name = "sha512", > > + .base.cra_driver_name = "sha512-lib", > > + .base.cra_priority = 100, > > + .base.cra_blocksize = SHA512_BLOCK_SIZE, > > + .base.cra_module = THIS_MODULE, > > + .digestsize = SHA512_DIGEST_SIZE, > > + .init = crypto_sha512_init, > > + .update = crypto_sha512_update, > > + .final = crypto_sha512_final, > > + .digest = crypto_sha512_digest, > > + .descsize = sizeof(struct sha512_ctx), > > + }, > > This changes the export format which breaks fallback support > for ahash drivers. > > You need to retain the existing export format. Do you have a concrete example (meaning, a specific driver) where this actually matters? Historically, export and import have always had to be paired for the same transformation object, i.e. import was called only with the output of export. There is not, and never has been, any test that checks otherwise. This seems like a brand new "requirement" that you've made up unnecessarily. It also makes much more sense for the export format to simply be the struct used by the library (e.g. sha512_ctx), not some undocumented struct generated by pointer arithmetic. And drivers should just use the library as their fallback, or else just do what they did before, since in that case they must not have been depending on a particular format. I'll add export and import functions if you insist, but it seems pointless. Could you at least provide proper definitions for the legacy structs so that I don't have to do pointer arithmetic to generate them? - Eric _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 07/16] crypto: sha512 - replace sha512_generic with wrapper around SHA-512 library 2025-06-11 3:39 ` Eric Biggers @ 2025-06-11 3:46 ` Herbert Xu 2025-06-11 3:58 ` Eric Biggers 0 siblings, 1 reply; 34+ messages in thread From: Herbert Xu @ 2025-06-11 3:46 UTC (permalink / raw) To: Eric Biggers Cc: linux-crypto, linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, ardb, Jason, torvalds On Tue, Jun 10, 2025 at 08:39:57PM -0700, Eric Biggers wrote: > > Do you have a concrete example (meaning, a specific driver) where this actually > matters? Historically, export and import have always had to be paired for the > same transformation object, i.e. import was called only with the output of > export. There is, and has never been, any test that tests otherwise. This > seems like a brand new "requirement" that you've made up unnecessarily. It's not just drivers that may be using fallbacks, the ahash API code itself now relies on this to provide fallbacks for cases that drivers can't handle, such as linear addresses. I did add the testing for it, which revealed a few problems with s390 so it was reverted for 6.16. But I will be adding it back after the s390 issues have been resolved. > I'll add export and import functions if you insist, but it seems pointless. > > Could you at least provide proper definitions for the legacy structs so that I > don't have to do pointer arithmetic to generate them? Just expose the sha512 block functions and use them as is. There is no need to do the export/import dance. Cheers, -- Email: Herbert Xu <herbert@gondor.apana.org.au> Home Page: http://gondor.apana.org.au/~herbert/ PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 07/16] crypto: sha512 - replace sha512_generic with wrapper around SHA-512 library 2025-06-11 3:46 ` Herbert Xu @ 2025-06-11 3:58 ` Eric Biggers 2025-06-13 5:36 ` Eric Biggers 0 siblings, 1 reply; 34+ messages in thread From: Eric Biggers @ 2025-06-11 3:58 UTC (permalink / raw) To: Herbert Xu Cc: linux-crypto, linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, ardb, Jason, torvalds On Wed, Jun 11, 2025 at 11:46:47AM +0800, Herbert Xu wrote: > On Tue, Jun 10, 2025 at 08:39:57PM -0700, Eric Biggers wrote: > > > > Do you have a concrete example (meaning, a specific driver) where this actually > > matters? Historically, export and import have always had to be paired for the > > same transformation object, i.e. import was called only with the output of > > export. There is, and has never been, any test that tests otherwise. This > > seems like a brand new "requirement" that you've made up unnecessarily. > > It's not just drivers that may be using fallbacks, the ahash API > code itself now relies on this to provide fallbacks for cases that > drivers can't handle, such as linear addresses. > > I did add the testing for it, which revealed a few problems with > s390 so it was reverted for 6.16. But I will be adding it back > after the s390 issues have been resolved. Okay, so it sounds like in practice this is specific to ahash_do_req_chain() which you recently added. I'm not sure what it's meant to be doing. > > I'll add export and import functions if you insist, but it seems pointless. > > > > Could you at least provide proper definitions for the legacy structs so that I > > don't have to do pointer arithmetic to generate them? > > Just expose the sha512 block functions and use them as is. There > is no need to do the export/import dance. We're not going to support direct access to the SHA-512 compression function as part of the library API. It's just unnecessary and error-prone. crypto/ will just use the same well-documented and well-tested public API as everyone else. - Eric _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 07/16] crypto: sha512 - replace sha512_generic with wrapper around SHA-512 library 2025-06-11 3:58 ` Eric Biggers @ 2025-06-13 5:36 ` Eric Biggers 2025-06-13 5:38 ` Herbert Xu 0 siblings, 1 reply; 34+ messages in thread From: Eric Biggers @ 2025-06-13 5:36 UTC (permalink / raw) To: Herbert Xu Cc: linux-crypto, linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, ardb, Jason, torvalds On Tue, Jun 10, 2025 at 08:58:42PM -0700, Eric Biggers wrote: > On Wed, Jun 11, 2025 at 11:46:47AM +0800, Herbert Xu wrote: > > On Tue, Jun 10, 2025 at 08:39:57PM -0700, Eric Biggers wrote: > > > > > > Do you have a concrete example (meaning, a specific driver) where this actually > > > matters? Historically, export and import have always had to be paired for the > > > same transformation object, i.e. import was called only with the output of > > > export. There is, and has never been, any test that tests otherwise. This > > > seems like a brand new "requirement" that you've made up unnecessarily. > > > > It's not just drivers that may be using fallbacks, the ahash API > > code itself now relies on this to provide fallbacks for cases that > > drivers can't handle, such as linear addresses. > > > > I did add the testing for it, which revealed a few problems with > > s390 so it was reverted for 6.16. But I will be adding it back > > after the s390 issues have been resolved. > > Okay, so it sounds like in practice this is specific to ahash_do_req_chain() > which you recently added. I'm not sure what it's meant to be doing. You do know that most of the sha512 asynchronous hash drivers use custom state formats and not your new one, right? So your code in ahash_do_req_chain() is broken for most asynchronous hash drivers anyway. - Eric _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 07/16] crypto: sha512 - replace sha512_generic with wrapper around SHA-512 library 2025-06-13 5:36 ` Eric Biggers @ 2025-06-13 5:38 ` Herbert Xu 2025-06-13 5:54 ` Eric Biggers 0 siblings, 1 reply; 34+ messages in thread From: Herbert Xu @ 2025-06-13 5:38 UTC (permalink / raw) To: Eric Biggers Cc: linux-crypto, linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, ardb, Jason, torvalds On Thu, Jun 12, 2025 at 10:36:24PM -0700, Eric Biggers wrote: > > You do know that most of the sha512 asynchronous hash drivers use custom state > formats and not your new one, right? So your code in ahash_do_req_chain() is > broken for most asynchronous hash drivers anyway. Every driver needs to be converted by hand. Once a driver has been converted it'll be marked as block-only which activates the fallback path in ahash. Cheers, -- Email: Herbert Xu <herbert@gondor.apana.org.au> Home Page: http://gondor.apana.org.au/~herbert/ PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 07/16] crypto: sha512 - replace sha512_generic with wrapper around SHA-512 library 2025-06-13 5:38 ` Herbert Xu @ 2025-06-13 5:54 ` Eric Biggers 2025-06-13 7:38 ` Ard Biesheuvel 2025-06-13 8:51 ` [PATCH] crypto: ahash - Stop legacy tfms from using the set_virt fallback path Herbert Xu 0 siblings, 2 replies; 34+ messages in thread From: Eric Biggers @ 2025-06-13 5:54 UTC (permalink / raw) To: Herbert Xu Cc: linux-crypto, linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, ardb, Jason, torvalds On Fri, Jun 13, 2025 at 01:38:59PM +0800, Herbert Xu wrote: > On Thu, Jun 12, 2025 at 10:36:24PM -0700, Eric Biggers wrote: > > > > You do know that most of the sha512 asynchronous hash drivers use custom state > > formats and not your new one, right? So your code in ahash_do_req_chain() is > > broken for most asynchronous hash drivers anyway. > > Every driver needs to be converted by hand. Once a driver has > been converted it'll be marked as block-only which activates > the fallback path in ahash. Actually, crypto_ahash::base::fb is initialized if CRYPTO_ALG_NEED_FALLBACK, which many of the drivers already set. Then crypto_ahash_update() calls ahash_do_req_chain() if the algorithm does *not* have CRYPTO_AHASH_ALG_BLOCK_ONLY set. Which then exports the driver's custom state and tries to import it into the fallback. As far as I can tell, it's just broken for most of the existing drivers. - Eric _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 07/16] crypto: sha512 - replace sha512_generic with wrapper around SHA-512 library 2025-06-13 5:54 ` Eric Biggers @ 2025-06-13 7:38 ` Ard Biesheuvel 2025-06-13 8:39 ` Herbert Xu 2025-06-13 8:51 ` [PATCH] crypto: ahash - Stop legacy tfms from using the set_virt fallback path Herbert Xu 1 sibling, 1 reply; 34+ messages in thread From: Ard Biesheuvel @ 2025-06-13 7:38 UTC (permalink / raw) To: Eric Biggers Cc: Herbert Xu, linux-crypto, linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, Jason, torvalds On Fri, 13 Jun 2025 at 07:55, Eric Biggers <ebiggers@kernel.org> wrote: > > On Fri, Jun 13, 2025 at 01:38:59PM +0800, Herbert Xu wrote: > > On Thu, Jun 12, 2025 at 10:36:24PM -0700, Eric Biggers wrote: > > > > > > You do know that most of the sha512 asynchronous hash drivers use custom state > > > formats and not your new one, right? So your code in ahash_do_req_chain() is > > > broken for most asynchronous hash drivers anyway. > > > > Every driver needs to be converted by hand. Once a driver has > > been converted it'll be marked as block-only which activates > > the fallback path in ahash. > Perhaps I am just slow, but could you please explain again what the point is of all these changes? Where is h/w accelerated ahash being used to the extent that it justifies changing all this existing code to accommodate it? _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 07/16] crypto: sha512 - replace sha512_generic with wrapper around SHA-512 library 2025-06-13 7:38 ` Ard Biesheuvel @ 2025-06-13 8:39 ` Herbert Xu 2025-06-13 14:51 ` Eric Biggers 2025-06-13 16:35 ` Linus Torvalds 0 siblings, 2 replies; 34+ messages in thread From: Herbert Xu @ 2025-06-13 8:39 UTC (permalink / raw) To: Ard Biesheuvel Cc: Eric Biggers, linux-crypto, linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, Jason, torvalds On Fri, Jun 13, 2025 at 09:38:11AM +0200, Ard Biesheuvel wrote: > > Perhaps I am just slow, but could you please explain again what the > point is of all these changes? > > Where is h/w accelerated ahash being used to the extent that it > justifies changing all this existing code to accommodate it? There are two separate changes. First of all the export format is being made consistent so that any hardware hash can switch over to a software fallback after it has started, e.g., in the event of a memory allocation failure. The partial block API handling on the other hand is about simplifying the drivers so that they are less error-prone. Cheers, -- Email: Herbert Xu <herbert@gondor.apana.org.au> Home Page: http://gondor.apana.org.au/~herbert/ PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply [flat|nested] 34+ messages in thread
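The first of those two changes can be pictured with the generic ahash export/import calls: pull the partial state out of a stalled hardware request and feed it into a software tfm, then finish there. The sketch below is illustrative only, built from the existing ahash API rather than taken from this series, and it only does the right thing when both implementations agree on the exported layout, which is exactly what a consistent export format is meant to guarantee.

#include <crypto/hash.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

static int sha512_handoff_to_sw(struct ahash_request *hw_req,
				struct crypto_ahash *sw_tfm,
				struct scatterlist *rest,
				unsigned int rest_len, u8 *digest)
{
	DECLARE_CRYPTO_WAIT(wait);
	struct ahash_request *sw_req;
	void *state;
	int err;

	state = kmalloc(crypto_ahash_statesize(crypto_ahash_reqtfm(hw_req)),
			GFP_KERNEL);
	if (!state)
		return -ENOMEM;

	/* Pull the partial state out of the (stalled) hardware request... */
	err = crypto_ahash_export(hw_req, state);
	if (err)
		goto out_free_state;

	/* ...and continue on the CPU with a software implementation. */
	sw_req = ahash_request_alloc(sw_tfm, GFP_KERNEL);
	if (!sw_req) {
		err = -ENOMEM;
		goto out_free_state;
	}
	ahash_request_set_callback(sw_req, CRYPTO_TFM_REQ_MAY_SLEEP,
				   crypto_req_done, &wait);
	ahash_request_set_crypt(sw_req, rest, digest, rest_len);

	err = crypto_ahash_import(sw_req, state);
	if (!err)
		err = crypto_wait_req(crypto_ahash_finup(sw_req), &wait);

	ahash_request_free(sw_req);
out_free_state:
	kfree(state);
	return err;
}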
* Re: [PATCH 07/16] crypto: sha512 - replace sha512_generic with wrapper around SHA-512 library 2025-06-13 8:39 ` Herbert Xu @ 2025-06-13 14:51 ` Eric Biggers 2025-06-13 16:35 ` Linus Torvalds 1 sibling, 0 replies; 34+ messages in thread From: Eric Biggers @ 2025-06-13 14:51 UTC (permalink / raw) To: Herbert Xu Cc: Ard Biesheuvel, linux-crypto, linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, Jason, torvalds On Fri, Jun 13, 2025 at 04:39:10PM +0800, Herbert Xu wrote: > On Fri, Jun 13, 2025 at 09:38:11AM +0200, Ard Biesheuvel wrote: > > > > Perhaps I am just slow, but could you please explain again what the > > point is of all these changes? > > > > Where is h/w accelerated ahash being used to the extent that it > > justifies changing all this existing code to accommodate it? > > There are two separate changes. > > First of all the export format is being made consistent so that > any hardware hash can switch over to a software fallback after > it has started, e.g., in the event of a memory allocation failure. > > The partial block API handling on the other hand is about simplifying > the drivers so that they are less error-prone. Is it perhaps time to reconsider your plan, given that it's causing problems for the librarification effort which is much more useful, and also most of the legacy hardware offload drivers seem to be incompatible with it too? - Eric _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH 07/16] crypto: sha512 - replace sha512_generic with wrapper around SHA-512 library 2025-06-13 8:39 ` Herbert Xu 2025-06-13 14:51 ` Eric Biggers @ 2025-06-13 16:35 ` Linus Torvalds 1 sibling, 0 replies; 34+ messages in thread From: Linus Torvalds @ 2025-06-13 16:35 UTC (permalink / raw) To: Herbert Xu Cc: Ard Biesheuvel, Eric Biggers, linux-crypto, linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, Jason On Fri, 13 Jun 2025 at 01:39, Herbert Xu <herbert@gondor.apana.org.au> wrote: > > First of all the export format is being made consistent so that > any hardware hash can switch over to a software fallback after > it has started, e.g., in the event of a memory allocation failure. Can we please instead aim to *simplify* the crypto thing? Just say that hw accelerators that have this kind of issue shouldn't be used. At all. And certainly not be catered to by generic code. The whole hw acceleration is very dubious to begin with unless it's directly tied to the source (or destination) of the data in the first place, so that there isn't extra data movement. And if there are any software fallbacks, that "dubious to begin with" pretty much becomes "entirely pointless". If the point is that there are existing stupid hw drivers that already do that fallback internally, then please just *keep* that kind of idiocy and workarounds in the drivers. It's actually *better* to have a broken garbage hardware driver - that you can easily just disable on its own - than having a broken garbage generic crypto layer that people just don't want to use at all because it's such a mess. This whole "make the mess that is the crypto layer EVEN MORE OF A MESS" model of development is completely broken in my opinion. There's a reason people prefer to have just the sw library without any of the indirection or complexity of the crypto layer. Linus _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply [flat|nested] 34+ messages in thread
* [PATCH] crypto: ahash - Stop legacy tfms from using the set_virt fallback path 2025-06-13 5:54 ` Eric Biggers 2025-06-13 7:38 ` Ard Biesheuvel @ 2025-06-13 8:51 ` Herbert Xu 2025-06-15 3:18 ` Eric Biggers 1 sibling, 1 reply; 34+ messages in thread From: Herbert Xu @ 2025-06-13 8:51 UTC (permalink / raw) To: Eric Biggers Cc: linux-crypto, linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, ardb, Jason, torvalds On Thu, Jun 12, 2025 at 10:54:39PM -0700, Eric Biggers wrote: > > Actually, crypto_ahash::base::fb is initialized if CRYPTO_ALG_NEED_FALLBACK, > which many of the drivers already set. Then crypto_ahash_update() calls > ahash_do_req_chain() if the algorithm does *not* have > CRYPTO_AHASH_ALG_BLOCK_ONLY set. Which then exports the driver's custom state > and tries to import it into the fallback. > > As far as I can tell, it's just broken for most of the existing drivers. This fallback path is only meant to be used for drivers that have been converted. But you're right there is a check missing in there. Thanks, ---8<--- Ensure that drivers that have not been converted to the ahash API do not use the ahash_request_set_virt fallback path as they cannot use the software fallback. Reported-by: Eric Biggers <ebiggers@kernel.org> Fixes: 9d7a0ab1c753 ("crypto: ahash - Handle partial blocks in API") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> diff --git a/crypto/ahash.c b/crypto/ahash.c index e10bc2659ae4..992228a9f283 100644 --- a/crypto/ahash.c +++ b/crypto/ahash.c @@ -347,6 +347,9 @@ static int ahash_do_req_chain(struct ahash_request *req, if (crypto_ahash_statesize(tfm) > HASH_MAX_STATESIZE) return -ENOSYS; + if (crypto_hash_no_export_core(tfm)) + return -ENOSYS; + { u8 state[HASH_MAX_STATESIZE]; diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h index 0f85c543f80b..f052afa6e7b0 100644 --- a/include/crypto/internal/hash.h +++ b/include/crypto/internal/hash.h @@ -91,6 +91,12 @@ static inline bool crypto_hash_alg_needs_key(struct hash_alg_common *alg) !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY); } +static inline bool crypto_hash_no_export_core(struct crypto_ahash *tfm) +{ + return crypto_hash_alg_common(tfm)->base.cra_flags & + CRYPTO_AHASH_ALG_NO_EXPORT_CORE; +} + int crypto_grab_ahash(struct crypto_ahash_spawn *spawn, struct crypto_instance *inst, const char *name, u32 type, u32 mask); -- Email: Herbert Xu <herbert@gondor.apana.org.au> Home Page: http://gondor.apana.org.au/~herbert/ PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply related [flat|nested] 34+ messages in thread
* Re: [PATCH] crypto: ahash - Stop legacy tfms from using the set_virt fallback path 2025-06-13 8:51 ` [PATCH] crypto: ahash - Stop legacy tfms from using the set_virt fallback path Herbert Xu @ 2025-06-15 3:18 ` Eric Biggers 2025-06-15 7:22 ` Ard Biesheuvel 2025-06-16 4:09 ` [PATCH] crypto: ahash - Fix infinite recursion in ahash_def_finup Herbert Xu 0 siblings, 2 replies; 34+ messages in thread From: Eric Biggers @ 2025-06-15 3:18 UTC (permalink / raw) To: Herbert Xu Cc: linux-crypto, linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, ardb, Jason, torvalds On Fri, Jun 13, 2025 at 04:51:38PM +0800, Herbert Xu wrote: > On Thu, Jun 12, 2025 at 10:54:39PM -0700, Eric Biggers wrote: > > > > Actually, crypto_ahash::base::fb is initialized if CRYPTO_ALG_NEED_FALLBACK, > > which many of the drivers already set. Then crypto_ahash_update() calls > > ahash_do_req_chain() if the algorithm does *not* have > > CRYPTO_AHASH_ALG_BLOCK_ONLY set. Which then exports the driver's custom state > > and tries to import it into the fallback. > > > > As far as I can tell, it's just broken for most of the existing drivers. > > This fallback path is only meant to be used for drivers that have > been converted. But you're right there is a check missing in there. > > Thanks, > > ---8<--- > Ensure that drivers that have not been converted to the ahash API > do not use the ahash_request_set_virt fallback path as they cannot > use the software fallback. > > Reported-by: Eric Biggers <ebiggers@kernel.org> > Fixes: 9d7a0ab1c753 ("crypto: ahash - Handle partial blocks in API") > Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Okay. Out of curiosity I decided to actually test the Qualcomm Crypto Engine driver on a development board that has a Qualcomm SoC, using latest mainline. Even with your patch applied, it overflows the stack when running the crypto self-tests, apparently due to crypto/ahash.c calling into itself recursively: [ 9.230887] Insufficient stack space to handle exception! [ 9.230889] ESR: 0x0000000096000047 -- DABT (current EL) [ 9.230891] FAR: 0xffff800084927fe0 [ 9.230891] Task stack: [0xffff800084928000..0xffff80008492c000] [ 9.230893] IRQ stack: [0xffff800080030000..0xffff800080034000] [ 9.230894] Overflow stack: [0xffff000a72dd2100..0xffff000a72dd3100] [ 9.230896] CPU: 6 UID: 0 PID: 747 Comm: cryptomgr_test Tainted: G S 6.16.0-rc1-00237-g84ffcd88616f #7 PREEMPT [ 9.230900] Tainted: [S]=CPU_OUT_OF_SPEC [ 9.230901] Hardware name: Qualcomm Technologies, Inc. 
SM8650 HDK (DT) [ 9.230901] pstate: 01400005 (nzcv daif +PAN -UAO -TCO +DIT -SSBS BTYPE=--) [ 9.230903] pc : qce_ahash_update+0x4/0x1f4 [ 9.230910] lr : ahash_do_req_chain+0xb4/0x19c [ 9.230915] sp : ffff800084928030 [ 9.230915] x29: ffff8000849281a0 x28: 0000000000000003 x27: 0000000000000001 [ 9.230918] x26: ffff0008022d8060 x25: ffff000800a33500 x24: ffff80008492b8d8 [ 9.230920] x23: ffff80008492b918 x22: 0000000000000400 x21: ffff000800a33510 [ 9.230922] x20: ffff000800b62030 x19: ffff00080122d400 x18: 00000000ffffffff [ 9.230923] x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000 [ 9.230925] x14: 0000000000000001 x13: 0000000000000000 x12: 0000000000000000 [ 9.230927] x11: eee1c132902c61e2 x10: 0000000000000063 x9 : 0000000000000000 [ 9.230928] x8 : 0000000000000062 x7 : a54ff53a3c6ef372 x6 : 0000000000000400 [ 9.230930] x5 : fefefefefefefefe x4 : ffff000800a33510 x3 : 0000000000000000 [ 9.230931] x2 : ffff000805d76900 x1 : ffffcea2349738cc x0 : ffff00080122d400 [ 9.230933] Kernel panic - not syncing: kernel stack overflow [ 9.230934] CPU: 6 UID: 0 PID: 747 Comm: cryptomgr_test Tainted: G S 6.16.0-rc1-00237-g84ffcd88616f #7 PREEMPT [ 9.230936] Tainted: [S]=CPU_OUT_OF_SPEC [ 9.230937] Hardware name: Qualcomm Technologies, Inc. SM8650 HDK (DT) [ 9.230938] Call trace: [ 9.230939] show_stack+0x18/0x24 (C) [ 9.230943] dump_stack_lvl+0x60/0x80 [ 9.230947] dump_stack+0x18/0x24 [ 9.230949] panic+0x168/0x360 [ 9.230952] add_taint+0x0/0xbc [ 9.230955] panic_bad_stack+0x108/0x120 [ 9.230958] handle_bad_stack+0x34/0x40 [ 9.230962] __bad_stack+0x80/0x84 [ 9.230963] qce_ahash_update+0x4/0x1f4 (P) [ 9.230965] crypto_ahash_update+0x17c/0x18c [ 9.230967] crypto_ahash_finup+0x184/0x1e4 [ 9.230969] crypto_ahash_finup+0x1ac/0x1e4 [ 9.230970] crypto_ahash_finup+0x1ac/0x1e4 [ 9.230972] crypto_ahash_finup+0x1ac/0x1e4 [ 9.230973] crypto_ahash_finup+0x1ac/0x1e4 [ 9.230974] crypto_ahash_finup+0x1ac/0x1e4 [ 9.230976] crypto_ahash_finup+0x1ac/0x1e4 [ 9.230977] crypto_ahash_finup+0x1ac/0x1e4 [ 9.230979] crypto_ahash_finup+0x1ac/0x1e4 [ 9.230980] crypto_ahash_finup+0x1ac/0x1e4 [ 9.230981] crypto_ahash_finup+0x1ac/0x1e4 [ 9.230983] crypto_ahash_finup+0x1ac/0x1e4 [ 9.230984] crypto_ahash_finup+0x1ac/0x1e4 [ 9.230986] crypto_ahash_finup+0x1ac/0x1e4 [ 9.230988] crypto_ahash_finup+0x1ac/0x1e4 [ 9.230989] crypto_ahash_finup+0x1ac/0x1e4 [ 9.230991] crypto_ahash_finup+0x1ac/0x1e4 [ 9.230993] crypto_ahash_finup+0x1ac/0x1e4 [ 9.230995] crypto_ahash_finup+0x1ac/0x1e4 [ 9.230996] crypto_ahash_finup+0x1ac/0x1e4 [ 9.230998] crypto_ahash_finup+0x1ac/0x1e4 [ 9.230999] crypto_ahash_finup+0x1ac/0x1e4 [ 9.231001] crypto_ahash_finup+0x1ac/0x1e4 [ 9.231002] crypto_ahash_finup+0x1ac/0x1e4 [ 9.231004] crypto_ahash_finup+0x1ac/0x1e4 [ 9.231005] crypto_ahash_finup+0x1ac/0x1e4 [the above line repeated a few hundred times more...] 
[ 9.231571] test_ahash_vec_cfg+0x508/0x8f8 [ 9.231573] test_hash_vec+0xb8/0x21c [ 9.231575] __alg_test_hash+0x144/0x2e0 [ 9.231577] alg_test_hash+0xc0/0x178 [ 9.231578] alg_test+0x148/0x5ec [ 9.231579] cryptomgr_test+0x24/0x40 [ 9.231581] kthread+0x12c/0x204 [ 9.231583] ret_from_fork+0x10/0x20 [ 9.231587] SMP: stopping secondary CPUs [ 9.240072] Kernel Offset: 0x4ea1b2a80000 from 0xffff800080000000 [ 9.240073] PHYS_OFFSET: 0xfff1000080000000 [ 9.240074] CPU features: 0x6000,000001c0,62130cb1,357e7667 [ 9.240075] Memory Limit: none [ 11.373410] ---[ end Kernel panic - not syncing: kernel stack overflow ]--- After disabling the crypto self-tests, I was then able to run a benchmark of SHA-256 hashing 4096-byte messages, which fortunately didn't encounter the recursion bug. I got the following results: ARMv8 crypto extensions: 1864 MB/s Generic C code: 358 MB/s Qualcomm Crypto Engine: 55 MB/s So just to clarify, you believe that asynchronous hash drivers like the Qualcomm Crypto Engine one are useful, and the changes that you're requiring to the CPU-based code are to support these drivers? - Eric _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply [flat|nested] 34+ messages in thread
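For reference, a throughput number like the ones above can be produced with a trivial loop over the SHA-256 library's one-shot helper; the figures quoted here come from the series' KUnit benchmarks, so the stand-alone module below is only a sketch of what such a measurement looks like.

#include <crypto/sha2.h>
#include <linux/kernel.h>
#include <linux/ktime.h>
#include <linux/math64.h>
#include <linux/module.h>
#include <linux/slab.h>

#define MSG_LEN		4096
#define ITERATIONS	10000

static int __init sha256_bench_init(void)
{
	u8 digest[SHA256_DIGEST_SIZE];
	ktime_t start, end;
	u64 elapsed_ns;
	u8 *msg;
	int i;

	msg = kzalloc(MSG_LEN, GFP_KERNEL);
	if (!msg)
		return -ENOMEM;

	start = ktime_get();
	for (i = 0; i < ITERATIONS; i++)
		sha256(msg, MSG_LEN, digest);	/* one-shot library call */
	end = ktime_get();

	elapsed_ns = ktime_to_ns(ktime_sub(end, start));
	if (elapsed_ns)
		pr_info("sha256 lib: ~%llu MB/s\n",
			div64_u64((u64)MSG_LEN * ITERATIONS * 1000,
				  elapsed_ns));

	kfree(msg);
	return 0;
}

static void __exit sha256_bench_exit(void)
{
}

module_init(sha256_bench_init);
module_exit(sha256_bench_exit);
MODULE_DESCRIPTION("Illustrative SHA-256 library throughput sketch");
MODULE_LICENSE("GPL");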
* Re: [PATCH] crypto: ahash - Stop legacy tfms from using the set_virt fallback path 2025-06-15 3:18 ` Eric Biggers @ 2025-06-15 7:22 ` Ard Biesheuvel 2025-06-15 18:46 ` Eric Biggers 2025-06-16 4:09 ` [PATCH] crypto: ahash - Fix infinite recursion in ahash_def_finup Herbert Xu 1 sibling, 1 reply; 34+ messages in thread From: Ard Biesheuvel @ 2025-06-15 7:22 UTC (permalink / raw) To: Eric Biggers Cc: Herbert Xu, linux-crypto, linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, Jason, torvalds On Sun, 15 Jun 2025 at 05:18, Eric Biggers <ebiggers@kernel.org> wrote: > ... > After disabling the crypto self-tests, I was then able to run a benchmark of > SHA-256 hashing 4096-byte messages, which fortunately didn't encounter the > recursion bug. I got the following results: > > ARMv8 crypto extensions: 1864 MB/s > Generic C code: 358 MB/s > Qualcomm Crypto Engine: 55 MB/s > > So just to clarify, you believe that asynchronous hash drivers like the Qualcomm > Crypto Engine one are useful, and the changes that you're requiring to the > CPU-based code are to support these drivers? > And this offload engine only has one internal queue, right? Whereas the CPU results may be multiplied by the number of cores on the soc. It would still be interesting how much of this is due to latency rather than limited throughput but it seems highly unlikely that there are any message sizes large enough where QCE would catch up with the CPUs. (AIUI, the only use case we have in the kernel today for message sizes that are substantially larger than this is kTLS, but I'm not sure how well it works with crypto_aead compared to offload at a more suitable level in the networking stack, and this driver does not implement GCM in the first place) On ARM socs, these offload engines usually exist primarily for the benefit of the verified boot implementation in mask ROM, which obviously needs to be minimal but doesn't have to be very fast in order to get past the first boot stages and hand over to software. Then, since the IP block is there, it's listed as a feature in the data sheet, even though it is not very useful when running under the OS. _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH] crypto: ahash - Stop legacy tfms from using the set_virt fallback path 2025-06-15 7:22 ` Ard Biesheuvel @ 2025-06-15 18:46 ` Eric Biggers 2025-06-15 19:37 ` Linus Torvalds 0 siblings, 1 reply; 34+ messages in thread From: Eric Biggers @ 2025-06-15 18:46 UTC (permalink / raw) To: Ard Biesheuvel Cc: Herbert Xu, linux-crypto, linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, Jason, torvalds On Sun, Jun 15, 2025 at 09:22:51AM +0200, Ard Biesheuvel wrote: > On Sun, 15 Jun 2025 at 05:18, Eric Biggers <ebiggers@kernel.org> wrote: > > > ... > > After disabling the crypto self-tests, I was then able to run a benchmark of > > SHA-256 hashing 4096-byte messages, which fortunately didn't encounter the > > recursion bug. I got the following results: > > > > ARMv8 crypto extensions: 1864 MB/s > > Generic C code: 358 MB/s > > Qualcomm Crypto Engine: 55 MB/s > > > > So just to clarify, you believe that asynchronous hash drivers like the Qualcomm > > Crypto Engine one are useful, and the changes that you're requiring to the > > CPU-based code are to support these drivers? > > > > And this offload engine only has one internal queue, right? Whereas > the CPU results may be multiplied by the number of cores on the soc. > It would still be interesting how much of this is due to latency > rather than limited throughput but it seems highly unlikely that there > are any message sizes large enough where QCE would catch up with the > CPUs. (AIUI, the only use case we have in the kernel today for message > sizes that are substantially larger than this is kTLS, but I'm not > sure how well it works with crypto_aead compared to offload at a more > suitable level in the networking stack, and this driver does not > implement GCM in the first place) > > On ARM socs, these offload engines usually exist primarily for the > benefit of the verified boot implementation in mask ROM, which > obviously needs to be minimal but doesn't have to be very fast in > order to get past the first boot stages and hand over to software. > Then, since the IP block is there, it's listed as a feature in the > data sheet, even though it is not very useful when running under the > OS. With 1 MiB messages, I get 1913 MB/s with ARMv8 CE and 142 MB/s with QCE. (BTW, that's single-buffer ARMv8 CE. My two-buffer code is over 3000 MB/s.) I then changed my benchmark code to take full advantage of the async API and submit as many requests as the hardware can handle. (This would be a best-case scenario for QCE; in many real use cases this is not possible.) Result with QCE was 58 MB/s with 4 KiB messages or 155 MB/s for 1 MiB messages. So yes, QCE seems to have only one queue, and even that one queue is *much* slower than just using the CPU. It's even slower than the generic C code. And until I fixed it recently, the Crypto API defaulted to using QCE instead of ARMv8 CE. But this seems to be a common pattern among the offload engines. I noticed a similar issue with Intel QAT, which I elaborate on in this patch: https://lore.kernel.org/r/20250615045145.224567-1-ebiggers@kernel.org - Eric _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH] crypto: ahash - Stop legacy tfms from using the set_virt fallback path 2025-06-15 18:46 ` Eric Biggers @ 2025-06-15 19:37 ` Linus Torvalds 0 siblings, 0 replies; 34+ messages in thread From: Linus Torvalds @ 2025-06-15 19:37 UTC (permalink / raw) To: Eric Biggers Cc: Ard Biesheuvel, Herbert Xu, linux-crypto, linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, Jason On Sun, 15 Jun 2025 at 11:47, Eric Biggers <ebiggers@kernel.org> wrote: > > So yes, QCE seems to have only one queue, and even that one queue is *much* > slower than just using the CPU. It's even slower than the generic C code. Honestly, I have *NEVER* seen an external crypto accelerator that is worth using unless it's integrated with the target IO. Now, it's not my area of expertise either, so there may well be some random case that I haven't heard about, but the only sensible use-case I'm aware of is when the network card just does all the offloading and just does the whole SSL thing (or IPsec or whatever, but if you care about performance you'd be better off using wireguard and doing it all on the CPU anyway) And even then, people tend to not be happy with the results, because the hardware is too inflexible or too rare. (Replace "network card" with "disk controller" if that's your thing - the basic idea is the same: it's worthwhile if it's done natively by the IO target, not done by some third party accelerator - and while I'm convinced encryption on the disk controller makes sense, I'm not sure I'd actually *trust* it from a real cryptographic standpoint if you really care about it, because some of those are most definitely black boxes with the trust model seemingly being based on the "Trust me, Bro" approach to security). The other case is the "key is physically separate and isn't even under kernel control at all", but then it's never about performance in the first place (ie security keys etc). Even if the hardware crypto engine is fast - and as you see, no they aren't - any possible performance is absolutely killed by lack of caches and the IO overhead. This seems to also be pretty much true of async SMP crypto on the CPU as well. You can get better benchmarks by offloading the crypto to other CPU's, but I'm not convinced it's actually a good trade-off in reality. The cost of scheduling and just all the overhead of synchronization is very very real, and the benchmarks where it looks good tend to be the "we do nothing else, and we don't actually touch the data anyway, it's just purely about pointless benchmarking". Just the set-up costs for doing things asynchronously can be higher than the cost of just doing the operation itself. Linus _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply [flat|nested] 34+ messages in thread
* [PATCH] crypto: ahash - Fix infinite recursion in ahash_def_finup 2025-06-15 3:18 ` Eric Biggers 2025-06-15 7:22 ` Ard Biesheuvel @ 2025-06-16 4:09 ` Herbert Xu 1 sibling, 0 replies; 34+ messages in thread From: Herbert Xu @ 2025-06-16 4:09 UTC (permalink / raw) To: Eric Biggers Cc: linux-crypto, linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, ardb, Jason, torvalds On Sat, Jun 14, 2025 at 08:18:07PM -0700, Eric Biggers wrote: > > Even with your patch applied, it overflows the stack when running the crypto > self-tests, apparently due to crypto/ahash.c calling into itself recursively: Thanks for the report. This driver doesn't provide a finup function which triggered a bug in the default finup implementation: ---8<--- Invoke the final function directly in the default finup implementation since crypto_ahash_final is now just a wrapper around finup. Reported-by: Eric Biggers <ebiggers@kernel.org> Fixes: 9d7a0ab1c753 ("crypto: ahash - Handle partial blocks in API") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> diff --git a/crypto/ahash.c b/crypto/ahash.c index bd9e49950201..3878b4da3cfd 100644 --- a/crypto/ahash.c +++ b/crypto/ahash.c @@ -603,12 +603,14 @@ static void ahash_def_finup_done2(void *data, int err) static int ahash_def_finup_finish1(struct ahash_request *req, int err) { + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + if (err) goto out; req->base.complete = ahash_def_finup_done2; - err = crypto_ahash_final(req); + err = crypto_ahash_alg(tfm)->final(req); if (err == -EINPROGRESS || err == -EBUSY) return err; -- Email: Herbert Xu <herbert@gondor.apana.org.au> Home Page: http://gondor.apana.org.au/~herbert/ PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply related [flat|nested] 34+ messages in thread
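The shape of that recursion can be shown with a tiny stand-alone program using hypothetical names (this is not the kernel code): the default finup helper finishes by going through a "final" that is now only a wrapper around finup, so a driver that provides no finup of its own, like qce, bounces between the two until the stack is exhausted; the fix above corresponds to calling the algorithm's own final routine directly.

#include <stdio.h>

struct fake_alg {
	int (*finup)(int depth);	/* NULL: driver has no finup, like qce */
};

static struct fake_alg alg;

static int default_finup(int depth);

/* finup entry point: use the driver's finup if it has one, else the default */
static int generic_finup(int depth)
{
	return alg.finup ? alg.finup(depth) : default_finup(depth);
}

/* "final" is now just a wrapper around finup */
static int generic_final(int depth)
{
	return generic_finup(depth);
}

/* buggy default finup: finishes by going through the "final" wrapper */
static int default_finup(int depth)
{
	if (depth > 5) {	/* stand-in for running out of kernel stack */
		printf("recursed %d frames deep, would overflow the stack\n",
		       depth);
		return -1;
	}
	return generic_final(depth + 1);
}

int main(void)
{
	alg.finup = NULL;
	return generic_finup(0) ? 1 : 0;
}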
* [PATCH 08/16] lib/crypto/sha512: migrate arm-optimized SHA-512 code to library 2025-06-11 2:09 [PATCH 00/16] SHA-512 library functions Eric Biggers ` (6 preceding siblings ...) 2025-06-11 2:09 ` [PATCH 07/16] crypto: sha512 - replace sha512_generic with wrapper around SHA-512 library Eric Biggers @ 2025-06-11 2:09 ` Eric Biggers 2025-06-11 2:09 ` [PATCH 09/16] lib/crypto/sha512: migrate arm64-optimized " Eric Biggers ` (7 subsequent siblings) 15 siblings, 0 replies; 34+ messages in thread From: Eric Biggers @ 2025-06-11 2:09 UTC (permalink / raw) To: linux-crypto Cc: linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, Ard Biesheuvel, Jason A . Donenfeld , Linus Torvalds From: Eric Biggers <ebiggers@google.com> Instead of exposing the arm-optimized SHA-512 code via arm-specific crypto_shash algorithms, instead just implement the sha512_blocks() library function. This is much simpler, it makes the SHA-512 (and SHA-384) library functions be arm-optimized, and it fixes the longstanding issue where the arm-optimized SHA-512 code was disabled by default. SHA-512 still remains available through crypto_shash, but individual architectures no longer need to handle it. To match sha512_blocks(), change the type of the nblocks parameter of the assembly functions from int to size_t. The assembly functions actually already treated it as size_t. Signed-off-by: Eric Biggers <ebiggers@google.com> --- arch/arm/configs/exynos_defconfig | 1 - arch/arm/configs/milbeaut_m10v_defconfig | 1 - arch/arm/configs/multi_v7_defconfig | 1 - arch/arm/configs/omap2plus_defconfig | 1 - arch/arm/configs/pxa_defconfig | 1 - arch/arm/crypto/Kconfig | 10 -- arch/arm/crypto/Makefile | 15 --- arch/arm/crypto/sha512-glue.c | 110 ------------------ arch/arm/crypto/sha512-neon-glue.c | 75 ------------ arch/arm/crypto/sha512.h | 3 - lib/crypto/Kconfig | 1 + lib/crypto/Makefile | 14 +++ lib/crypto/arm/.gitignore | 2 + .../crypto => lib/crypto/arm}/sha512-armv4.pl | 0 lib/crypto/arm/sha512.h | 38 ++++++ 15 files changed, 55 insertions(+), 218 deletions(-) delete mode 100644 arch/arm/crypto/sha512-glue.c delete mode 100644 arch/arm/crypto/sha512-neon-glue.c delete mode 100644 arch/arm/crypto/sha512.h create mode 100644 lib/crypto/arm/.gitignore rename {arch/arm/crypto => lib/crypto/arm}/sha512-armv4.pl (100%) create mode 100644 lib/crypto/arm/sha512.h diff --git a/arch/arm/configs/exynos_defconfig b/arch/arm/configs/exynos_defconfig index f71af368674cf..d58e300693045 100644 --- a/arch/arm/configs/exynos_defconfig +++ b/arch/arm/configs/exynos_defconfig @@ -362,11 +362,10 @@ CONFIG_CRYPTO_LZ4=m CONFIG_CRYPTO_USER_API_HASH=m CONFIG_CRYPTO_USER_API_SKCIPHER=m CONFIG_CRYPTO_USER_API_RNG=m CONFIG_CRYPTO_USER_API_AEAD=m CONFIG_CRYPTO_SHA1_ARM_NEON=m -CONFIG_CRYPTO_SHA512_ARM=m CONFIG_CRYPTO_AES_ARM_BS=m CONFIG_CRYPTO_CHACHA20_NEON=m CONFIG_CRYPTO_DEV_EXYNOS_RNG=y CONFIG_CRYPTO_DEV_S5P=y CONFIG_DMA_CMA=y diff --git a/arch/arm/configs/milbeaut_m10v_defconfig b/arch/arm/configs/milbeaut_m10v_defconfig index 242e7d5a3f682..8ebf8bd872fe8 100644 --- a/arch/arm/configs/milbeaut_m10v_defconfig +++ b/arch/arm/configs/milbeaut_m10v_defconfig @@ -98,11 +98,10 @@ CONFIG_CRYPTO_SELFTESTS=y CONFIG_CRYPTO_AES=y CONFIG_CRYPTO_SEQIV=m CONFIG_CRYPTO_GHASH_ARM_CE=m CONFIG_CRYPTO_SHA1_ARM_NEON=m CONFIG_CRYPTO_SHA1_ARM_CE=m -CONFIG_CRYPTO_SHA512_ARM=m CONFIG_CRYPTO_AES_ARM=m CONFIG_CRYPTO_AES_ARM_BS=m CONFIG_CRYPTO_AES_ARM_CE=m CONFIG_CRYPTO_CHACHA20_NEON=m # CONFIG_CRYPTO_HW is not set diff --git 
a/arch/arm/configs/multi_v7_defconfig b/arch/arm/configs/multi_v7_defconfig index 50c170b4619f7..3fd07e864ca85 100644 --- a/arch/arm/configs/multi_v7_defconfig +++ b/arch/arm/configs/multi_v7_defconfig @@ -1280,11 +1280,10 @@ CONFIG_CRYPTO_USER_API_SKCIPHER=m CONFIG_CRYPTO_USER_API_RNG=m CONFIG_CRYPTO_USER_API_AEAD=m CONFIG_CRYPTO_GHASH_ARM_CE=m CONFIG_CRYPTO_SHA1_ARM_NEON=m CONFIG_CRYPTO_SHA1_ARM_CE=m -CONFIG_CRYPTO_SHA512_ARM=m CONFIG_CRYPTO_AES_ARM=m CONFIG_CRYPTO_AES_ARM_BS=m CONFIG_CRYPTO_AES_ARM_CE=m CONFIG_CRYPTO_CHACHA20_NEON=m CONFIG_CRYPTO_DEV_SUN4I_SS=m diff --git a/arch/arm/configs/omap2plus_defconfig b/arch/arm/configs/omap2plus_defconfig index 9f9780c8e62aa..530dfb8338c98 100644 --- a/arch/arm/configs/omap2plus_defconfig +++ b/arch/arm/configs/omap2plus_defconfig @@ -703,11 +703,10 @@ CONFIG_NLS_CODEPAGE_437=y CONFIG_NLS_ISO8859_1=y CONFIG_SECURITY=y CONFIG_CRYPTO_MICHAEL_MIC=y CONFIG_CRYPTO_GHASH_ARM_CE=m CONFIG_CRYPTO_SHA1_ARM_NEON=m -CONFIG_CRYPTO_SHA512_ARM=m CONFIG_CRYPTO_AES_ARM=m CONFIG_CRYPTO_AES_ARM_BS=m CONFIG_CRYPTO_CHACHA20_NEON=m CONFIG_CRYPTO_DEV_OMAP=m CONFIG_CRYPTO_DEV_OMAP_SHAM=m diff --git a/arch/arm/configs/pxa_defconfig b/arch/arm/configs/pxa_defconfig index ff29c5b0e9c93..eaa44574d4a64 100644 --- a/arch/arm/configs/pxa_defconfig +++ b/arch/arm/configs/pxa_defconfig @@ -657,11 +657,10 @@ CONFIG_CRYPTO_WP512=m CONFIG_CRYPTO_ANUBIS=m CONFIG_CRYPTO_XCBC=m CONFIG_CRYPTO_DEFLATE=y CONFIG_CRYPTO_LZO=y CONFIG_CRYPTO_SHA1_ARM=m -CONFIG_CRYPTO_SHA512_ARM=m CONFIG_CRYPTO_AES_ARM=m CONFIG_FONTS=y CONFIG_FONT_8x8=y CONFIG_FONT_8x16=y CONFIG_FONT_6x11=y diff --git a/arch/arm/crypto/Kconfig b/arch/arm/crypto/Kconfig index 7efb9a8596e4e..a18f97f1597cb 100644 --- a/arch/arm/crypto/Kconfig +++ b/arch/arm/crypto/Kconfig @@ -91,20 +91,10 @@ config CRYPTO_SHA1_ARM_CE help SHA-1 secure hash algorithm (FIPS 180) Architecture: arm using ARMv8 Crypto Extensions -config CRYPTO_SHA512_ARM - tristate "Hash functions: SHA-384 and SHA-512 (NEON)" - select CRYPTO_HASH - depends on !CPU_V7M - help - SHA-384 and SHA-512 secure hash algorithms (FIPS 180) - - Architecture: arm using - - NEON (Advanced SIMD) extensions - config CRYPTO_AES_ARM tristate "Ciphers: AES" select CRYPTO_ALGAPI select CRYPTO_AES help diff --git a/arch/arm/crypto/Makefile b/arch/arm/crypto/Makefile index 8479137c6e800..78a4042d8761c 100644 --- a/arch/arm/crypto/Makefile +++ b/arch/arm/crypto/Makefile @@ -5,11 +5,10 @@ obj-$(CONFIG_CRYPTO_AES_ARM) += aes-arm.o obj-$(CONFIG_CRYPTO_AES_ARM_BS) += aes-arm-bs.o obj-$(CONFIG_CRYPTO_SHA1_ARM) += sha1-arm.o obj-$(CONFIG_CRYPTO_SHA1_ARM_NEON) += sha1-arm-neon.o -obj-$(CONFIG_CRYPTO_SHA512_ARM) += sha512-arm.o obj-$(CONFIG_CRYPTO_BLAKE2B_NEON) += blake2b-neon.o obj-$(CONFIG_CRYPTO_NHPOLY1305_NEON) += nhpoly1305-neon.o obj-$(CONFIG_CRYPTO_CURVE25519_NEON) += curve25519-neon.o obj-$(CONFIG_CRYPTO_AES_ARM_CE) += aes-arm-ce.o @@ -18,25 +17,11 @@ obj-$(CONFIG_CRYPTO_GHASH_ARM_CE) += ghash-arm-ce.o aes-arm-y := aes-cipher-core.o aes-cipher-glue.o aes-arm-bs-y := aes-neonbs-core.o aes-neonbs-glue.o sha1-arm-y := sha1-armv4-large.o sha1_glue.o sha1-arm-neon-y := sha1-armv7-neon.o sha1_neon_glue.o -sha512-arm-neon-$(CONFIG_KERNEL_MODE_NEON) := sha512-neon-glue.o -sha512-arm-y := sha512-core.o sha512-glue.o $(sha512-arm-neon-y) blake2b-neon-y := blake2b-neon-core.o blake2b-neon-glue.o sha1-arm-ce-y := sha1-ce-core.o sha1-ce-glue.o aes-arm-ce-y := aes-ce-core.o aes-ce-glue.o ghash-arm-ce-y := ghash-ce-core.o ghash-ce-glue.o nhpoly1305-neon-y := nh-neon-core.o nhpoly1305-neon-glue.o 
curve25519-neon-y := curve25519-core.o curve25519-glue.o - -quiet_cmd_perl = PERL $@ - cmd_perl = $(PERL) $(<) > $(@) - -$(obj)/%-core.S: $(src)/%-armv4.pl - $(call cmd,perl) - -clean-files += sha512-core.S - -aflags-thumb2-$(CONFIG_THUMB2_KERNEL) := -U__thumb2__ -D__thumb2__=1 - -AFLAGS_sha512-core.o += $(aflags-thumb2-y) diff --git a/arch/arm/crypto/sha512-glue.c b/arch/arm/crypto/sha512-glue.c deleted file mode 100644 index f8a6480889b1b..0000000000000 --- a/arch/arm/crypto/sha512-glue.c +++ /dev/null @@ -1,110 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-only -/* - * sha512-glue.c - accelerated SHA-384/512 for ARM - * - * Copyright (C) 2015 Linaro Ltd <ard.biesheuvel@linaro.org> - */ - -#include <asm/hwcap.h> -#include <asm/neon.h> -#include <crypto/internal/hash.h> -#include <crypto/sha2.h> -#include <crypto/sha512_base.h> -#include <linux/kernel.h> -#include <linux/module.h> - -#include "sha512.h" - -MODULE_DESCRIPTION("Accelerated SHA-384/SHA-512 secure hash for ARM"); -MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>"); -MODULE_LICENSE("GPL v2"); - -MODULE_ALIAS_CRYPTO("sha384"); -MODULE_ALIAS_CRYPTO("sha512"); -MODULE_ALIAS_CRYPTO("sha384-arm"); -MODULE_ALIAS_CRYPTO("sha512-arm"); - -asmlinkage void sha512_block_data_order(struct sha512_state *state, - u8 const *src, int blocks); - -static int sha512_arm_update(struct shash_desc *desc, const u8 *data, - unsigned int len) -{ - return sha512_base_do_update_blocks(desc, data, len, - sha512_block_data_order); -} - -static int sha512_arm_finup(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) -{ - sha512_base_do_finup(desc, data, len, sha512_block_data_order); - return sha512_base_finish(desc, out); -} - -static struct shash_alg sha512_arm_algs[] = { { - .init = sha384_base_init, - .update = sha512_arm_update, - .finup = sha512_arm_finup, - .descsize = SHA512_STATE_SIZE, - .digestsize = SHA384_DIGEST_SIZE, - .base = { - .cra_name = "sha384", - .cra_driver_name = "sha384-arm", - .cra_priority = 250, - .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize = SHA512_BLOCK_SIZE, - .cra_module = THIS_MODULE, - } -}, { - .init = sha512_base_init, - .update = sha512_arm_update, - .finup = sha512_arm_finup, - .descsize = SHA512_STATE_SIZE, - .digestsize = SHA512_DIGEST_SIZE, - .base = { - .cra_name = "sha512", - .cra_driver_name = "sha512-arm", - .cra_priority = 250, - .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize = SHA512_BLOCK_SIZE, - .cra_module = THIS_MODULE, - } -} }; - -static int __init sha512_arm_mod_init(void) -{ - int err; - - err = crypto_register_shashes(sha512_arm_algs, - ARRAY_SIZE(sha512_arm_algs)); - if (err) - return err; - - if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && cpu_has_neon()) { - err = crypto_register_shashes(sha512_neon_algs, - ARRAY_SIZE(sha512_neon_algs)); - if (err) - goto err_unregister; - } - return 0; - -err_unregister: - crypto_unregister_shashes(sha512_arm_algs, - ARRAY_SIZE(sha512_arm_algs)); - - return err; -} - -static void __exit sha512_arm_mod_fini(void) -{ - crypto_unregister_shashes(sha512_arm_algs, - ARRAY_SIZE(sha512_arm_algs)); - if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && cpu_has_neon()) - crypto_unregister_shashes(sha512_neon_algs, - ARRAY_SIZE(sha512_neon_algs)); -} - -module_init(sha512_arm_mod_init); -module_exit(sha512_arm_mod_fini); diff --git a/arch/arm/crypto/sha512-neon-glue.c b/arch/arm/crypto/sha512-neon-glue.c deleted file mode 100644 index bd528077fefbf..0000000000000 --- 
a/arch/arm/crypto/sha512-neon-glue.c +++ /dev/null @@ -1,75 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-only -/* - * sha512-neon-glue.c - accelerated SHA-384/512 for ARM NEON - * - * Copyright (C) 2015 Linaro Ltd <ard.biesheuvel@linaro.org> - */ - -#include <asm/neon.h> -#include <crypto/internal/hash.h> -#include <crypto/sha2.h> -#include <crypto/sha512_base.h> -#include <linux/kernel.h> -#include <linux/module.h> - -#include "sha512.h" - -MODULE_ALIAS_CRYPTO("sha384-neon"); -MODULE_ALIAS_CRYPTO("sha512-neon"); - -asmlinkage void sha512_block_data_order_neon(struct sha512_state *state, - const u8 *src, int blocks); - -static int sha512_neon_update(struct shash_desc *desc, const u8 *data, - unsigned int len) -{ - int remain; - - kernel_neon_begin(); - remain = sha512_base_do_update_blocks(desc, data, len, - sha512_block_data_order_neon); - kernel_neon_end(); - return remain; -} - -static int sha512_neon_finup(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) -{ - kernel_neon_begin(); - sha512_base_do_finup(desc, data, len, sha512_block_data_order_neon); - kernel_neon_end(); - return sha512_base_finish(desc, out); -} - -struct shash_alg sha512_neon_algs[] = { { - .init = sha384_base_init, - .update = sha512_neon_update, - .finup = sha512_neon_finup, - .descsize = SHA512_STATE_SIZE, - .digestsize = SHA384_DIGEST_SIZE, - .base = { - .cra_name = "sha384", - .cra_driver_name = "sha384-neon", - .cra_priority = 300, - .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize = SHA384_BLOCK_SIZE, - .cra_module = THIS_MODULE, - - } -}, { - .init = sha512_base_init, - .update = sha512_neon_update, - .finup = sha512_neon_finup, - .descsize = SHA512_STATE_SIZE, - .digestsize = SHA512_DIGEST_SIZE, - .base = { - .cra_name = "sha512", - .cra_driver_name = "sha512-neon", - .cra_priority = 300, - .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize = SHA512_BLOCK_SIZE, - .cra_module = THIS_MODULE, - } -} }; diff --git a/arch/arm/crypto/sha512.h b/arch/arm/crypto/sha512.h deleted file mode 100644 index eeaee52cda69b..0000000000000 --- a/arch/arm/crypto/sha512.h +++ /dev/null @@ -1,3 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ - -extern struct shash_alg sha512_neon_algs[2]; diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig index 34b249ca3db23..83054a496f9cd 100644 --- a/lib/crypto/Kconfig +++ b/lib/crypto/Kconfig @@ -177,10 +177,11 @@ config CRYPTO_LIB_SHA512 <crypto/sha2.h>. 
config CRYPTO_LIB_SHA512_ARCH bool depends on CRYPTO_LIB_SHA512 + default y if ARM && !CPU_V7M config CRYPTO_LIB_SM3 tristate if !KMSAN # avoid false positives from assembly diff --git a/lib/crypto/Makefile b/lib/crypto/Makefile index 7df76ab5fe692..41513bc29a5e6 100644 --- a/lib/crypto/Makefile +++ b/lib/crypto/Makefile @@ -1,9 +1,14 @@ # SPDX-License-Identifier: GPL-2.0 obj-y += tests/ +aflags-thumb2-$(CONFIG_THUMB2_KERNEL) := -U__thumb2__ -D__thumb2__=1 + +quiet_cmd_perlasm = PERLASM $@ + cmd_perlasm = $(PERL) $(<) > $(@) + obj-$(CONFIG_CRYPTO_LIB_UTILS) += libcryptoutils.o libcryptoutils-y := memneq.o utils.o # chacha is used by the /dev/random driver which is always builtin obj-y += chacha.o @@ -64,10 +69,19 @@ libsha256-generic-y := sha256-generic.o obj-$(CONFIG_CRYPTO_LIB_SHA512) += libsha512.o libsha512-y := sha512.o ifeq ($(CONFIG_CRYPTO_LIB_SHA512_ARCH),y) CFLAGS_sha512.o += -I$(src)/$(SRCARCH) + +ifeq ($(CONFIG_ARM),y) +libsha512-y += arm/sha512-core.o +$(obj)/arm/sha512-core.S: $(src)/arm/sha512-armv4.pl + $(call cmd,perlasm) +clean-files += arm/sha512-core.S +AFLAGS_arm/sha512-core.o += $(aflags-thumb2-y) +endif + endif # CONFIG_CRYPTO_LIB_SHA512_ARCH obj-$(CONFIG_MPILIB) += mpi/ obj-$(CONFIG_CRYPTO_SELFTESTS) += simd.o diff --git a/lib/crypto/arm/.gitignore b/lib/crypto/arm/.gitignore new file mode 100644 index 0000000000000..670a4d97b5684 --- /dev/null +++ b/lib/crypto/arm/.gitignore @@ -0,0 +1,2 @@ +# SPDX-License-Identifier: GPL-2.0-only +sha512-core.S diff --git a/arch/arm/crypto/sha512-armv4.pl b/lib/crypto/arm/sha512-armv4.pl similarity index 100% rename from arch/arm/crypto/sha512-armv4.pl rename to lib/crypto/arm/sha512-armv4.pl diff --git a/lib/crypto/arm/sha512.h b/lib/crypto/arm/sha512.h new file mode 100644 index 0000000000000..f147b6490d6cd --- /dev/null +++ b/lib/crypto/arm/sha512.h @@ -0,0 +1,38 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * arm32-optimized SHA-512 block function + * + * Copyright 2025 Google LLC + */ + +#include <asm/neon.h> +#include <crypto/internal/simd.h> + +static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon); + +asmlinkage void sha512_block_data_order(struct sha512_block_state *state, + const u8 *data, size_t nblocks); +asmlinkage void sha512_block_data_order_neon(struct sha512_block_state *state, + const u8 *data, size_t nblocks); + +static void sha512_blocks(struct sha512_block_state *state, + const u8 *data, size_t nblocks) +{ + if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && + static_branch_likely(&have_neon) && likely(crypto_simd_usable())) { + kernel_neon_begin(); + sha512_block_data_order_neon(state, data, nblocks); + kernel_neon_end(); + } else { + sha512_block_data_order(state, data, nblocks); + } +} + +#ifdef CONFIG_KERNEL_MODE_NEON +#define sha512_mod_init_arch sha512_mod_init_arch +static inline void sha512_mod_init_arch(void) +{ + if (cpu_has_neon()) + static_branch_enable(&have_neon); +} +#endif /* CONFIG_KERNEL_MODE_NEON */ -- 2.49.0 _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply related [flat|nested] 34+ messages in thread
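For callers, the end result of this migration is simply a faster library. As a rough sketch of what kernel-side usage looks like, assuming the function and context names follow the conventions set out in the cover letter (they are not shown in this patch, so treat them as assumed):

#include <crypto/sha2.h>
#include <linux/string.h>

static void sha512_library_usage(const u8 *msg, size_t msg_len,
				 const u8 *more, size_t more_len)
{
	u8 digest[SHA512_DIGEST_SIZE];
	struct sha512_ctx ctx;

	/* One-shot helper. */
	sha512(msg, msg_len, digest);

	/* Incremental hashing with a caller-owned context. */
	sha512_init(&ctx);
	sha512_update(&ctx, msg, msg_len);
	sha512_update(&ctx, more, more_len);
	sha512_final(&ctx, digest);

	memzero_explicit(digest, sizeof(digest));
}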
* [PATCH 09/16] lib/crypto/sha512: migrate arm64-optimized SHA-512 code to library 2025-06-11 2:09 [PATCH 00/16] SHA-512 library functions Eric Biggers ` (7 preceding siblings ...) 2025-06-11 2:09 ` [PATCH 08/16] lib/crypto/sha512: migrate arm-optimized SHA-512 code to library Eric Biggers @ 2025-06-11 2:09 ` Eric Biggers 2025-06-11 2:09 ` [PATCH 10/16] mips: cavium-octeon: move octeon-crypto.h into asm directory Eric Biggers ` (6 subsequent siblings) 15 siblings, 0 replies; 34+ messages in thread From: Eric Biggers @ 2025-06-11 2:09 UTC (permalink / raw) To: linux-crypto Cc: linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, Ard Biesheuvel, Jason A . Donenfeld , Linus Torvalds From: Eric Biggers <ebiggers@google.com> Instead of exposing the arm64-optimized SHA-512 code via arm64-specific crypto_shash algorithms, instead just implement the sha512_blocks() library function. This is much simpler, it makes the SHA-512 (and SHA-384) library functions be arm64-optimized, and it fixes the longstanding issue where the arm64-optimized SHA-512 code was disabled by default. SHA-512 still remains available through crypto_shash, but individual architectures no longer need to handle it. To match sha512_blocks(), change the type of the nblocks parameter of the assembly functions from int or 'unsigned int' to size_t. Update the ARMv8 CE assembly function accordingly. The scalar assembly function actually already treated it as size_t. Signed-off-by: Eric Biggers <ebiggers@google.com> --- arch/arm64/configs/defconfig | 1 - arch/arm64/crypto/Kconfig | 19 ---- arch/arm64/crypto/Makefile | 14 --- arch/arm64/crypto/sha512-ce-glue.c | 96 ------------------- arch/arm64/crypto/sha512-glue.c | 83 ---------------- lib/crypto/Kconfig | 1 + lib/crypto/Makefile | 10 ++ lib/crypto/arm64/.gitignore | 2 + .../crypto/arm64}/sha512-ce-core.S | 10 +- lib/crypto/arm64/sha512.h | 46 +++++++++ 10 files changed, 64 insertions(+), 218 deletions(-) delete mode 100644 arch/arm64/crypto/sha512-ce-glue.c delete mode 100644 arch/arm64/crypto/sha512-glue.c create mode 100644 lib/crypto/arm64/.gitignore rename {arch/arm64/crypto => lib/crypto/arm64}/sha512-ce-core.S (97%) create mode 100644 lib/crypto/arm64/sha512.h diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig index 897fc686e6a91..b612b78b3b091 100644 --- a/arch/arm64/configs/defconfig +++ b/arch/arm64/configs/defconfig @@ -1742,11 +1742,10 @@ CONFIG_CRYPTO_ECHAINIV=y CONFIG_CRYPTO_MICHAEL_MIC=m CONFIG_CRYPTO_ANSI_CPRNG=y CONFIG_CRYPTO_USER_API_RNG=m CONFIG_CRYPTO_GHASH_ARM64_CE=y CONFIG_CRYPTO_SHA1_ARM64_CE=y -CONFIG_CRYPTO_SHA512_ARM64_CE=m CONFIG_CRYPTO_SHA3_ARM64=m CONFIG_CRYPTO_SM3_ARM64_CE=m CONFIG_CRYPTO_AES_ARM64_CE_BLK=y CONFIG_CRYPTO_AES_ARM64_BS=m CONFIG_CRYPTO_AES_ARM64_CE_CCM=y diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig index c44b0f202a1f5..a9ead99f72c28 100644 --- a/arch/arm64/crypto/Kconfig +++ b/arch/arm64/crypto/Kconfig @@ -34,29 +34,10 @@ config CRYPTO_SHA1_ARM64_CE SHA-1 secure hash algorithm (FIPS 180) Architecture: arm64 using: - ARMv8 Crypto Extensions -config CRYPTO_SHA512_ARM64 - tristate "Hash functions: SHA-384 and SHA-512" - select CRYPTO_HASH - help - SHA-384 and SHA-512 secure hash algorithms (FIPS 180) - - Architecture: arm64 - -config CRYPTO_SHA512_ARM64_CE - tristate "Hash functions: SHA-384 and SHA-512 (ARMv8 Crypto Extensions)" - depends on KERNEL_MODE_NEON - select CRYPTO_HASH - select CRYPTO_SHA512_ARM64 - help - SHA-384 and SHA-512 secure hash algorithms (FIPS 
180) - - Architecture: arm64 using: - - ARMv8 Crypto Extensions - config CRYPTO_SHA3_ARM64 tristate "Hash functions: SHA-3 (ARMv8.2 Crypto Extensions)" depends on KERNEL_MODE_NEON select CRYPTO_HASH select CRYPTO_SHA3 diff --git a/arch/arm64/crypto/Makefile b/arch/arm64/crypto/Makefile index c231c980c5142..228101f125d50 100644 --- a/arch/arm64/crypto/Makefile +++ b/arch/arm64/crypto/Makefile @@ -6,13 +6,10 @@ # obj-$(CONFIG_CRYPTO_SHA1_ARM64_CE) += sha1-ce.o sha1-ce-y := sha1-ce-glue.o sha1-ce-core.o -obj-$(CONFIG_CRYPTO_SHA512_ARM64_CE) += sha512-ce.o -sha512-ce-y := sha512-ce-glue.o sha512-ce-core.o - obj-$(CONFIG_CRYPTO_SHA3_ARM64) += sha3-ce.o sha3-ce-y := sha3-ce-glue.o sha3-ce-core.o obj-$(CONFIG_CRYPTO_SM3_NEON) += sm3-neon.o sm3-neon-y := sm3-neon-glue.o sm3-neon-core.o @@ -51,24 +48,13 @@ obj-$(CONFIG_CRYPTO_AES_ARM64_CE_BLK) += aes-ce-blk.o aes-ce-blk-y := aes-glue-ce.o aes-ce.o obj-$(CONFIG_CRYPTO_AES_ARM64_NEON_BLK) += aes-neon-blk.o aes-neon-blk-y := aes-glue-neon.o aes-neon.o -obj-$(CONFIG_CRYPTO_SHA512_ARM64) += sha512-arm64.o -sha512-arm64-y := sha512-glue.o sha512-core.o - obj-$(CONFIG_CRYPTO_NHPOLY1305_NEON) += nhpoly1305-neon.o nhpoly1305-neon-y := nh-neon-core.o nhpoly1305-neon-glue.o obj-$(CONFIG_CRYPTO_AES_ARM64) += aes-arm64.o aes-arm64-y := aes-cipher-core.o aes-cipher-glue.o obj-$(CONFIG_CRYPTO_AES_ARM64_BS) += aes-neon-bs.o aes-neon-bs-y := aes-neonbs-core.o aes-neonbs-glue.o - -quiet_cmd_perlasm = PERLASM $@ - cmd_perlasm = $(PERL) $(<) void $(@) - -$(obj)/sha512-core.S: $(src)/../lib/crypto/sha2-armv8.pl - $(call cmd,perlasm) - -clean-files += sha512-core.S diff --git a/arch/arm64/crypto/sha512-ce-glue.c b/arch/arm64/crypto/sha512-ce-glue.c deleted file mode 100644 index 6fb3001fa2c9b..0000000000000 --- a/arch/arm64/crypto/sha512-ce-glue.c +++ /dev/null @@ -1,96 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* - * sha512-ce-glue.c - SHA-384/SHA-512 using ARMv8 Crypto Extensions - * - * Copyright (C) 2018 Linaro Ltd <ard.biesheuvel@linaro.org> - * - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License version 2 as - * published by the Free Software Foundation. 
- */ - -#include <asm/neon.h> -#include <crypto/internal/hash.h> -#include <crypto/sha2.h> -#include <crypto/sha512_base.h> -#include <linux/cpufeature.h> -#include <linux/kernel.h> -#include <linux/module.h> - -MODULE_DESCRIPTION("SHA-384/SHA-512 secure hash using ARMv8 Crypto Extensions"); -MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>"); -MODULE_LICENSE("GPL v2"); -MODULE_ALIAS_CRYPTO("sha384"); -MODULE_ALIAS_CRYPTO("sha512"); - -asmlinkage int __sha512_ce_transform(struct sha512_state *sst, u8 const *src, - int blocks); - -static void sha512_ce_transform(struct sha512_state *sst, u8 const *src, - int blocks) -{ - do { - int rem; - - kernel_neon_begin(); - rem = __sha512_ce_transform(sst, src, blocks); - kernel_neon_end(); - src += (blocks - rem) * SHA512_BLOCK_SIZE; - blocks = rem; - } while (blocks); -} - -static int sha512_ce_update(struct shash_desc *desc, const u8 *data, - unsigned int len) -{ - return sha512_base_do_update_blocks(desc, data, len, - sha512_ce_transform); -} - -static int sha512_ce_finup(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) -{ - sha512_base_do_finup(desc, data, len, sha512_ce_transform); - return sha512_base_finish(desc, out); -} - -static struct shash_alg algs[] = { { - .init = sha384_base_init, - .update = sha512_ce_update, - .finup = sha512_ce_finup, - .descsize = SHA512_STATE_SIZE, - .digestsize = SHA384_DIGEST_SIZE, - .base.cra_name = "sha384", - .base.cra_driver_name = "sha384-ce", - .base.cra_priority = 200, - .base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .base.cra_blocksize = SHA512_BLOCK_SIZE, - .base.cra_module = THIS_MODULE, -}, { - .init = sha512_base_init, - .update = sha512_ce_update, - .finup = sha512_ce_finup, - .descsize = SHA512_STATE_SIZE, - .digestsize = SHA512_DIGEST_SIZE, - .base.cra_name = "sha512", - .base.cra_driver_name = "sha512-ce", - .base.cra_priority = 200, - .base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .base.cra_blocksize = SHA512_BLOCK_SIZE, - .base.cra_module = THIS_MODULE, -} }; - -static int __init sha512_ce_mod_init(void) -{ - return crypto_register_shashes(algs, ARRAY_SIZE(algs)); -} - -static void __exit sha512_ce_mod_fini(void) -{ - crypto_unregister_shashes(algs, ARRAY_SIZE(algs)); -} - -module_cpu_feature_match(SHA512, sha512_ce_mod_init); -module_exit(sha512_ce_mod_fini); diff --git a/arch/arm64/crypto/sha512-glue.c b/arch/arm64/crypto/sha512-glue.c deleted file mode 100644 index a78e184c100fa..0000000000000 --- a/arch/arm64/crypto/sha512-glue.c +++ /dev/null @@ -1,83 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * Linux/arm64 port of the OpenSSL SHA512 implementation for AArch64 - * - * Copyright (c) 2016 Linaro Ltd. 
<ard.biesheuvel@linaro.org> - */ - -#include <crypto/internal/hash.h> -#include <crypto/sha2.h> -#include <crypto/sha512_base.h> -#include <linux/kernel.h> -#include <linux/module.h> - -MODULE_DESCRIPTION("SHA-384/SHA-512 secure hash for arm64"); -MODULE_AUTHOR("Andy Polyakov <appro@openssl.org>"); -MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>"); -MODULE_LICENSE("GPL v2"); -MODULE_ALIAS_CRYPTO("sha384"); -MODULE_ALIAS_CRYPTO("sha512"); - -asmlinkage void sha512_blocks_arch(u64 *digest, const void *data, - unsigned int num_blks); - -static void sha512_arm64_transform(struct sha512_state *sst, u8 const *src, - int blocks) -{ - sha512_blocks_arch(sst->state, src, blocks); -} - -static int sha512_update_arm64(struct shash_desc *desc, const u8 *data, - unsigned int len) -{ - return sha512_base_do_update_blocks(desc, data, len, - sha512_arm64_transform); -} - -static int sha512_finup(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) -{ - sha512_base_do_finup(desc, data, len, sha512_arm64_transform); - return sha512_base_finish(desc, out); -} - -static struct shash_alg algs[] = { { - .digestsize = SHA512_DIGEST_SIZE, - .init = sha512_base_init, - .update = sha512_update_arm64, - .finup = sha512_finup, - .descsize = SHA512_STATE_SIZE, - .base.cra_name = "sha512", - .base.cra_driver_name = "sha512-arm64", - .base.cra_priority = 150, - .base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .base.cra_blocksize = SHA512_BLOCK_SIZE, - .base.cra_module = THIS_MODULE, -}, { - .digestsize = SHA384_DIGEST_SIZE, - .init = sha384_base_init, - .update = sha512_update_arm64, - .finup = sha512_finup, - .descsize = SHA512_STATE_SIZE, - .base.cra_name = "sha384", - .base.cra_driver_name = "sha384-arm64", - .base.cra_priority = 150, - .base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .base.cra_blocksize = SHA384_BLOCK_SIZE, - .base.cra_module = THIS_MODULE, -} }; - -static int __init sha512_mod_init(void) -{ - return crypto_register_shashes(algs, ARRAY_SIZE(algs)); -} - -static void __exit sha512_mod_fini(void) -{ - crypto_unregister_shashes(algs, ARRAY_SIZE(algs)); -} - -module_init(sha512_mod_init); -module_exit(sha512_mod_fini); diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig index 83054a496f9cd..5f474a57a041c 100644 --- a/lib/crypto/Kconfig +++ b/lib/crypto/Kconfig @@ -178,10 +178,11 @@ config CRYPTO_LIB_SHA512 config CRYPTO_LIB_SHA512_ARCH bool depends on CRYPTO_LIB_SHA512 default y if ARM && !CPU_V7M + default y if ARM64 config CRYPTO_LIB_SM3 tristate if !KMSAN # avoid false positives from assembly diff --git a/lib/crypto/Makefile b/lib/crypto/Makefile index 41513bc29a5e6..2aef827c025f0 100644 --- a/lib/crypto/Makefile +++ b/lib/crypto/Makefile @@ -5,10 +5,13 @@ obj-y += tests/ aflags-thumb2-$(CONFIG_THUMB2_KERNEL) := -U__thumb2__ -D__thumb2__=1 quiet_cmd_perlasm = PERLASM $@ cmd_perlasm = $(PERL) $(<) > $(@) +quiet_cmd_perlasm_with_args = PERLASM $@ + cmd_perlasm_with_args = $(PERL) $(<) void $(@) + obj-$(CONFIG_CRYPTO_LIB_UTILS) += libcryptoutils.o libcryptoutils-y := memneq.o utils.o # chacha is used by the /dev/random driver which is always builtin obj-y += chacha.o @@ -78,10 +81,17 @@ $(obj)/arm/sha512-core.S: $(src)/arm/sha512-armv4.pl $(call cmd,perlasm) clean-files += arm/sha512-core.S AFLAGS_arm/sha512-core.o += $(aflags-thumb2-y) endif +ifeq ($(CONFIG_ARM64),y) +libsha512-y += arm64/sha512-core.o +$(obj)/arm64/sha512-core.S: $(src)/../../arch/arm64/lib/crypto/sha2-armv8.pl + $(call cmd,perlasm_with_args) 
+clean-files += arm64/sha512-core.S +libsha512-$(CONFIG_KERNEL_MODE_NEON) += arm64/sha512-ce-core.o +endif endif # CONFIG_CRYPTO_LIB_SHA512_ARCH obj-$(CONFIG_MPILIB) += mpi/ obj-$(CONFIG_CRYPTO_SELFTESTS) += simd.o diff --git a/lib/crypto/arm64/.gitignore b/lib/crypto/arm64/.gitignore new file mode 100644 index 0000000000000..670a4d97b5684 --- /dev/null +++ b/lib/crypto/arm64/.gitignore @@ -0,0 +1,2 @@ +# SPDX-License-Identifier: GPL-2.0-only +sha512-core.S diff --git a/arch/arm64/crypto/sha512-ce-core.S b/lib/crypto/arm64/sha512-ce-core.S similarity index 97% rename from arch/arm64/crypto/sha512-ce-core.S rename to lib/crypto/arm64/sha512-ce-core.S index 91ef68b15fcc6..7d870a435ea38 100644 --- a/arch/arm64/crypto/sha512-ce-core.S +++ b/lib/crypto/arm64/sha512-ce-core.S @@ -100,12 +100,12 @@ add v\i4\().2d, v\i1\().2d, v\i3\().2d sha512h2 q\i3, q\i1, v\i0\().2d .endm /* - * int __sha512_ce_transform(struct sha512_state *sst, u8 const *src, - * int blocks) + * size_t __sha512_ce_transform(struct sha512_block_state *state, + * const u8 *data, size_t nblocks); */ .text SYM_FUNC_START(__sha512_ce_transform) /* load state */ ld1 {v8.2d-v11.2d}, [x0] @@ -115,11 +115,11 @@ SYM_FUNC_START(__sha512_ce_transform) ld1 {v20.2d-v23.2d}, [x3], #64 /* load input */ 0: ld1 {v12.2d-v15.2d}, [x1], #64 ld1 {v16.2d-v19.2d}, [x1], #64 - sub w2, w2, #1 + sub x2, x2, #1 CPU_LE( rev64 v12.16b, v12.16b ) CPU_LE( rev64 v13.16b, v13.16b ) CPU_LE( rev64 v14.16b, v14.16b ) CPU_LE( rev64 v15.16b, v15.16b ) @@ -195,12 +195,12 @@ CPU_LE( rev64 v19.16b, v19.16b ) add v10.2d, v10.2d, v2.2d add v11.2d, v11.2d, v3.2d cond_yield 3f, x4, x5 /* handled all input blocks? */ - cbnz w2, 0b + cbnz x2, 0b /* store new state */ 3: st1 {v8.2d-v11.2d}, [x0] - mov w0, w2 + mov x0, x2 ret SYM_FUNC_END(__sha512_ce_transform) diff --git a/lib/crypto/arm64/sha512.h b/lib/crypto/arm64/sha512.h new file mode 100644 index 0000000000000..eae14f9752e0b --- /dev/null +++ b/lib/crypto/arm64/sha512.h @@ -0,0 +1,46 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * arm64-optimized SHA-512 block function + * + * Copyright 2025 Google LLC + */ + +#include <asm/neon.h> +#include <crypto/internal/simd.h> +#include <linux/cpufeature.h> + +static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_sha512_insns); + +asmlinkage void sha512_blocks_arch(struct sha512_block_state *state, + const u8 *data, size_t nblocks); +asmlinkage size_t __sha512_ce_transform(struct sha512_block_state *state, + const u8 *data, size_t nblocks); + +static void sha512_blocks(struct sha512_block_state *state, + const u8 *data, size_t nblocks) +{ + if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && + static_branch_likely(&have_sha512_insns) && + likely(crypto_simd_usable())) { + do { + size_t rem; + + kernel_neon_begin(); + rem = __sha512_ce_transform(state, data, nblocks); + kernel_neon_end(); + data += (nblocks - rem) * SHA512_BLOCK_SIZE; + nblocks = rem; + } while (nblocks); + } else { + sha512_blocks_arch(state, data, nblocks); + } +} + +#ifdef CONFIG_KERNEL_MODE_NEON +#define sha512_mod_init_arch sha512_mod_init_arch +static inline void sha512_mod_init_arch(void) +{ + if (cpu_have_named_feature(SHA512)) + static_branch_enable(&have_sha512_insns); +} +#endif /* CONFIG_KERNEL_MODE_NEON */ -- 2.49.0 _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH 10/16] mips: cavium-octeon: move octeon-crypto.h into asm directory 2025-06-11 2:09 [PATCH 00/16] SHA-512 library functions Eric Biggers ` (8 preceding siblings ...) 2025-06-11 2:09 ` [PATCH 09/16] lib/crypto/sha512: migrate arm64-optimized " Eric Biggers @ 2025-06-11 2:09 ` Eric Biggers 2025-06-11 2:09 ` [PATCH 11/16] lib/crypto/sha512: migrate mips-optimized SHA-512 code to library Eric Biggers ` (5 subsequent siblings) 15 siblings, 0 replies; 34+ messages in thread From: Eric Biggers @ 2025-06-11 2:09 UTC (permalink / raw) To: linux-crypto Cc: linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, Ard Biesheuvel, Jason A . Donenfeld , Linus Torvalds From: Eric Biggers <ebiggers@google.com> Since arch/mips/cavium-octeon/crypto/octeon-crypto.h is now needed outside of its directory, move it to arch/mips/include/asm/octeon/crypto.h so that it can be included as <asm/octeon/crypto.h>. Signed-off-by: Eric Biggers <ebiggers@google.com> --- arch/mips/cavium-octeon/crypto/octeon-crypto.c | 3 +-- arch/mips/cavium-octeon/crypto/octeon-md5.c | 3 +-- arch/mips/cavium-octeon/crypto/octeon-sha1.c | 3 +-- arch/mips/cavium-octeon/crypto/octeon-sha256.c | 3 +-- arch/mips/cavium-octeon/crypto/octeon-sha512.c | 3 +-- .../crypto/octeon-crypto.h => include/asm/octeon/crypto.h} | 0 6 files changed, 5 insertions(+), 10 deletions(-) rename arch/mips/{cavium-octeon/crypto/octeon-crypto.h => include/asm/octeon/crypto.h} (100%) diff --git a/arch/mips/cavium-octeon/crypto/octeon-crypto.c b/arch/mips/cavium-octeon/crypto/octeon-crypto.c index cfb4a146cf178..0ff8559391f5b 100644 --- a/arch/mips/cavium-octeon/crypto/octeon-crypto.c +++ b/arch/mips/cavium-octeon/crypto/octeon-crypto.c @@ -5,16 +5,15 @@ * * Copyright (C) 2004-2012 Cavium Networks */ #include <asm/cop2.h> +#include <asm/octeon/crypto.h> #include <linux/export.h> #include <linux/interrupt.h> #include <linux/sched/task_stack.h> -#include "octeon-crypto.h" - /** * Enable access to Octeon's COP2 crypto hardware for kernel use. Wrap any * crypto operations in calls to octeon_crypto_enable/disable in order to make * sure the state of COP2 isn't corrupted if userspace is also performing * hardware crypto operations. Allocate the state parameter on the stack. diff --git a/arch/mips/cavium-octeon/crypto/octeon-md5.c b/arch/mips/cavium-octeon/crypto/octeon-md5.c index fbc84eb7fedf5..a8ce831e2cebd 100644 --- a/arch/mips/cavium-octeon/crypto/octeon-md5.c +++ b/arch/mips/cavium-octeon/crypto/octeon-md5.c @@ -17,20 +17,19 @@ * under the terms of the GNU General Public License as published by the Free * Software Foundation; either version 2 of the License, or (at your option) * any later version. */ +#include <asm/octeon/crypto.h> #include <asm/octeon/octeon.h> #include <crypto/internal/hash.h> #include <crypto/md5.h> #include <linux/kernel.h> #include <linux/module.h> #include <linux/string.h> #include <linux/unaligned.h> -#include "octeon-crypto.h" - struct octeon_md5_state { __le32 hash[MD5_HASH_WORDS]; u64 byte_count; }; diff --git a/arch/mips/cavium-octeon/crypto/octeon-sha1.c b/arch/mips/cavium-octeon/crypto/octeon-sha1.c index e70f21a473daf..e4a369a7764fb 100644 --- a/arch/mips/cavium-octeon/crypto/octeon-sha1.c +++ b/arch/mips/cavium-octeon/crypto/octeon-sha1.c @@ -11,20 +11,19 @@ * Copyright (c) Alan Smithee. 
* Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk> * Copyright (c) Jean-Francois Dive <jef@linuxbe.org> */ +#include <asm/octeon/crypto.h> #include <asm/octeon/octeon.h> #include <crypto/internal/hash.h> #include <crypto/sha1.h> #include <crypto/sha1_base.h> #include <linux/errno.h> #include <linux/kernel.h> #include <linux/module.h> -#include "octeon-crypto.h" - /* * We pass everything as 64-bit. OCTEON can handle misaligned data. */ static void octeon_sha1_store_hash(struct sha1_state *sctx) diff --git a/arch/mips/cavium-octeon/crypto/octeon-sha256.c b/arch/mips/cavium-octeon/crypto/octeon-sha256.c index f93faaf1f4af6..c20038239cb6b 100644 --- a/arch/mips/cavium-octeon/crypto/octeon-sha256.c +++ b/arch/mips/cavium-octeon/crypto/octeon-sha256.c @@ -10,17 +10,16 @@ * Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk> * Copyright (c) 2002 James Morris <jmorris@intercode.com.au> * SHA224 Support Copyright 2007 Intel Corporation <jonathan.lynch@intel.com> */ +#include <asm/octeon/crypto.h> #include <asm/octeon/octeon.h> #include <crypto/internal/sha2.h> #include <linux/kernel.h> #include <linux/module.h> -#include "octeon-crypto.h" - /* * We pass everything as 64-bit. OCTEON can handle misaligned data. */ void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS], diff --git a/arch/mips/cavium-octeon/crypto/octeon-sha512.c b/arch/mips/cavium-octeon/crypto/octeon-sha512.c index 215311053db3c..53de74f642db0 100644 --- a/arch/mips/cavium-octeon/crypto/octeon-sha512.c +++ b/arch/mips/cavium-octeon/crypto/octeon-sha512.c @@ -11,19 +11,18 @@ * Copyright (c) Jean-Luc Cooke <jlcooke@certainkey.com> * Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk> * Copyright (c) 2003 Kyle McMartin <kyle@debian.org> */ +#include <asm/octeon/crypto.h> #include <asm/octeon/octeon.h> #include <crypto/internal/hash.h> #include <crypto/sha2.h> #include <crypto/sha512_base.h> #include <linux/kernel.h> #include <linux/module.h> -#include "octeon-crypto.h" - /* * We pass everything as 64-bit. OCTEON can handle misaligned data. */ static void octeon_sha512_store_hash(struct sha512_state *sctx) diff --git a/arch/mips/cavium-octeon/crypto/octeon-crypto.h b/arch/mips/include/asm/octeon/crypto.h similarity index 100% rename from arch/mips/cavium-octeon/crypto/octeon-crypto.h rename to arch/mips/include/asm/octeon/crypto.h -- 2.49.0 _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH 11/16] lib/crypto/sha512: migrate mips-optimized SHA-512 code to library 2025-06-11 2:09 [PATCH 00/16] SHA-512 library functions Eric Biggers ` (9 preceding siblings ...) 2025-06-11 2:09 ` [PATCH 10/16] mips: cavium-octeon: move octeon-crypto.h into asm directory Eric Biggers @ 2025-06-11 2:09 ` Eric Biggers 2025-06-11 2:09 ` [PATCH 12/16] lib/crypto/sha512: migrate riscv-optimized " Eric Biggers ` (4 subsequent siblings) 15 siblings, 0 replies; 34+ messages in thread From: Eric Biggers @ 2025-06-11 2:09 UTC (permalink / raw) To: linux-crypto Cc: linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, Ard Biesheuvel, Jason A . Donenfeld , Linus Torvalds From: Eric Biggers <ebiggers@google.com> Instead of exposing the mips-optimized SHA-512 code via mips-specific crypto_shash algorithms, instead just implement the sha512_blocks() library function. This is much simpler, it makes the SHA-512 (and SHA-384) library functions be mips-optimized, and it fixes the longstanding issue where the mips-optimized SHA-512 code was disabled by default. SHA-512 still remains available through crypto_shash, but individual architectures no longer need to handle it. Note: to see the diff from arch/mips/cavium-octeon/crypto/octeon-sha512.c to lib/crypto/mips/sha512.h, view this commit with 'git show -M10'. Signed-off-by: Eric Biggers <ebiggers@google.com> --- arch/mips/cavium-octeon/crypto/Makefile | 1 - .../mips/cavium-octeon/crypto/octeon-sha512.c | 166 ------------------ arch/mips/configs/cavium_octeon_defconfig | 1 - arch/mips/crypto/Kconfig | 10 -- lib/crypto/Kconfig | 1 + lib/crypto/mips/sha512.h | 74 ++++++++ 6 files changed, 75 insertions(+), 178 deletions(-) delete mode 100644 arch/mips/cavium-octeon/crypto/octeon-sha512.c create mode 100644 lib/crypto/mips/sha512.h diff --git a/arch/mips/cavium-octeon/crypto/Makefile b/arch/mips/cavium-octeon/crypto/Makefile index db26c73fa0eda..168b19ef7ce89 100644 --- a/arch/mips/cavium-octeon/crypto/Makefile +++ b/arch/mips/cavium-octeon/crypto/Makefile @@ -6,6 +6,5 @@ obj-y += octeon-crypto.o obj-$(CONFIG_CRYPTO_MD5_OCTEON) += octeon-md5.o obj-$(CONFIG_CRYPTO_SHA1_OCTEON) += octeon-sha1.o obj-$(CONFIG_CRYPTO_SHA256_OCTEON) += octeon-sha256.o -obj-$(CONFIG_CRYPTO_SHA512_OCTEON) += octeon-sha512.o diff --git a/arch/mips/cavium-octeon/crypto/octeon-sha512.c b/arch/mips/cavium-octeon/crypto/octeon-sha512.c deleted file mode 100644 index 53de74f642db0..0000000000000 --- a/arch/mips/cavium-octeon/crypto/octeon-sha512.c +++ /dev/null @@ -1,166 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * Cryptographic API. - * - * SHA-512 and SHA-384 Secure Hash Algorithm. - * - * Adapted for OCTEON by Aaro Koskinen <aaro.koskinen@iki.fi>. - * - * Based on crypto/sha512_generic.c, which is: - * - * Copyright (c) Jean-Luc Cooke <jlcooke@certainkey.com> - * Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk> - * Copyright (c) 2003 Kyle McMartin <kyle@debian.org> - */ - -#include <asm/octeon/crypto.h> -#include <asm/octeon/octeon.h> -#include <crypto/internal/hash.h> -#include <crypto/sha2.h> -#include <crypto/sha512_base.h> -#include <linux/kernel.h> -#include <linux/module.h> - -/* - * We pass everything as 64-bit. OCTEON can handle misaligned data. 
- */ - -static void octeon_sha512_store_hash(struct sha512_state *sctx) -{ - write_octeon_64bit_hash_sha512(sctx->state[0], 0); - write_octeon_64bit_hash_sha512(sctx->state[1], 1); - write_octeon_64bit_hash_sha512(sctx->state[2], 2); - write_octeon_64bit_hash_sha512(sctx->state[3], 3); - write_octeon_64bit_hash_sha512(sctx->state[4], 4); - write_octeon_64bit_hash_sha512(sctx->state[5], 5); - write_octeon_64bit_hash_sha512(sctx->state[6], 6); - write_octeon_64bit_hash_sha512(sctx->state[7], 7); -} - -static void octeon_sha512_read_hash(struct sha512_state *sctx) -{ - sctx->state[0] = read_octeon_64bit_hash_sha512(0); - sctx->state[1] = read_octeon_64bit_hash_sha512(1); - sctx->state[2] = read_octeon_64bit_hash_sha512(2); - sctx->state[3] = read_octeon_64bit_hash_sha512(3); - sctx->state[4] = read_octeon_64bit_hash_sha512(4); - sctx->state[5] = read_octeon_64bit_hash_sha512(5); - sctx->state[6] = read_octeon_64bit_hash_sha512(6); - sctx->state[7] = read_octeon_64bit_hash_sha512(7); -} - -static void octeon_sha512_transform(struct sha512_state *sctx, - const u8 *src, int blocks) -{ - do { - const u64 *block = (const u64 *)src; - - write_octeon_64bit_block_sha512(block[0], 0); - write_octeon_64bit_block_sha512(block[1], 1); - write_octeon_64bit_block_sha512(block[2], 2); - write_octeon_64bit_block_sha512(block[3], 3); - write_octeon_64bit_block_sha512(block[4], 4); - write_octeon_64bit_block_sha512(block[5], 5); - write_octeon_64bit_block_sha512(block[6], 6); - write_octeon_64bit_block_sha512(block[7], 7); - write_octeon_64bit_block_sha512(block[8], 8); - write_octeon_64bit_block_sha512(block[9], 9); - write_octeon_64bit_block_sha512(block[10], 10); - write_octeon_64bit_block_sha512(block[11], 11); - write_octeon_64bit_block_sha512(block[12], 12); - write_octeon_64bit_block_sha512(block[13], 13); - write_octeon_64bit_block_sha512(block[14], 14); - octeon_sha512_start(block[15]); - - src += SHA512_BLOCK_SIZE; - } while (--blocks); -} - -static int octeon_sha512_update(struct shash_desc *desc, const u8 *data, - unsigned int len) -{ - struct sha512_state *sctx = shash_desc_ctx(desc); - struct octeon_cop2_state state; - unsigned long flags; - int remain; - - flags = octeon_crypto_enable(&state); - octeon_sha512_store_hash(sctx); - - remain = sha512_base_do_update_blocks(desc, data, len, - octeon_sha512_transform); - - octeon_sha512_read_hash(sctx); - octeon_crypto_disable(&state, flags); - return remain; -} - -static int octeon_sha512_finup(struct shash_desc *desc, const u8 *src, - unsigned int len, u8 *hash) -{ - struct sha512_state *sctx = shash_desc_ctx(desc); - struct octeon_cop2_state state; - unsigned long flags; - - flags = octeon_crypto_enable(&state); - octeon_sha512_store_hash(sctx); - - sha512_base_do_finup(desc, src, len, octeon_sha512_transform); - - octeon_sha512_read_hash(sctx); - octeon_crypto_disable(&state, flags); - return sha512_base_finish(desc, hash); -} - -static struct shash_alg octeon_sha512_algs[2] = { { - .digestsize = SHA512_DIGEST_SIZE, - .init = sha512_base_init, - .update = octeon_sha512_update, - .finup = octeon_sha512_finup, - .descsize = SHA512_STATE_SIZE, - .base = { - .cra_name = "sha512", - .cra_driver_name= "octeon-sha512", - .cra_priority = OCTEON_CR_OPCODE_PRIORITY, - .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize = SHA512_BLOCK_SIZE, - .cra_module = THIS_MODULE, - } -}, { - .digestsize = SHA384_DIGEST_SIZE, - .init = sha384_base_init, - .update = octeon_sha512_update, - .finup = octeon_sha512_finup, - .descsize = 
SHA512_STATE_SIZE, - .base = { - .cra_name = "sha384", - .cra_driver_name= "octeon-sha384", - .cra_priority = OCTEON_CR_OPCODE_PRIORITY, - .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize = SHA384_BLOCK_SIZE, - .cra_module = THIS_MODULE, - } -} }; - -static int __init octeon_sha512_mod_init(void) -{ - if (!octeon_has_crypto()) - return -ENOTSUPP; - return crypto_register_shashes(octeon_sha512_algs, - ARRAY_SIZE(octeon_sha512_algs)); -} - -static void __exit octeon_sha512_mod_fini(void) -{ - crypto_unregister_shashes(octeon_sha512_algs, - ARRAY_SIZE(octeon_sha512_algs)); -} - -module_init(octeon_sha512_mod_init); -module_exit(octeon_sha512_mod_fini); - -MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("SHA-512 and SHA-384 Secure Hash Algorithms (OCTEON)"); -MODULE_AUTHOR("Aaro Koskinen <aaro.koskinen@iki.fi>"); diff --git a/arch/mips/configs/cavium_octeon_defconfig b/arch/mips/configs/cavium_octeon_defconfig index 88ae0aa85364b..effdfb2bb738b 100644 --- a/arch/mips/configs/cavium_octeon_defconfig +++ b/arch/mips/configs/cavium_octeon_defconfig @@ -155,11 +155,10 @@ CONFIG_SECURITY=y CONFIG_SECURITY_NETWORK=y CONFIG_CRYPTO_CBC=y CONFIG_CRYPTO_HMAC=y CONFIG_CRYPTO_MD5_OCTEON=y CONFIG_CRYPTO_SHA1_OCTEON=m -CONFIG_CRYPTO_SHA512_OCTEON=m CONFIG_CRYPTO_DES=y CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y CONFIG_DEBUG_FS=y CONFIG_MAGIC_SYSRQ=y # CONFIG_SCHED_DEBUG is not set diff --git a/arch/mips/crypto/Kconfig b/arch/mips/crypto/Kconfig index 6bf073ae7613f..51a76a5ee3b16 100644 --- a/arch/mips/crypto/Kconfig +++ b/arch/mips/crypto/Kconfig @@ -20,16 +20,6 @@ config CRYPTO_SHA1_OCTEON help SHA-1 secure hash algorithm (FIPS 180) Architecture: mips OCTEON -config CRYPTO_SHA512_OCTEON - tristate "Hash functions: SHA-384 and SHA-512 (OCTEON)" - depends on CPU_CAVIUM_OCTEON - select CRYPTO_SHA512 - select CRYPTO_HASH - help - SHA-384 and SHA-512 secure hash algorithms (FIPS 180) - - Architecture: mips OCTEON using crypto instructions, when available - endmenu diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig index 5f474a57a041c..7e54348f70ec1 100644 --- a/lib/crypto/Kconfig +++ b/lib/crypto/Kconfig @@ -179,10 +179,11 @@ config CRYPTO_LIB_SHA512 config CRYPTO_LIB_SHA512_ARCH bool depends on CRYPTO_LIB_SHA512 default y if ARM && !CPU_V7M default y if ARM64 + default y if MIPS && CPU_CAVIUM_OCTEON config CRYPTO_LIB_SM3 tristate if !KMSAN # avoid false positives from assembly diff --git a/lib/crypto/mips/sha512.h b/lib/crypto/mips/sha512.h new file mode 100644 index 0000000000000..b3ffbc1e8ca8e --- /dev/null +++ b/lib/crypto/mips/sha512.h @@ -0,0 +1,74 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * Cryptographic API. + * + * SHA-512 and SHA-384 Secure Hash Algorithm. + * + * Adapted for OCTEON by Aaro Koskinen <aaro.koskinen@iki.fi>. + * + * Based on crypto/sha512_generic.c, which is: + * + * Copyright (c) Jean-Luc Cooke <jlcooke@certainkey.com> + * Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk> + * Copyright (c) 2003 Kyle McMartin <kyle@debian.org> + */ + +#include <asm/octeon/crypto.h> +#include <asm/octeon/octeon.h> + +/* + * We pass everything as 64-bit. OCTEON can handle misaligned data. 
+ */ + +static void sha512_blocks(struct sha512_block_state *state, + const u8 *data, size_t nblocks) +{ + struct octeon_cop2_state cop2_state; + unsigned long flags; + + if (!octeon_has_crypto()) + return sha512_blocks_generic(state, data, nblocks); + + flags = octeon_crypto_enable(&cop2_state); + write_octeon_64bit_hash_sha512(state->h[0], 0); + write_octeon_64bit_hash_sha512(state->h[1], 1); + write_octeon_64bit_hash_sha512(state->h[2], 2); + write_octeon_64bit_hash_sha512(state->h[3], 3); + write_octeon_64bit_hash_sha512(state->h[4], 4); + write_octeon_64bit_hash_sha512(state->h[5], 5); + write_octeon_64bit_hash_sha512(state->h[6], 6); + write_octeon_64bit_hash_sha512(state->h[7], 7); + + do { + const u64 *block = (const u64 *)data; + + write_octeon_64bit_block_sha512(block[0], 0); + write_octeon_64bit_block_sha512(block[1], 1); + write_octeon_64bit_block_sha512(block[2], 2); + write_octeon_64bit_block_sha512(block[3], 3); + write_octeon_64bit_block_sha512(block[4], 4); + write_octeon_64bit_block_sha512(block[5], 5); + write_octeon_64bit_block_sha512(block[6], 6); + write_octeon_64bit_block_sha512(block[7], 7); + write_octeon_64bit_block_sha512(block[8], 8); + write_octeon_64bit_block_sha512(block[9], 9); + write_octeon_64bit_block_sha512(block[10], 10); + write_octeon_64bit_block_sha512(block[11], 11); + write_octeon_64bit_block_sha512(block[12], 12); + write_octeon_64bit_block_sha512(block[13], 13); + write_octeon_64bit_block_sha512(block[14], 14); + octeon_sha512_start(block[15]); + + data += SHA512_BLOCK_SIZE; + } while (--nblocks); + + state->h[0] = read_octeon_64bit_hash_sha512(0); + state->h[1] = read_octeon_64bit_hash_sha512(1); + state->h[2] = read_octeon_64bit_hash_sha512(2); + state->h[3] = read_octeon_64bit_hash_sha512(3); + state->h[4] = read_octeon_64bit_hash_sha512(4); + state->h[5] = read_octeon_64bit_hash_sha512(5); + state->h[6] = read_octeon_64bit_hash_sha512(6); + state->h[7] = read_octeon_64bit_hash_sha512(7); + octeon_crypto_disable(&cop2_state, flags); +} -- 2.49.0 _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH 12/16] lib/crypto/sha512: migrate riscv-optimized SHA-512 code to library 2025-06-11 2:09 [PATCH 00/16] SHA-512 library functions Eric Biggers ` (10 preceding siblings ...) 2025-06-11 2:09 ` [PATCH 11/16] lib/crypto/sha512: migrate mips-optimized SHA-512 code to library Eric Biggers @ 2025-06-11 2:09 ` Eric Biggers 2025-06-11 2:09 ` [PATCH 13/16] lib/crypto/sha512: migrate s390-optimized " Eric Biggers ` (3 subsequent siblings) 15 siblings, 0 replies; 34+ messages in thread From: Eric Biggers @ 2025-06-11 2:09 UTC (permalink / raw) To: linux-crypto Cc: linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, Ard Biesheuvel, Jason A . Donenfeld , Linus Torvalds From: Eric Biggers <ebiggers@google.com> Instead of exposing the riscv-optimized SHA-512 code via riscv-specific crypto_shash algorithms, instead just implement the sha512_blocks() library function. This is much simpler, it makes the SHA-512 (and SHA-384) library functions be riscv-optimized, and it fixes the longstanding issue where the riscv-optimized SHA-512 code was disabled by default. SHA-512 still remains available through crypto_shash, but individual architectures no longer need to handle it. To match sha512_blocks(), change the type of the nblocks parameter of the assembly function from int to size_t. The assembly function actually already treated it as size_t. Note: to see the diff from arch/riscv/crypto/sha512-riscv64-glue.c to lib/crypto/riscv/sha512.h, view this commit with 'git show -M10'. Signed-off-by: Eric Biggers <ebiggers@google.com> --- arch/riscv/crypto/Kconfig | 12 -- arch/riscv/crypto/Makefile | 3 - arch/riscv/crypto/sha512-riscv64-glue.c | 130 ------------------ lib/crypto/Kconfig | 1 + lib/crypto/Makefile | 2 + .../riscv}/sha512-riscv64-zvknhb-zvkb.S | 4 +- lib/crypto/riscv/sha512.h | 41 ++++++ 7 files changed, 46 insertions(+), 147 deletions(-) delete mode 100644 arch/riscv/crypto/sha512-riscv64-glue.c rename {arch/riscv/crypto => lib/crypto/riscv}/sha512-riscv64-zvknhb-zvkb.S (98%) create mode 100644 lib/crypto/riscv/sha512.h diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig index 53e4e1eacf554..a75d6325607b4 100644 --- a/arch/riscv/crypto/Kconfig +++ b/arch/riscv/crypto/Kconfig @@ -26,22 +26,10 @@ config CRYPTO_GHASH_RISCV64 GCM GHASH function (NIST SP 800-38D) Architecture: riscv64 using: - Zvkg vector crypto extension -config CRYPTO_SHA512_RISCV64 - tristate "Hash functions: SHA-384 and SHA-512" - depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO - select CRYPTO_LIB_SHA512 - select CRYPTO_SHA512 - help - SHA-384 and SHA-512 secure hash algorithm (FIPS 180) - - Architecture: riscv64 using: - - Zvknhb vector crypto extension - - Zvkb vector crypto extension - config CRYPTO_SM3_RISCV64 tristate "Hash functions: SM3 (ShangMi 3)" depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO select CRYPTO_HASH select CRYPTO_LIB_SM3 diff --git a/arch/riscv/crypto/Makefile b/arch/riscv/crypto/Makefile index e10e8257734e3..183495a95cc0e 100644 --- a/arch/riscv/crypto/Makefile +++ b/arch/riscv/crypto/Makefile @@ -5,13 +5,10 @@ aes-riscv64-y := aes-riscv64-glue.o aes-riscv64-zvkned.o \ aes-riscv64-zvkned-zvbb-zvkg.o aes-riscv64-zvkned-zvkb.o obj-$(CONFIG_CRYPTO_GHASH_RISCV64) += ghash-riscv64.o ghash-riscv64-y := ghash-riscv64-glue.o ghash-riscv64-zvkg.o -obj-$(CONFIG_CRYPTO_SHA512_RISCV64) += sha512-riscv64.o -sha512-riscv64-y := sha512-riscv64-glue.o sha512-riscv64-zvknhb-zvkb.o - obj-$(CONFIG_CRYPTO_SM3_RISCV64) += sm3-riscv64.o sm3-riscv64-y 
:= sm3-riscv64-glue.o sm3-riscv64-zvksh-zvkb.o obj-$(CONFIG_CRYPTO_SM4_RISCV64) += sm4-riscv64.o sm4-riscv64-y := sm4-riscv64-glue.o sm4-riscv64-zvksed-zvkb.o diff --git a/arch/riscv/crypto/sha512-riscv64-glue.c b/arch/riscv/crypto/sha512-riscv64-glue.c deleted file mode 100644 index b3dbc71de07b0..0000000000000 --- a/arch/riscv/crypto/sha512-riscv64-glue.c +++ /dev/null @@ -1,130 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * SHA-512 and SHA-384 using the RISC-V vector crypto extensions - * - * Copyright (C) 2023 VRULL GmbH - * Author: Heiko Stuebner <heiko.stuebner@vrull.eu> - * - * Copyright (C) 2023 SiFive, Inc. - * Author: Jerry Shih <jerry.shih@sifive.com> - */ - -#include <asm/simd.h> -#include <asm/vector.h> -#include <crypto/internal/hash.h> -#include <crypto/internal/simd.h> -#include <crypto/sha512_base.h> -#include <linux/kernel.h> -#include <linux/module.h> - -/* - * Note: the asm function only uses the 'state' field of struct sha512_state. - * It is assumed to be the first field. - */ -asmlinkage void sha512_transform_zvknhb_zvkb( - struct sha512_state *state, const u8 *data, int num_blocks); - -static void sha512_block(struct sha512_state *state, const u8 *data, - int num_blocks) -{ - /* - * Ensure struct sha512_state begins directly with the SHA-512 - * 512-bit internal state, as this is what the asm function expects. - */ - BUILD_BUG_ON(offsetof(struct sha512_state, state) != 0); - - if (crypto_simd_usable()) { - kernel_vector_begin(); - sha512_transform_zvknhb_zvkb(state, data, num_blocks); - kernel_vector_end(); - } else { - struct __sha512_ctx ctx = {}; - - static_assert(sizeof(ctx.state) == sizeof(state->state)); - memcpy(&ctx.state, state->state, sizeof(ctx.state)); - __sha512_update(&ctx, data, - (size_t)num_blocks * SHA512_BLOCK_SIZE); - memcpy(state->state, &ctx.state, sizeof(state->state)); - } -} - -static int riscv64_sha512_update(struct shash_desc *desc, const u8 *data, - unsigned int len) -{ - return sha512_base_do_update_blocks(desc, data, len, sha512_block); -} - -static int riscv64_sha512_finup(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) -{ - sha512_base_do_finup(desc, data, len, sha512_block); - return sha512_base_finish(desc, out); -} - -static int riscv64_sha512_digest(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) -{ - return sha512_base_init(desc) ?: - riscv64_sha512_finup(desc, data, len, out); -} - -static struct shash_alg riscv64_sha512_algs[] = { - { - .init = sha512_base_init, - .update = riscv64_sha512_update, - .finup = riscv64_sha512_finup, - .digest = riscv64_sha512_digest, - .descsize = SHA512_STATE_SIZE, - .digestsize = SHA512_DIGEST_SIZE, - .base = { - .cra_blocksize = SHA512_BLOCK_SIZE, - .cra_priority = 300, - .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_name = "sha512", - .cra_driver_name = "sha512-riscv64-zvknhb-zvkb", - .cra_module = THIS_MODULE, - }, - }, { - .init = sha384_base_init, - .update = riscv64_sha512_update, - .finup = riscv64_sha512_finup, - .descsize = SHA512_STATE_SIZE, - .digestsize = SHA384_DIGEST_SIZE, - .base = { - .cra_blocksize = SHA384_BLOCK_SIZE, - .cra_priority = 300, - .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_name = "sha384", - .cra_driver_name = "sha384-riscv64-zvknhb-zvkb", - .cra_module = THIS_MODULE, - }, - }, -}; - -static int __init riscv64_sha512_mod_init(void) -{ - if (riscv_isa_extension_available(NULL, ZVKNHB) && - riscv_isa_extension_available(NULL, ZVKB) && - 
riscv_vector_vlen() >= 128) - return crypto_register_shashes(riscv64_sha512_algs, - ARRAY_SIZE(riscv64_sha512_algs)); - - return -ENODEV; -} - -static void __exit riscv64_sha512_mod_exit(void) -{ - crypto_unregister_shashes(riscv64_sha512_algs, - ARRAY_SIZE(riscv64_sha512_algs)); -} - -module_init(riscv64_sha512_mod_init); -module_exit(riscv64_sha512_mod_exit); - -MODULE_DESCRIPTION("SHA-512 (RISC-V accelerated)"); -MODULE_AUTHOR("Heiko Stuebner <heiko.stuebner@vrull.eu>"); -MODULE_LICENSE("GPL"); -MODULE_ALIAS_CRYPTO("sha512"); -MODULE_ALIAS_CRYPTO("sha384"); diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig index 7e54348f70ec1..482d934cc5ecc 100644 --- a/lib/crypto/Kconfig +++ b/lib/crypto/Kconfig @@ -180,10 +180,11 @@ config CRYPTO_LIB_SHA512_ARCH bool depends on CRYPTO_LIB_SHA512 default y if ARM && !CPU_V7M default y if ARM64 default y if MIPS && CPU_CAVIUM_OCTEON + default y if RISCV && 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO config CRYPTO_LIB_SM3 tristate if !KMSAN # avoid false positives from assembly diff --git a/lib/crypto/Makefile b/lib/crypto/Makefile index 2aef827c025f0..bfa35cc235cea 100644 --- a/lib/crypto/Makefile +++ b/lib/crypto/Makefile @@ -88,10 +88,12 @@ libsha512-y += arm64/sha512-core.o $(obj)/arm64/sha512-core.S: $(src)/../../arch/arm64/lib/crypto/sha2-armv8.pl $(call cmd,perlasm_with_args) clean-files += arm64/sha512-core.S libsha512-$(CONFIG_KERNEL_MODE_NEON) += arm64/sha512-ce-core.o endif + +libsha512-$(CONFIG_RISCV) += riscv/sha512-riscv64-zvknhb-zvkb.o endif # CONFIG_CRYPTO_LIB_SHA512_ARCH obj-$(CONFIG_MPILIB) += mpi/ obj-$(CONFIG_CRYPTO_SELFTESTS) += simd.o diff --git a/arch/riscv/crypto/sha512-riscv64-zvknhb-zvkb.S b/lib/crypto/riscv/sha512-riscv64-zvknhb-zvkb.S similarity index 98% rename from arch/riscv/crypto/sha512-riscv64-zvknhb-zvkb.S rename to lib/crypto/riscv/sha512-riscv64-zvknhb-zvkb.S index 89f4a10d12dd6..b41eebf605462 100644 --- a/arch/riscv/crypto/sha512-riscv64-zvknhb-zvkb.S +++ b/lib/crypto/riscv/sha512-riscv64-zvknhb-zvkb.S @@ -91,12 +91,12 @@ sha512_4rounds \last, W1, W2, W3, W0 sha512_4rounds \last, W2, W3, W0, W1 sha512_4rounds \last, W3, W0, W1, W2 .endm -// void sha512_transform_zvknhb_zvkb(u64 state[8], const u8 *data, -// int num_blocks); +// void sha512_transform_zvknhb_zvkb(struct sha512_block_state *state, +// const u8 *data, size_t nblocks); SYM_FUNC_START(sha512_transform_zvknhb_zvkb) // Setup mask for the vmerge to replace the first word (idx==0) in // message scheduling. There are 4 words, so an 8-bit mask suffices. vsetivli zero, 1, e8, m1, ta, ma diff --git a/lib/crypto/riscv/sha512.h b/lib/crypto/riscv/sha512.h new file mode 100644 index 0000000000000..9d0abede322f7 --- /dev/null +++ b/lib/crypto/riscv/sha512.h @@ -0,0 +1,41 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * SHA-512 and SHA-384 using the RISC-V vector crypto extensions + * + * Copyright (C) 2023 VRULL GmbH + * Author: Heiko Stuebner <heiko.stuebner@vrull.eu> + * + * Copyright (C) 2023 SiFive, Inc. 
+ * Author: Jerry Shih <jerry.shih@sifive.com> + */ + +#include <asm/simd.h> +#include <asm/vector.h> +#include <crypto/internal/simd.h> + +static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_extensions); + +asmlinkage void sha512_transform_zvknhb_zvkb(struct sha512_block_state *state, + const u8 *data, size_t nblocks); + +static void sha512_blocks(struct sha512_block_state *state, + const u8 *data, size_t nblocks) +{ + if (static_branch_likely(&have_extensions) && + likely(crypto_simd_usable())) { + kernel_vector_begin(); + sha512_transform_zvknhb_zvkb(state, data, nblocks); + kernel_vector_end(); + } else { + sha512_blocks_generic(state, data, nblocks); + } +} + +#define sha512_mod_init_arch sha512_mod_init_arch +static inline void sha512_mod_init_arch(void) +{ + if (riscv_isa_extension_available(NULL, ZVKNHB) && + riscv_isa_extension_available(NULL, ZVKB) && + riscv_vector_vlen() >= 128) + static_branch_enable(&have_extensions); +} -- 2.49.0 _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH 13/16] lib/crypto/sha512: migrate s390-optimized SHA-512 code to library 2025-06-11 2:09 [PATCH 00/16] SHA-512 library functions Eric Biggers ` (11 preceding siblings ...) 2025-06-11 2:09 ` [PATCH 12/16] lib/crypto/sha512: migrate riscv-optimized " Eric Biggers @ 2025-06-11 2:09 ` Eric Biggers 2025-06-11 2:09 ` [PATCH 14/16] lib/crypto/sha512: migrate sparc-optimized " Eric Biggers ` (2 subsequent siblings) 15 siblings, 0 replies; 34+ messages in thread From: Eric Biggers @ 2025-06-11 2:09 UTC (permalink / raw) To: linux-crypto Cc: linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, Ard Biesheuvel, Jason A . Donenfeld , Linus Torvalds From: Eric Biggers <ebiggers@google.com> Instead of exposing the s390-optimized SHA-512 code via s390-specific crypto_shash algorithms, instead just implement the sha512_blocks() library function. This is much simpler, it makes the SHA-512 (and SHA-384) library functions be s390-optimized, and it fixes the longstanding issue where the s390-optimized SHA-512 code was disabled by default. SHA-512 still remains available through crypto_shash, but individual architectures no longer need to handle it. Signed-off-by: Eric Biggers <ebiggers@google.com> --- arch/s390/configs/debug_defconfig | 1 - arch/s390/configs/defconfig | 1 - arch/s390/crypto/Kconfig | 10 -- arch/s390/crypto/Makefile | 1 - arch/s390/crypto/sha512_s390.c | 151 ------------------------------ lib/crypto/Kconfig | 1 + lib/crypto/s390/sha512.h | 28 ++++++ 7 files changed, 29 insertions(+), 164 deletions(-) delete mode 100644 arch/s390/crypto/sha512_s390.c create mode 100644 lib/crypto/s390/sha512.h diff --git a/arch/s390/configs/debug_defconfig b/arch/s390/configs/debug_defconfig index 8ecad727497e1..ef313c30b375c 100644 --- a/arch/s390/configs/debug_defconfig +++ b/arch/s390/configs/debug_defconfig @@ -802,11 +802,10 @@ CONFIG_CRYPTO_ZSTD=m CONFIG_CRYPTO_ANSI_CPRNG=m CONFIG_CRYPTO_USER_API_HASH=m CONFIG_CRYPTO_USER_API_SKCIPHER=m CONFIG_CRYPTO_USER_API_RNG=m CONFIG_CRYPTO_USER_API_AEAD=m -CONFIG_CRYPTO_SHA512_S390=m CONFIG_CRYPTO_SHA1_S390=m CONFIG_CRYPTO_SHA3_256_S390=m CONFIG_CRYPTO_SHA3_512_S390=m CONFIG_CRYPTO_GHASH_S390=m CONFIG_CRYPTO_AES_S390=m diff --git a/arch/s390/configs/defconfig b/arch/s390/configs/defconfig index c13a77765162a..b6fa341bb03b6 100644 --- a/arch/s390/configs/defconfig +++ b/arch/s390/configs/defconfig @@ -789,11 +789,10 @@ CONFIG_CRYPTO_ANSI_CPRNG=m CONFIG_CRYPTO_JITTERENTROPY_OSR=1 CONFIG_CRYPTO_USER_API_HASH=m CONFIG_CRYPTO_USER_API_SKCIPHER=m CONFIG_CRYPTO_USER_API_RNG=m CONFIG_CRYPTO_USER_API_AEAD=m -CONFIG_CRYPTO_SHA512_S390=m CONFIG_CRYPTO_SHA1_S390=m CONFIG_CRYPTO_SHA3_256_S390=m CONFIG_CRYPTO_SHA3_512_S390=m CONFIG_CRYPTO_GHASH_S390=m CONFIG_CRYPTO_AES_S390=m diff --git a/arch/s390/crypto/Kconfig b/arch/s390/crypto/Kconfig index e2c27588b21a9..4557514fbac35 100644 --- a/arch/s390/crypto/Kconfig +++ b/arch/s390/crypto/Kconfig @@ -1,19 +1,9 @@ # SPDX-License-Identifier: GPL-2.0 menu "Accelerated Cryptographic Algorithms for CPU (s390)" -config CRYPTO_SHA512_S390 - tristate "Hash functions: SHA-384 and SHA-512" - select CRYPTO_HASH - help - SHA-384 and SHA-512 secure hash algorithms (FIPS 180) - - Architecture: s390 - - It is available as of z10. 
- config CRYPTO_SHA1_S390 tristate "Hash functions: SHA-1" select CRYPTO_HASH help SHA-1 secure hash algorithm (FIPS 180) diff --git a/arch/s390/crypto/Makefile b/arch/s390/crypto/Makefile index 21757d86cd499..473d64c0982af 100644 --- a/arch/s390/crypto/Makefile +++ b/arch/s390/crypto/Makefile @@ -2,11 +2,10 @@ # # Cryptographic API # obj-$(CONFIG_CRYPTO_SHA1_S390) += sha1_s390.o sha_common.o -obj-$(CONFIG_CRYPTO_SHA512_S390) += sha512_s390.o sha_common.o obj-$(CONFIG_CRYPTO_SHA3_256_S390) += sha3_256_s390.o sha_common.o obj-$(CONFIG_CRYPTO_SHA3_512_S390) += sha3_512_s390.o sha_common.o obj-$(CONFIG_CRYPTO_DES_S390) += des_s390.o obj-$(CONFIG_CRYPTO_AES_S390) += aes_s390.o obj-$(CONFIG_CRYPTO_PAES_S390) += paes_s390.o diff --git a/arch/s390/crypto/sha512_s390.c b/arch/s390/crypto/sha512_s390.c deleted file mode 100644 index e8bb172dbed75..0000000000000 --- a/arch/s390/crypto/sha512_s390.c +++ /dev/null @@ -1,151 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0+ -/* - * Cryptographic API. - * - * s390 implementation of the SHA512 and SHA38 Secure Hash Algorithm. - * - * Copyright IBM Corp. 2007 - * Author(s): Jan Glauber (jang@de.ibm.com) - */ -#include <asm/cpacf.h> -#include <crypto/internal/hash.h> -#include <crypto/sha2.h> -#include <linux/cpufeature.h> -#include <linux/errno.h> -#include <linux/kernel.h> -#include <linux/module.h> - -#include "sha.h" - -static int sha512_init_s390(struct shash_desc *desc) -{ - struct s390_sha_ctx *ctx = shash_desc_ctx(desc); - - ctx->sha512.state[0] = SHA512_H0; - ctx->sha512.state[1] = SHA512_H1; - ctx->sha512.state[2] = SHA512_H2; - ctx->sha512.state[3] = SHA512_H3; - ctx->sha512.state[4] = SHA512_H4; - ctx->sha512.state[5] = SHA512_H5; - ctx->sha512.state[6] = SHA512_H6; - ctx->sha512.state[7] = SHA512_H7; - ctx->count = 0; - ctx->sha512.count_hi = 0; - ctx->func = CPACF_KIMD_SHA_512; - - return 0; -} - -static int sha512_export(struct shash_desc *desc, void *out) -{ - struct s390_sha_ctx *sctx = shash_desc_ctx(desc); - struct sha512_state *octx = out; - - octx->count[0] = sctx->count; - octx->count[1] = sctx->sha512.count_hi; - memcpy(octx->state, sctx->state, sizeof(octx->state)); - return 0; -} - -static int sha512_import(struct shash_desc *desc, const void *in) -{ - struct s390_sha_ctx *sctx = shash_desc_ctx(desc); - const struct sha512_state *ictx = in; - - sctx->count = ictx->count[0]; - sctx->sha512.count_hi = ictx->count[1]; - - memcpy(sctx->state, ictx->state, sizeof(ictx->state)); - sctx->func = CPACF_KIMD_SHA_512; - return 0; -} - -static struct shash_alg sha512_alg = { - .digestsize = SHA512_DIGEST_SIZE, - .init = sha512_init_s390, - .update = s390_sha_update_blocks, - .finup = s390_sha_finup, - .export = sha512_export, - .import = sha512_import, - .descsize = sizeof(struct s390_sha_ctx), - .statesize = SHA512_STATE_SIZE, - .base = { - .cra_name = "sha512", - .cra_driver_name= "sha512-s390", - .cra_priority = 300, - .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize = SHA512_BLOCK_SIZE, - .cra_module = THIS_MODULE, - } -}; - -MODULE_ALIAS_CRYPTO("sha512"); - -static int sha384_init_s390(struct shash_desc *desc) -{ - struct s390_sha_ctx *ctx = shash_desc_ctx(desc); - - ctx->sha512.state[0] = SHA384_H0; - ctx->sha512.state[1] = SHA384_H1; - ctx->sha512.state[2] = SHA384_H2; - ctx->sha512.state[3] = SHA384_H3; - ctx->sha512.state[4] = SHA384_H4; - ctx->sha512.state[5] = SHA384_H5; - ctx->sha512.state[6] = SHA384_H6; - ctx->sha512.state[7] = SHA384_H7; - ctx->count = 0; - ctx->sha512.count_hi = 0; - 
ctx->func = CPACF_KIMD_SHA_512; - - return 0; -} - -static struct shash_alg sha384_alg = { - .digestsize = SHA384_DIGEST_SIZE, - .init = sha384_init_s390, - .update = s390_sha_update_blocks, - .finup = s390_sha_finup, - .export = sha512_export, - .import = sha512_import, - .descsize = sizeof(struct s390_sha_ctx), - .statesize = SHA512_STATE_SIZE, - .base = { - .cra_name = "sha384", - .cra_driver_name= "sha384-s390", - .cra_priority = 300, - .cra_blocksize = SHA384_BLOCK_SIZE, - .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_ctxsize = sizeof(struct s390_sha_ctx), - .cra_module = THIS_MODULE, - } -}; - -MODULE_ALIAS_CRYPTO("sha384"); - -static int __init init(void) -{ - int ret; - - if (!cpacf_query_func(CPACF_KIMD, CPACF_KIMD_SHA_512)) - return -ENODEV; - if ((ret = crypto_register_shash(&sha512_alg)) < 0) - goto out; - if ((ret = crypto_register_shash(&sha384_alg)) < 0) - crypto_unregister_shash(&sha512_alg); -out: - return ret; -} - -static void __exit fini(void) -{ - crypto_unregister_shash(&sha512_alg); - crypto_unregister_shash(&sha384_alg); -} - -module_cpu_feature_match(S390_CPU_FEATURE_MSA, init); -module_exit(fini); - -MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("SHA512 and SHA-384 Secure Hash Algorithm"); diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig index 482d934cc5ecc..4bdb4118a789f 100644 --- a/lib/crypto/Kconfig +++ b/lib/crypto/Kconfig @@ -181,10 +181,11 @@ config CRYPTO_LIB_SHA512_ARCH depends on CRYPTO_LIB_SHA512 default y if ARM && !CPU_V7M default y if ARM64 default y if MIPS && CPU_CAVIUM_OCTEON default y if RISCV && 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO + default y if S390 config CRYPTO_LIB_SM3 tristate if !KMSAN # avoid false positives from assembly diff --git a/lib/crypto/s390/sha512.h b/lib/crypto/s390/sha512.h new file mode 100644 index 0000000000000..24744651550cb --- /dev/null +++ b/lib/crypto/s390/sha512.h @@ -0,0 +1,28 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * SHA-512 optimized using the CP Assist for Cryptographic Functions (CPACF) + * + * Copyright 2025 Google LLC + */ +#include <asm/cpacf.h> +#include <linux/cpufeature.h> + +static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_cpacf_sha512); + +static void sha512_blocks(struct sha512_block_state *state, + const u8 *data, size_t nblocks) +{ + if (static_branch_likely(&have_cpacf_sha512)) + cpacf_kimd(CPACF_KIMD_SHA_512, state, data, + nblocks * SHA512_BLOCK_SIZE); + else + sha512_blocks_generic(state, data, nblocks); +} + +#define sha512_mod_init_arch sha512_mod_init_arch +static inline void sha512_mod_init_arch(void) +{ + if (cpu_have_feature(S390_CPU_FEATURE_MSA) && + cpacf_query_func(CPACF_KIMD, CPACF_KIMD_SHA_512)) + static_branch_enable(&have_cpacf_sha512); +} -- 2.49.0 _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH 14/16] lib/crypto/sha512: migrate sparc-optimized SHA-512 code to library 2025-06-11 2:09 [PATCH 00/16] SHA-512 library functions Eric Biggers ` (12 preceding siblings ...) 2025-06-11 2:09 ` [PATCH 13/16] lib/crypto/sha512: migrate s390-optimized " Eric Biggers @ 2025-06-11 2:09 ` Eric Biggers 2025-06-11 2:09 ` [PATCH 15/16] lib/crypto/sha512: migrate x86-optimized " Eric Biggers 2025-06-11 2:09 ` [PATCH 16/16] crypto: sha512 - remove sha512_base.h Eric Biggers 15 siblings, 0 replies; 34+ messages in thread From: Eric Biggers @ 2025-06-11 2:09 UTC (permalink / raw) To: linux-crypto Cc: linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, Ard Biesheuvel, Jason A . Donenfeld , Linus Torvalds From: Eric Biggers <ebiggers@google.com> Instead of exposing the sparc-optimized SHA-512 code via sparc-specific crypto_shash algorithms, instead just implement the sha512_blocks() library function. This is much simpler, it makes the SHA-512 (and SHA-384) library functions be sparc-optimized, and it fixes the longstanding issue where the sparc-optimized SHA-512 code was disabled by default. SHA-512 still remains available through crypto_shash, but individual architectures no longer need to handle it. To match sha512_blocks(), change the type of the nblocks parameter of the assembly function from int to size_t. The assembly function actually already treated it as size_t. Note: to see the diff from arch/sparc/crypto/sha512_glue.c to lib/crypto/sparc/sha512.h, view this commit with 'git show -M10'. Signed-off-by: Eric Biggers <ebiggers@google.com> --- arch/sparc/crypto/Kconfig | 10 -- arch/sparc/crypto/Makefile | 2 - arch/sparc/crypto/sha512_glue.c | 122 ------------------ lib/crypto/Kconfig | 1 + lib/crypto/Makefile | 1 + lib/crypto/sparc/sha512.h | 42 ++++++ .../crypto => lib/crypto/sparc}/sha512_asm.S | 0 7 files changed, 44 insertions(+), 134 deletions(-) delete mode 100644 arch/sparc/crypto/sha512_glue.c create mode 100644 lib/crypto/sparc/sha512.h rename {arch/sparc/crypto => lib/crypto/sparc}/sha512_asm.S (100%) diff --git a/arch/sparc/crypto/Kconfig b/arch/sparc/crypto/Kconfig index a6ba319c42dce..9d8da9aef3a41 100644 --- a/arch/sparc/crypto/Kconfig +++ b/arch/sparc/crypto/Kconfig @@ -34,20 +34,10 @@ config CRYPTO_SHA1_SPARC64 help SHA-1 secure hash algorithm (FIPS 180) Architecture: sparc64 -config CRYPTO_SHA512_SPARC64 - tristate "Hash functions: SHA-384 and SHA-512" - depends on SPARC64 - select CRYPTO_SHA512 - select CRYPTO_HASH - help - SHA-384 and SHA-512 secure hash algorithms (FIPS 180) - - Architecture: sparc64 using crypto instructions, when available - config CRYPTO_AES_SPARC64 tristate "Ciphers: AES, modes: ECB, CBC, CTR" depends on SPARC64 select CRYPTO_SKCIPHER help diff --git a/arch/sparc/crypto/Makefile b/arch/sparc/crypto/Makefile index 701c39edb0d73..99a7e8fd13bc9 100644 --- a/arch/sparc/crypto/Makefile +++ b/arch/sparc/crypto/Makefile @@ -2,19 +2,17 @@ # # Arch-specific CryptoAPI modules. 
# obj-$(CONFIG_CRYPTO_SHA1_SPARC64) += sha1-sparc64.o -obj-$(CONFIG_CRYPTO_SHA512_SPARC64) += sha512-sparc64.o obj-$(CONFIG_CRYPTO_MD5_SPARC64) += md5-sparc64.o obj-$(CONFIG_CRYPTO_AES_SPARC64) += aes-sparc64.o obj-$(CONFIG_CRYPTO_DES_SPARC64) += des-sparc64.o obj-$(CONFIG_CRYPTO_CAMELLIA_SPARC64) += camellia-sparc64.o sha1-sparc64-y := sha1_asm.o sha1_glue.o -sha512-sparc64-y := sha512_asm.o sha512_glue.o md5-sparc64-y := md5_asm.o md5_glue.o aes-sparc64-y := aes_asm.o aes_glue.o des-sparc64-y := des_asm.o des_glue.o camellia-sparc64-y := camellia_asm.o camellia_glue.o diff --git a/arch/sparc/crypto/sha512_glue.c b/arch/sparc/crypto/sha512_glue.c deleted file mode 100644 index fb81c3290c8c0..0000000000000 --- a/arch/sparc/crypto/sha512_glue.c +++ /dev/null @@ -1,122 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-only -/* Glue code for SHA512 hashing optimized for sparc64 crypto opcodes. - * - * This is based largely upon crypto/sha512_generic.c - * - * Copyright (c) Jean-Luc Cooke <jlcooke@certainkey.com> - * Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk> - * Copyright (c) 2003 Kyle McMartin <kyle@debian.org> - */ - -#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt - -#include <asm/elf.h> -#include <asm/opcodes.h> -#include <asm/pstate.h> -#include <crypto/internal/hash.h> -#include <crypto/sha2.h> -#include <crypto/sha512_base.h> -#include <linux/kernel.h> -#include <linux/module.h> - -asmlinkage void sha512_sparc64_transform(u64 *digest, const char *data, - unsigned int rounds); - -static void sha512_block(struct sha512_state *sctx, const u8 *src, int blocks) -{ - sha512_sparc64_transform(sctx->state, src, blocks); -} - -static int sha512_sparc64_update(struct shash_desc *desc, const u8 *data, - unsigned int len) -{ - return sha512_base_do_update_blocks(desc, data, len, sha512_block); -} - -static int sha512_sparc64_finup(struct shash_desc *desc, const u8 *src, - unsigned int len, u8 *out) -{ - sha512_base_do_finup(desc, src, len, sha512_block); - return sha512_base_finish(desc, out); -} - -static struct shash_alg sha512_alg = { - .digestsize = SHA512_DIGEST_SIZE, - .init = sha512_base_init, - .update = sha512_sparc64_update, - .finup = sha512_sparc64_finup, - .descsize = SHA512_STATE_SIZE, - .base = { - .cra_name = "sha512", - .cra_driver_name= "sha512-sparc64", - .cra_priority = SPARC_CR_OPCODE_PRIORITY, - .cra_blocksize = SHA512_BLOCK_SIZE, - .cra_module = THIS_MODULE, - } -}; - -static struct shash_alg sha384_alg = { - .digestsize = SHA384_DIGEST_SIZE, - .init = sha384_base_init, - .update = sha512_sparc64_update, - .finup = sha512_sparc64_finup, - .descsize = SHA512_STATE_SIZE, - .base = { - .cra_name = "sha384", - .cra_driver_name= "sha384-sparc64", - .cra_priority = SPARC_CR_OPCODE_PRIORITY, - .cra_blocksize = SHA384_BLOCK_SIZE, - .cra_module = THIS_MODULE, - } -}; - -static bool __init sparc64_has_sha512_opcode(void) -{ - unsigned long cfr; - - if (!(sparc64_elf_hwcap & HWCAP_SPARC_CRYPTO)) - return false; - - __asm__ __volatile__("rd %%asr26, %0" : "=r" (cfr)); - if (!(cfr & CFR_SHA512)) - return false; - - return true; -} - -static int __init sha512_sparc64_mod_init(void) -{ - if (sparc64_has_sha512_opcode()) { - int ret = crypto_register_shash(&sha384_alg); - if (ret < 0) - return ret; - - ret = crypto_register_shash(&sha512_alg); - if (ret < 0) { - crypto_unregister_shash(&sha384_alg); - return ret; - } - - pr_info("Using sparc64 sha512 opcode optimized SHA-512/SHA-384 implementation\n"); - return 0; - } - pr_info("sparc64 sha512 opcode not available.\n"); - return -ENODEV; 
-} - -static void __exit sha512_sparc64_mod_fini(void) -{ - crypto_unregister_shash(&sha384_alg); - crypto_unregister_shash(&sha512_alg); -} - -module_init(sha512_sparc64_mod_init); -module_exit(sha512_sparc64_mod_fini); - -MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("SHA-384 and SHA-512 Secure Hash Algorithm, sparc64 sha512 opcode accelerated"); - -MODULE_ALIAS_CRYPTO("sha384"); -MODULE_ALIAS_CRYPTO("sha512"); - -#include "crop_devid.c" diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig index 4bdb4118a789f..0d6522b92ef57 100644 --- a/lib/crypto/Kconfig +++ b/lib/crypto/Kconfig @@ -182,10 +182,11 @@ config CRYPTO_LIB_SHA512_ARCH default y if ARM && !CPU_V7M default y if ARM64 default y if MIPS && CPU_CAVIUM_OCTEON default y if RISCV && 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO default y if S390 + default y if SPARC64 config CRYPTO_LIB_SM3 tristate if !KMSAN # avoid false positives from assembly diff --git a/lib/crypto/Makefile b/lib/crypto/Makefile index bfa35cc235cea..3c651927f5ba5 100644 --- a/lib/crypto/Makefile +++ b/lib/crypto/Makefile @@ -90,10 +90,11 @@ $(obj)/arm64/sha512-core.S: $(src)/../../arch/arm64/lib/crypto/sha2-armv8.pl clean-files += arm64/sha512-core.S libsha512-$(CONFIG_KERNEL_MODE_NEON) += arm64/sha512-ce-core.o endif libsha512-$(CONFIG_RISCV) += riscv/sha512-riscv64-zvknhb-zvkb.o +libsha512-$(CONFIG_SPARC) += sparc/sha512_asm.o endif # CONFIG_CRYPTO_LIB_SHA512_ARCH obj-$(CONFIG_MPILIB) += mpi/ obj-$(CONFIG_CRYPTO_SELFTESTS) += simd.o diff --git a/lib/crypto/sparc/sha512.h b/lib/crypto/sparc/sha512.h new file mode 100644 index 0000000000000..55303ab6b15f7 --- /dev/null +++ b/lib/crypto/sparc/sha512.h @@ -0,0 +1,42 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * SHA-512 accelerated using the sparc64 sha512 opcodes + * + * Copyright (c) Jean-Luc Cooke <jlcooke@certainkey.com> + * Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk> + * Copyright (c) 2003 Kyle McMartin <kyle@debian.org> + */ + +#include <asm/elf.h> +#include <asm/opcodes.h> +#include <asm/pstate.h> + +static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_sha512_opcodes); + +asmlinkage void sha512_sparc64_transform(struct sha512_block_state *state, + const u8 *data, size_t nblocks); + +static void sha512_blocks(struct sha512_block_state *state, + const u8 *data, size_t nblocks) +{ + if (static_branch_likely(&have_sha512_opcodes)) + sha512_sparc64_transform(state, data, nblocks); + else + sha512_blocks_generic(state, data, nblocks); +} + +#define sha512_mod_init_arch sha512_mod_init_arch +static inline void sha512_mod_init_arch(void) +{ + unsigned long cfr; + + if (!(sparc64_elf_hwcap & HWCAP_SPARC_CRYPTO)) + return; + + __asm__ __volatile__("rd %%asr26, %0" : "=r" (cfr)); + if (!(cfr & CFR_SHA512)) + return; + + static_branch_enable(&have_sha512_opcodes); + pr_info("Using sparc64 sha512 opcode optimized SHA-512/SHA-384 implementation\n"); +} diff --git a/arch/sparc/crypto/sha512_asm.S b/lib/crypto/sparc/sha512_asm.S similarity index 100% rename from arch/sparc/crypto/sha512_asm.S rename to lib/crypto/sparc/sha512_asm.S -- 2.49.0 _______________________________________________ linux-riscv mailing list linux-riscv@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-riscv ^ permalink raw reply related [flat|nested] 34+ messages in thread
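
To round out the pattern: hooking a backend like the ones above into the build follows the same small recipe in each of these patches. The fragment below is illustrative only — "FOO" and the object file name are made-up placeholders rather than anything added by the series:

# lib/crypto/Kconfig: extend the existing CRYPTO_LIB_SHA512_ARCH defaults
	default y if FOO

# lib/crypto/Makefile: inside the CONFIG_CRYPTO_LIB_SHA512_ARCH section,
# needed only when the backend ships an assembly file
libsha512-$(CONFIG_FOO) += foo/sha512-foo-asm.o

# plus lib/crypto/foo/sha512.h providing sha512_blocks(), as sketched earlier

Backends that rely purely on instructions reachable from C need no Makefile change at all; the mips (OCTEON) and s390 patches in this series add only the Kconfig default and the header.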
* [PATCH 15/16] lib/crypto/sha512: migrate x86-optimized SHA-512 code to library 2025-06-11 2:09 [PATCH 00/16] SHA-512 library functions Eric Biggers ` (13 preceding siblings ...) 2025-06-11 2:09 ` [PATCH 14/16] lib/crypto/sha512: migrate sparc-optimized " Eric Biggers @ 2025-06-11 2:09 ` Eric Biggers 2025-06-11 2:09 ` [PATCH 16/16] crypto: sha512 - remove sha512_base.h Eric Biggers 15 siblings, 0 replies; 34+ messages in thread From: Eric Biggers @ 2025-06-11 2:09 UTC (permalink / raw) To: linux-crypto Cc: linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, Ard Biesheuvel, Jason A . Donenfeld , Linus Torvalds From: Eric Biggers <ebiggers@google.com> Instead of exposing the x86-optimized SHA-512 code via x86-specific crypto_shash algorithms, instead just implement the sha512_blocks() library function. This is much simpler, it makes the SHA-512 (and SHA-384) library functions be x86-optimized, and it fixes the longstanding issue where the x86-optimized SHA-512 code was disabled by default. SHA-512 still remains available through crypto_shash, but individual architectures no longer need to handle it. To match sha512_blocks(), change the type of the nblocks parameter of the assembly functions from int to size_t. The assembly functions actually already treated it as size_t. Signed-off-by: Eric Biggers <ebiggers@google.com> --- arch/x86/crypto/Kconfig | 13 - arch/x86/crypto/Makefile | 3 - arch/x86/crypto/sha512_ssse3_glue.c | 322 ------------------ lib/crypto/Kconfig | 1 + lib/crypto/Makefile | 3 + .../crypto/x86}/sha512-avx-asm.S | 11 +- .../crypto/x86}/sha512-avx2-asm.S | 11 +- .../crypto/x86}/sha512-ssse3-asm.S | 12 +- lib/crypto/x86/sha512.h | 54 +++ 9 files changed, 78 insertions(+), 352 deletions(-) delete mode 100644 arch/x86/crypto/sha512_ssse3_glue.c rename {arch/x86/crypto => lib/crypto/x86}/sha512-avx-asm.S (97%) rename {arch/x86/crypto => lib/crypto/x86}/sha512-avx2-asm.S (98%) rename {arch/x86/crypto => lib/crypto/x86}/sha512-ssse3-asm.S (97%) create mode 100644 lib/crypto/x86/sha512.h diff --git a/arch/x86/crypto/Kconfig b/arch/x86/crypto/Kconfig index 56cfdc79e2c66..eb641a300154e 100644 --- a/arch/x86/crypto/Kconfig +++ b/arch/x86/crypto/Kconfig @@ -388,23 +388,10 @@ config CRYPTO_SHA1_SSSE3 - SSSE3 (Supplemental SSE3) - AVX (Advanced Vector Extensions) - AVX2 (Advanced Vector Extensions 2) - SHA-NI (SHA Extensions New Instructions) -config CRYPTO_SHA512_SSSE3 - tristate "Hash functions: SHA-384 and SHA-512 (SSSE3/AVX/AVX2)" - depends on 64BIT - select CRYPTO_SHA512 - select CRYPTO_HASH - help - SHA-384 and SHA-512 secure hash algorithms (FIPS 180) - - Architecture: x86_64 using: - - SSSE3 (Supplemental SSE3) - - AVX (Advanced Vector Extensions) - - AVX2 (Advanced Vector Extensions 2) - config CRYPTO_SM3_AVX_X86_64 tristate "Hash functions: SM3 (AVX)" depends on 64BIT select CRYPTO_HASH select CRYPTO_LIB_SM3 diff --git a/arch/x86/crypto/Makefile b/arch/x86/crypto/Makefile index aa289a9e0153b..d31348be83704 100644 --- a/arch/x86/crypto/Makefile +++ b/arch/x86/crypto/Makefile @@ -52,13 +52,10 @@ aesni-intel-$(CONFIG_64BIT) += aes-gcm-avx10-x86_64.o endif obj-$(CONFIG_CRYPTO_SHA1_SSSE3) += sha1-ssse3.o sha1-ssse3-y := sha1_avx2_x86_64_asm.o sha1_ssse3_asm.o sha1_ni_asm.o sha1_ssse3_glue.o -obj-$(CONFIG_CRYPTO_SHA512_SSSE3) += sha512-ssse3.o -sha512-ssse3-y := sha512-ssse3-asm.o sha512-avx-asm.o sha512-avx2-asm.o sha512_ssse3_glue.o - obj-$(CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL) += ghash-clmulni-intel.o ghash-clmulni-intel-y := ghash-clmulni-intel_asm.o 
ghash-clmulni-intel_glue.o obj-$(CONFIG_CRYPTO_POLYVAL_CLMUL_NI) += polyval-clmulni.o polyval-clmulni-y := polyval-clmulni_asm.o polyval-clmulni_glue.o diff --git a/arch/x86/crypto/sha512_ssse3_glue.c b/arch/x86/crypto/sha512_ssse3_glue.c deleted file mode 100644 index 97744b7d23817..0000000000000 --- a/arch/x86/crypto/sha512_ssse3_glue.c +++ /dev/null @@ -1,322 +0,0 @@ -/* - * Cryptographic API. - * - * Glue code for the SHA512 Secure Hash Algorithm assembler - * implementation using supplemental SSE3 / AVX / AVX2 instructions. - * - * This file is based on sha512_generic.c - * - * Copyright (C) 2013 Intel Corporation - * Author: Tim Chen <tim.c.chen@linux.intel.com> - * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the Free - * Software Foundation; either version 2 of the License, or (at your option) - * any later version. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND - * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS - * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN - * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN - * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE - * SOFTWARE. - * - */ - -#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt - -#include <asm/cpu_device_id.h> -#include <asm/simd.h> -#include <crypto/internal/hash.h> -#include <linux/kernel.h> -#include <linux/module.h> -#include <crypto/sha2.h> -#include <crypto/sha512_base.h> - -asmlinkage void sha512_transform_ssse3(struct sha512_state *state, - const u8 *data, int blocks); - -static int sha512_update_x86(struct shash_desc *desc, const u8 *data, - unsigned int len, sha512_block_fn *sha512_xform) -{ - int remain; - - /* - * Make sure struct sha512_state begins directly with the SHA512 - * 512-bit internal state, as this is what the asm functions expect. 
- */ - BUILD_BUG_ON(offsetof(struct sha512_state, state) != 0); - - kernel_fpu_begin(); - remain = sha512_base_do_update_blocks(desc, data, len, sha512_xform); - kernel_fpu_end(); - - return remain; -} - -static int sha512_finup(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out, sha512_block_fn *sha512_xform) -{ - kernel_fpu_begin(); - sha512_base_do_finup(desc, data, len, sha512_xform); - kernel_fpu_end(); - - return sha512_base_finish(desc, out); -} - -static int sha512_ssse3_update(struct shash_desc *desc, const u8 *data, - unsigned int len) -{ - return sha512_update_x86(desc, data, len, sha512_transform_ssse3); -} - -static int sha512_ssse3_finup(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) -{ - return sha512_finup(desc, data, len, out, sha512_transform_ssse3); -} - -static struct shash_alg sha512_ssse3_algs[] = { { - .digestsize = SHA512_DIGEST_SIZE, - .init = sha512_base_init, - .update = sha512_ssse3_update, - .finup = sha512_ssse3_finup, - .descsize = SHA512_STATE_SIZE, - .base = { - .cra_name = "sha512", - .cra_driver_name = "sha512-ssse3", - .cra_priority = 150, - .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize = SHA512_BLOCK_SIZE, - .cra_module = THIS_MODULE, - } -}, { - .digestsize = SHA384_DIGEST_SIZE, - .init = sha384_base_init, - .update = sha512_ssse3_update, - .finup = sha512_ssse3_finup, - .descsize = SHA512_STATE_SIZE, - .base = { - .cra_name = "sha384", - .cra_driver_name = "sha384-ssse3", - .cra_priority = 150, - .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize = SHA384_BLOCK_SIZE, - .cra_module = THIS_MODULE, - } -} }; - -static int register_sha512_ssse3(void) -{ - if (boot_cpu_has(X86_FEATURE_SSSE3)) - return crypto_register_shashes(sha512_ssse3_algs, - ARRAY_SIZE(sha512_ssse3_algs)); - return 0; -} - -static void unregister_sha512_ssse3(void) -{ - if (boot_cpu_has(X86_FEATURE_SSSE3)) - crypto_unregister_shashes(sha512_ssse3_algs, - ARRAY_SIZE(sha512_ssse3_algs)); -} - -asmlinkage void sha512_transform_avx(struct sha512_state *state, - const u8 *data, int blocks); -static bool avx_usable(void) -{ - if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) { - if (boot_cpu_has(X86_FEATURE_AVX)) - pr_info("AVX detected but unusable.\n"); - return false; - } - - return true; -} - -static int sha512_avx_update(struct shash_desc *desc, const u8 *data, - unsigned int len) -{ - return sha512_update_x86(desc, data, len, sha512_transform_avx); -} - -static int sha512_avx_finup(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) -{ - return sha512_finup(desc, data, len, out, sha512_transform_avx); -} - -static struct shash_alg sha512_avx_algs[] = { { - .digestsize = SHA512_DIGEST_SIZE, - .init = sha512_base_init, - .update = sha512_avx_update, - .finup = sha512_avx_finup, - .descsize = SHA512_STATE_SIZE, - .base = { - .cra_name = "sha512", - .cra_driver_name = "sha512-avx", - .cra_priority = 160, - .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize = SHA512_BLOCK_SIZE, - .cra_module = THIS_MODULE, - } -}, { - .digestsize = SHA384_DIGEST_SIZE, - .init = sha384_base_init, - .update = sha512_avx_update, - .finup = sha512_avx_finup, - .descsize = SHA512_STATE_SIZE, - .base = { - .cra_name = "sha384", - .cra_driver_name = "sha384-avx", - .cra_priority = 160, - .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize = SHA384_BLOCK_SIZE, - .cra_module = 
THIS_MODULE, - } -} }; - -static int register_sha512_avx(void) -{ - if (avx_usable()) - return crypto_register_shashes(sha512_avx_algs, - ARRAY_SIZE(sha512_avx_algs)); - return 0; -} - -static void unregister_sha512_avx(void) -{ - if (avx_usable()) - crypto_unregister_shashes(sha512_avx_algs, - ARRAY_SIZE(sha512_avx_algs)); -} - -asmlinkage void sha512_transform_rorx(struct sha512_state *state, - const u8 *data, int blocks); - -static int sha512_avx2_update(struct shash_desc *desc, const u8 *data, - unsigned int len) -{ - return sha512_update_x86(desc, data, len, sha512_transform_rorx); -} - -static int sha512_avx2_finup(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) -{ - return sha512_finup(desc, data, len, out, sha512_transform_rorx); -} - -static struct shash_alg sha512_avx2_algs[] = { { - .digestsize = SHA512_DIGEST_SIZE, - .init = sha512_base_init, - .update = sha512_avx2_update, - .finup = sha512_avx2_finup, - .descsize = SHA512_STATE_SIZE, - .base = { - .cra_name = "sha512", - .cra_driver_name = "sha512-avx2", - .cra_priority = 170, - .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize = SHA512_BLOCK_SIZE, - .cra_module = THIS_MODULE, - } -}, { - .digestsize = SHA384_DIGEST_SIZE, - .init = sha384_base_init, - .update = sha512_avx2_update, - .finup = sha512_avx2_finup, - .descsize = SHA512_STATE_SIZE, - .base = { - .cra_name = "sha384", - .cra_driver_name = "sha384-avx2", - .cra_priority = 170, - .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize = SHA384_BLOCK_SIZE, - .cra_module = THIS_MODULE, - } -} }; - -static bool avx2_usable(void) -{ - if (avx_usable() && boot_cpu_has(X86_FEATURE_AVX2) && - boot_cpu_has(X86_FEATURE_BMI2)) - return true; - - return false; -} - -static int register_sha512_avx2(void) -{ - if (avx2_usable()) - return crypto_register_shashes(sha512_avx2_algs, - ARRAY_SIZE(sha512_avx2_algs)); - return 0; -} -static const struct x86_cpu_id module_cpu_ids[] = { - X86_MATCH_FEATURE(X86_FEATURE_AVX2, NULL), - X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL), - X86_MATCH_FEATURE(X86_FEATURE_SSSE3, NULL), - {} -}; -MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); - -static void unregister_sha512_avx2(void) -{ - if (avx2_usable()) - crypto_unregister_shashes(sha512_avx2_algs, - ARRAY_SIZE(sha512_avx2_algs)); -} - -static int __init sha512_ssse3_mod_init(void) -{ - if (!x86_match_cpu(module_cpu_ids)) - return -ENODEV; - - if (register_sha512_ssse3()) - goto fail; - - if (register_sha512_avx()) { - unregister_sha512_ssse3(); - goto fail; - } - - if (register_sha512_avx2()) { - unregister_sha512_avx(); - unregister_sha512_ssse3(); - goto fail; - } - - return 0; -fail: - return -ENODEV; -} - -static void __exit sha512_ssse3_mod_fini(void) -{ - unregister_sha512_avx2(); - unregister_sha512_avx(); - unregister_sha512_ssse3(); -} - -module_init(sha512_ssse3_mod_init); -module_exit(sha512_ssse3_mod_fini); - -MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("SHA512 Secure Hash Algorithm, Supplemental SSE3 accelerated"); - -MODULE_ALIAS_CRYPTO("sha512"); -MODULE_ALIAS_CRYPTO("sha512-ssse3"); -MODULE_ALIAS_CRYPTO("sha512-avx"); -MODULE_ALIAS_CRYPTO("sha512-avx2"); -MODULE_ALIAS_CRYPTO("sha384"); -MODULE_ALIAS_CRYPTO("sha384-ssse3"); -MODULE_ALIAS_CRYPTO("sha384-avx"); -MODULE_ALIAS_CRYPTO("sha384-avx2"); diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig index 0d6522b92ef57..88496ee08b5ae 100644 --- a/lib/crypto/Kconfig +++ b/lib/crypto/Kconfig @@ -183,10 +183,11 @@ config CRYPTO_LIB_SHA512_ARCH 
default y if ARM64 default y if MIPS && CPU_CAVIUM_OCTEON default y if RISCV && 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO default y if S390 default y if SPARC64 + default y if X86_64 config CRYPTO_LIB_SM3 tristate if !KMSAN # avoid false positives from assembly diff --git a/lib/crypto/Makefile b/lib/crypto/Makefile index 3c651927f5ba5..bc4bf15f26142 100644 --- a/lib/crypto/Makefile +++ b/lib/crypto/Makefile @@ -91,10 +91,13 @@ clean-files += arm64/sha512-core.S libsha512-$(CONFIG_KERNEL_MODE_NEON) += arm64/sha512-ce-core.o endif libsha512-$(CONFIG_RISCV) += riscv/sha512-riscv64-zvknhb-zvkb.o libsha512-$(CONFIG_SPARC) += sparc/sha512_asm.o +libsha512-$(CONFIG_X86) += x86/sha512-ssse3-asm.o \ + x86/sha512-avx-asm.o \ + x86/sha512-avx2-asm.o endif # CONFIG_CRYPTO_LIB_SHA512_ARCH obj-$(CONFIG_MPILIB) += mpi/ obj-$(CONFIG_CRYPTO_SELFTESTS) += simd.o diff --git a/arch/x86/crypto/sha512-avx-asm.S b/lib/crypto/x86/sha512-avx-asm.S similarity index 97% rename from arch/x86/crypto/sha512-avx-asm.S rename to lib/crypto/x86/sha512-avx-asm.S index 5bfce4b045fdf..84291772ba385 100644 --- a/arch/x86/crypto/sha512-avx-asm.S +++ b/lib/crypto/x86/sha512-avx-asm.S @@ -46,11 +46,11 @@ # and search for that title. # ######################################################################## #include <linux/linkage.h> -#include <linux/cfi_types.h> +#include <linux/objtool.h> .text # Virtual Registers # ARG1 @@ -265,18 +265,21 @@ frame_size = frame_WK + WK_SIZE add tmp0, h_64 RotateState .endm ######################################################################## -# void sha512_transform_avx(sha512_state *state, const u8 *data, int blocks) +# void sha512_transform_avx(struct sha512_block_state *state, +# const u8 *data, size_t nblocks); # Purpose: Updates the SHA512 digest stored at "state" with the message # stored in "data". # The size of the message pointed to by "data" must be an integer multiple # of SHA512 message blocks. -# "blocks" is the message length in SHA512 blocks +# "nblocks" is the message length in SHA512 blocks ######################################################################## -SYM_TYPED_FUNC_START(sha512_transform_avx) +SYM_FUNC_START(sha512_transform_avx) + ANNOTATE_NOENDBR # since this is called only via static_call + test msglen, msglen je .Lnowork # Save GPRs push %rbx diff --git a/arch/x86/crypto/sha512-avx2-asm.S b/lib/crypto/x86/sha512-avx2-asm.S similarity index 98% rename from arch/x86/crypto/sha512-avx2-asm.S rename to lib/crypto/x86/sha512-avx2-asm.S index 24973f42c43ff..2af6a4d7d1640 100644 --- a/arch/x86/crypto/sha512-avx2-asm.S +++ b/lib/crypto/x86/sha512-avx2-asm.S @@ -48,11 +48,11 @@ ######################################################################## # This code schedules 1 blocks at a time, with 4 lanes per block ######################################################################## #include <linux/linkage.h> -#include <linux/cfi_types.h> +#include <linux/objtool.h> .text # Virtual Registers Y_0 = %ymm4 @@ -557,18 +557,21 @@ frame_size = frame_CTX + CTX_SIZE RotateState .endm ######################################################################## -# void sha512_transform_rorx(sha512_state *state, const u8 *data, int blocks) +# void sha512_transform_rorx(struct sha512_block_state *state, +# const u8 *data, size_t nblocks); # Purpose: Updates the SHA512 digest stored at "state" with the message # stored in "data". # The size of the message pointed to by "data" must be an integer multiple # of SHA512 message blocks. 
-# "blocks" is the message length in SHA512 blocks +# "nblocks" is the message length in SHA512 blocks ######################################################################## -SYM_TYPED_FUNC_START(sha512_transform_rorx) +SYM_FUNC_START(sha512_transform_rorx) + ANNOTATE_NOENDBR # since this is called only via static_call + # Save GPRs push %rbx push %r12 push %r13 push %r14 diff --git a/arch/x86/crypto/sha512-ssse3-asm.S b/lib/crypto/x86/sha512-ssse3-asm.S similarity index 97% rename from arch/x86/crypto/sha512-ssse3-asm.S rename to lib/crypto/x86/sha512-ssse3-asm.S index 30a2c4777f9d5..a7544beb59d38 100644 --- a/arch/x86/crypto/sha512-ssse3-asm.S +++ b/lib/crypto/x86/sha512-ssse3-asm.S @@ -46,11 +46,11 @@ # and search for that title. # ######################################################################## #include <linux/linkage.h> -#include <linux/cfi_types.h> +#include <linux/objtool.h> .text # Virtual Registers # ARG1 @@ -264,20 +264,20 @@ frame_size = frame_WK + WK_SIZE lea (T1, T2), h_64 RotateState .endm ######################################################################## -## void sha512_transform_ssse3(struct sha512_state *state, const u8 *data, -## int blocks); -# (struct sha512_state is assumed to begin with u64 state[8]) +# void sha512_transform_ssse3(struct sha512_block_state *state, +# const u8 *data, size_t nblocks); # Purpose: Updates the SHA512 digest stored at "state" with the message # stored in "data". # The size of the message pointed to by "data" must be an integer multiple # of SHA512 message blocks. -# "blocks" is the message length in SHA512 blocks. +# "nblocks" is the message length in SHA512 blocks ######################################################################## -SYM_TYPED_FUNC_START(sha512_transform_ssse3) +SYM_FUNC_START(sha512_transform_ssse3) + ANNOTATE_NOENDBR # since this is called only via static_call test msglen, msglen je .Lnowork # Save GPRs diff --git a/lib/crypto/x86/sha512.h b/lib/crypto/x86/sha512.h new file mode 100644 index 0000000000000..c13503d9d57d9 --- /dev/null +++ b/lib/crypto/x86/sha512.h @@ -0,0 +1,54 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * x86-optimized SHA-512 block function + * + * Copyright 2025 Google LLC + */ + +#include <asm/fpu/api.h> +#include <crypto/internal/simd.h> +#include <linux/static_call.h> + +DEFINE_STATIC_CALL(sha512_blocks_x86, sha512_blocks_generic); + +#define DEFINE_X86_SHA512_FN(c_fn, asm_fn) \ + asmlinkage void asm_fn(struct sha512_block_state *state, \ + const u8 *data, size_t nblocks); \ + static void c_fn(struct sha512_block_state *state, const u8 *data, \ + size_t nblocks) \ + { \ + if (likely(crypto_simd_usable())) { \ + kernel_fpu_begin(); \ + asm_fn(state, data, nblocks); \ + kernel_fpu_end(); \ + } else { \ + sha512_blocks_generic(state, data, nblocks); \ + } \ + } + +DEFINE_X86_SHA512_FN(sha512_blocks_ssse3, sha512_transform_ssse3); +DEFINE_X86_SHA512_FN(sha512_blocks_avx, sha512_transform_avx); +DEFINE_X86_SHA512_FN(sha512_blocks_avx2, sha512_transform_rorx); + +static void sha512_blocks(struct sha512_block_state *state, + const u8 *data, size_t nblocks) +{ + static_call(sha512_blocks_x86)(state, data, nblocks); +} + +#define sha512_mod_init_arch sha512_mod_init_arch +static inline void sha512_mod_init_arch(void) +{ + if (cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL) && + boot_cpu_has(X86_FEATURE_AVX)) { + if (boot_cpu_has(X86_FEATURE_AVX2) && + boot_cpu_has(X86_FEATURE_BMI2)) + static_call_update(sha512_blocks_x86, + sha512_blocks_avx2); + else + 
static_call_update(sha512_blocks_x86, + sha512_blocks_avx); + } else if (boot_cpu_has(X86_FEATURE_SSSE3)) { + static_call_update(sha512_blocks_x86, sha512_blocks_ssse3); + } +} -- 2.49.0
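For reference, here is a minimal caller-side sketch of how in-kernel code ends up using the library interface that this x86 glue feeds into. It assumes the sha512_init()/sha512_update()/sha512_final() helpers and struct sha512_ctx declared by the <crypto/sha2.h> changes earlier in the series; the exact prototypes shown are illustrative, not authoritative.

    #include <crypto/sha2.h>
    #include <linux/types.h>

    static void demo_sha512(const u8 *msg, size_t msg_len,
                            u8 digest[SHA512_DIGEST_SIZE])
    {
            struct sha512_ctx ctx;

            /* Incremental interface: init, feed data, finalize. */
            sha512_init(&ctx);
            sha512_update(&ctx, msg, msg_len);
            sha512_final(&ctx, digest);
    }

A one-shot sha512(msg, msg_len, digest) helper presumably covers the common contiguous-buffer case; either way, the block processing behind these calls is dispatched through the sha512_blocks_x86 static call that sha512_mod_init_arch() above points at the best implementation the CPU supports.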
* [PATCH 16/16] crypto: sha512 - remove sha512_base.h 2025-06-11 2:09 [PATCH 00/16] SHA-512 library functions Eric Biggers ` (14 preceding siblings ...) 2025-06-11 2:09 ` [PATCH 15/16] lib/crypto/sha512: migrate x86-optimized " Eric Biggers @ 2025-06-11 2:09 ` Eric Biggers 15 siblings, 0 replies; 34+ messages in thread From: Eric Biggers @ 2025-06-11 2:09 UTC (permalink / raw) To: linux-crypto Cc: linux-kernel, linux-arm-kernel, linux-mips, linux-riscv, linux-s390, sparclinux, x86, Ard Biesheuvel, Jason A . Donenfeld , Linus Torvalds From: Eric Biggers <ebiggers@google.com> sha512_base.h is no longer used, so remove it. Signed-off-by: Eric Biggers <ebiggers@google.com> --- include/crypto/sha512_base.h | 117 ----------------------------------- 1 file changed, 117 deletions(-) delete mode 100644 include/crypto/sha512_base.h diff --git a/include/crypto/sha512_base.h b/include/crypto/sha512_base.h deleted file mode 100644 index d1361b3eb70b0..0000000000000 --- a/include/crypto/sha512_base.h +++ /dev/null @@ -1,117 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0-only */ -/* - * sha512_base.h - core logic for SHA-512 implementations - * - * Copyright (C) 2015 Linaro Ltd <ard.biesheuvel@linaro.org> - */ - -#ifndef _CRYPTO_SHA512_BASE_H -#define _CRYPTO_SHA512_BASE_H - -#include <crypto/internal/hash.h> -#include <crypto/sha2.h> -#include <linux/compiler.h> -#include <linux/math.h> -#include <linux/string.h> -#include <linux/types.h> -#include <linux/unaligned.h> - -typedef void (sha512_block_fn)(struct sha512_state *sst, u8 const *src, - int blocks); - -static inline int sha384_base_init(struct shash_desc *desc) -{ - struct sha512_state *sctx = shash_desc_ctx(desc); - - sctx->state[0] = SHA384_H0; - sctx->state[1] = SHA384_H1; - sctx->state[2] = SHA384_H2; - sctx->state[3] = SHA384_H3; - sctx->state[4] = SHA384_H4; - sctx->state[5] = SHA384_H5; - sctx->state[6] = SHA384_H6; - sctx->state[7] = SHA384_H7; - sctx->count[0] = sctx->count[1] = 0; - - return 0; -} - -static inline int sha512_base_init(struct shash_desc *desc) -{ - struct sha512_state *sctx = shash_desc_ctx(desc); - - sctx->state[0] = SHA512_H0; - sctx->state[1] = SHA512_H1; - sctx->state[2] = SHA512_H2; - sctx->state[3] = SHA512_H3; - sctx->state[4] = SHA512_H4; - sctx->state[5] = SHA512_H5; - sctx->state[6] = SHA512_H6; - sctx->state[7] = SHA512_H7; - sctx->count[0] = sctx->count[1] = 0; - - return 0; -} - -static inline int sha512_base_do_update_blocks(struct shash_desc *desc, - const u8 *data, - unsigned int len, - sha512_block_fn *block_fn) -{ - unsigned int remain = len - round_down(len, SHA512_BLOCK_SIZE); - struct sha512_state *sctx = shash_desc_ctx(desc); - - len -= remain; - sctx->count[0] += len; - if (sctx->count[0] < len) - sctx->count[1]++; - block_fn(sctx, data, len / SHA512_BLOCK_SIZE); - return remain; -} - -static inline int sha512_base_do_finup(struct shash_desc *desc, const u8 *src, - unsigned int len, - sha512_block_fn *block_fn) -{ - unsigned int bit_offset = SHA512_BLOCK_SIZE / 8 - 2; - struct sha512_state *sctx = shash_desc_ctx(desc); - union { - __be64 b64[SHA512_BLOCK_SIZE / 4]; - u8 u8[SHA512_BLOCK_SIZE * 2]; - } block = {}; - - if (len >= SHA512_BLOCK_SIZE) { - int remain; - - remain = sha512_base_do_update_blocks(desc, src, len, block_fn); - src += len - remain; - len = remain; - } - - if (len >= bit_offset * 8) - bit_offset += SHA512_BLOCK_SIZE / 8; - memcpy(&block, src, len); - block.u8[len] = 0x80; - sctx->count[0] += len; - block.b64[bit_offset] = cpu_to_be64(sctx->count[1] << 3 | - sctx->count[0] >> 61); - 
block.b64[bit_offset + 1] = cpu_to_be64(sctx->count[0] << 3); - block_fn(sctx, block.u8, (bit_offset + 2) * 8 / SHA512_BLOCK_SIZE); - memzero_explicit(&block, sizeof(block)); - - return 0; -} - -static inline int sha512_base_finish(struct shash_desc *desc, u8 *out) -{ - unsigned int digest_size = crypto_shash_digestsize(desc->tfm); - struct sha512_state *sctx = shash_desc_ctx(desc); - __be64 *digest = (__be64 *)out; - int i; - - for (i = 0; digest_size > 0; i++, digest_size -= sizeof(__be64)) - put_unaligned_be64(sctx->state[i], digest++); - return 0; -} - -#endif /* _CRYPTO_SHA512_BASE_H */ -- 2.49.0
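With sha512_base.h gone, the generic finalization logic it provided survives only inside lib/crypto/sha512.c. As a standalone illustration of what that finup step computes (a 0x80 terminator, zero padding, and the message bit length as a 128-bit big-endian integer in the last 16 bytes of the final 128-byte block), here is a hedged sketch in plain C; it deliberately ignores messages of 2^61 bytes or more, which the kernel code covers by carrying the count in two 64-bit words.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define SHA512_BLOCK_SIZE 128

    /*
     * Lay out the padded tail for a message whose last partial block is
     * "tail" (tail_len < 128) and whose total length is total_len bytes.
     * Returns how many 128-byte blocks were written to "out" (1 or 2).
     */
    static size_t sha512_pad_tail(const uint8_t *tail, size_t tail_len,
                                  uint64_t total_len,
                                  uint8_t out[2 * SHA512_BLOCK_SIZE])
    {
            /* The 0x80 byte plus the 16-byte length field must still fit. */
            size_t nblocks = (tail_len + 1 + 16 > SHA512_BLOCK_SIZE) ? 2 : 1;
            uint64_t bit_count = total_len * 8; /* assumes total_len < 2^61 */
            size_t i;

            memset(out, 0, 2 * SHA512_BLOCK_SIZE);
            memcpy(out, tail, tail_len);
            out[tail_len] = 0x80;

            /* Low 64 bits of the big-endian length; the high 64 stay zero. */
            for (i = 0; i < 8; i++)
                    out[nblocks * SHA512_BLOCK_SIZE - 1 - i] =
                            (uint8_t)(bit_count >> (8 * i));

            return nblocks;
    }

The removed sha512_base_do_finup() computed the same layout, deriving the 128-bit bit count from the two 64-bit byte counters in struct sha512_state before running the block function over the padded data.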
Thread overview: 34+ messages

2025-06-11 2:09 [PATCH 00/16] SHA-512 library functions Eric Biggers
2025-06-11 2:09 ` [PATCH 01/16] crypto: sha512 - rename conflicting symbols Eric Biggers
2025-06-11 2:09 ` [PATCH 02/16] lib/crypto/sha512: add support for SHA-384 and SHA-512 Eric Biggers
2025-06-11 2:09 ` [PATCH 03/16] lib/crypto/sha512: add HMAC-SHA384 and HMAC-SHA512 support Eric Biggers
2025-06-11 2:09 ` [PATCH 04/16] lib/crypto/sha512: add KUnit tests for SHA-384 and SHA-512 Eric Biggers
2025-06-11 2:09 ` [PATCH 05/16] lib/crypto/sha256: add KUnit tests for SHA-224 and SHA-256 Eric Biggers
2025-06-11 2:09 ` [PATCH 06/16] crypto: riscv/sha512 - stop depending on sha512_generic_block_fn Eric Biggers
2025-06-11 2:09 ` [PATCH 07/16] crypto: sha512 - replace sha512_generic with wrapper around SHA-512 library Eric Biggers
2025-06-11 2:24   ` Herbert Xu
2025-06-11 3:39     ` Eric Biggers
2025-06-11 3:46       ` Herbert Xu
2025-06-11 3:58         ` Eric Biggers
2025-06-13 5:36           ` Eric Biggers
2025-06-13 5:38             ` Herbert Xu
2025-06-13 5:54               ` Eric Biggers
2025-06-13 7:38                 ` Ard Biesheuvel
2025-06-13 8:39                   ` Herbert Xu
2025-06-13 14:51                     ` Eric Biggers
2025-06-13 16:35                       ` Linus Torvalds
2025-06-13 8:51   ` [PATCH] crypto: ahash - Stop legacy tfms from using the set_virt fallback path Herbert Xu
2025-06-15 3:18     ` Eric Biggers
2025-06-15 7:22       ` Ard Biesheuvel
2025-06-15 18:46         ` Eric Biggers
2025-06-15 19:37           ` Linus Torvalds
2025-06-16 4:09   ` [PATCH] crypto: ahash - Fix infinite recursion in ahash_def_finup Herbert Xu
2025-06-11 2:09 ` [PATCH 08/16] lib/crypto/sha512: migrate arm-optimized SHA-512 code to library Eric Biggers
2025-06-11 2:09 ` [PATCH 09/16] lib/crypto/sha512: migrate arm64-optimized " Eric Biggers
2025-06-11 2:09 ` [PATCH 10/16] mips: cavium-octeon: move octeon-crypto.h into asm directory Eric Biggers
2025-06-11 2:09 ` [PATCH 11/16] lib/crypto/sha512: migrate mips-optimized SHA-512 code to library Eric Biggers
2025-06-11 2:09 ` [PATCH 12/16] lib/crypto/sha512: migrate riscv-optimized " Eric Biggers
2025-06-11 2:09 ` [PATCH 13/16] lib/crypto/sha512: migrate s390-optimized " Eric Biggers
2025-06-11 2:09 ` [PATCH 14/16] lib/crypto/sha512: migrate sparc-optimized " Eric Biggers
2025-06-11 2:09 ` [PATCH 15/16] lib/crypto/sha512: migrate x86-optimized " Eric Biggers
2025-06-11 2:09 ` [PATCH 16/16] crypto: sha512 - remove sha512_base.h Eric Biggers