linux-crypto.vger.kernel.org archive mirror
* [PATCH 0/2] crypto: s390/hmac - Use generic hash export format
@ 2025-04-29  8:49 Herbert Xu
  2025-04-29  8:49 ` [PATCH 1/2] crypto: s390/hmac - Extend hash length counters to 128 bits Herbert Xu
  2025-04-29  8:49 ` [PATCH 2/2] crypto: s390/hmac - Use generic hash export format Herbert Xu
  0 siblings, 2 replies; 8+ messages in thread
From: Herbert Xu @ 2025-04-29  8:49 UTC (permalink / raw)
  To: Linux Crypto Mailing List
  Cc: Harald Freudenberger, Holger Dengler, linux-s390

This mini series converts the s390 hmac implementation to use
the generic export format.  First it extends the implementation
to support large lengths (you could always import a partial hash
with a length that is just about to overflow), and then it adds
export/import functions matching the format of the generic hmac
sha2 implementation.
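
For orientation: judging from the diffs in this series, the export blob for hmac(sha256) is laid out as the intermediate digest, then a 64-bit byte count, then the buffered partial block. A userspace sketch of that layout (field names are mine and purely illustrative; the kernel serializes these fields with memcpy/put_unaligned rather than declaring such a struct):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define SHA256_DIGEST_SIZE 32
#define SHA256_BLOCK_SIZE  64

/* Assumed export layout for hmac(sha256), inferred from the diffs in
 * this series; not an actual kernel declaration. */
struct hmac_sha256_export_sketch {
	uint8_t  state[SHA256_DIGEST_SIZE]; /* intermediate digest (chaining value) */
	uint64_t count;                     /* total bytes hashed so far */
	uint8_t  buf[SHA256_BLOCK_SIZE];    /* unprocessed partial block */
};
```

The sha384/sha512 variants use a 64-byte digest area and two u64 count halves, as the sha512 export function in patch 2 shows.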

Herbert Xu (2):
  crypto: s390/hmac - Extend hash length counters to 128 bits
  crypto: s390/hmac - Use generic hash export format

 arch/s390/crypto/Kconfig     |   4 +-
 arch/s390/crypto/hmac_s390.c | 200 ++++++++++++++++++++++++++++++++---
 2 files changed, 187 insertions(+), 17 deletions(-)

-- 
2.39.5


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH 1/2] crypto: s390/hmac - Extend hash length counters to 128 bits
  2025-04-29  8:49 [PATCH 0/2] crypto: s390/hmac - Use generic hash export format Herbert Xu
@ 2025-04-29  8:49 ` Herbert Xu
  2025-05-19 16:10   ` Holger Dengler
  2025-04-29  8:49 ` [PATCH 2/2] crypto: s390/hmac - Use generic hash export format Herbert Xu
  1 sibling, 1 reply; 8+ messages in thread
From: Herbert Xu @ 2025-04-29  8:49 UTC (permalink / raw)
  To: Linux Crypto Mailing List
  Cc: Harald Freudenberger, Holger Dengler, linux-s390

As sha512 requires 128-bit counters, extend the hash length counters
to that length.  Previously they were just 32 bits, which meant that
a sha256 hash over more than 4 GiB of data would be incorrect.
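
The carry handling this introduces can be modeled in plain userspace C. This is a sketch of the buflen[0]/buflen[1] update from the hunks below, not kernel code:

```c
#include <assert.h>	/* for the self-checks */
#include <stdint.h>

/*
 * Userspace model of a 128-bit byte counter kept as two u64 halves:
 * add `len` to the low half and, on unsigned wraparound, carry into
 * the high half.  Mirrors the ctx->buflen update in this patch.
 */
static void buflen_add(uint64_t buflen[2], uint64_t len)
{
	buflen[0] += len;
	if (buflen[0] < len)	/* low half wrapped around */
		buflen[1]++;
}
```

The sha512 input message bit-length (IMBL) is then the full 128-bit byte count shifted left by 3 to convert bytes to bits, as kmac_sha2_set_imbl does below.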

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
 arch/s390/crypto/hmac_s390.c | 25 ++++++++++++++-----------
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/arch/s390/crypto/hmac_s390.c b/arch/s390/crypto/hmac_s390.c
index bba9a818dfdc..e6edf1013228 100644
--- a/arch/s390/crypto/hmac_s390.c
+++ b/arch/s390/crypto/hmac_s390.c
@@ -72,23 +72,23 @@ struct s390_kmac_sha2_ctx {
 	u8 param[MAX_DIGEST_SIZE + MAX_IMBL_SIZE + MAX_BLOCK_SIZE];
 	union s390_kmac_gr0 gr0;
 	u8 buf[MAX_BLOCK_SIZE];
-	unsigned int buflen;
+	u64 buflen[2];
 };
 
 /*
  * kmac_sha2_set_imbl - sets the input message bit-length based on the blocksize
  */
-static inline void kmac_sha2_set_imbl(u8 *param, unsigned int buflen,
-				      unsigned int blocksize)
+static inline void kmac_sha2_set_imbl(u8 *param, u64 buflen_lo,
+				      u64 buflen_hi, unsigned int blocksize)
 {
 	u8 *imbl = param + SHA2_IMBL_OFFSET(blocksize);
 
 	switch (blocksize) {
 	case SHA256_BLOCK_SIZE:
-		*(u64 *)imbl = (u64)buflen * BITS_PER_BYTE;
+		*(u64 *)imbl = buflen_lo * BITS_PER_BYTE;
 		break;
 	case SHA512_BLOCK_SIZE:
-		*(u128 *)imbl = (u128)buflen * BITS_PER_BYTE;
+		*(u128 *)imbl = (((u128)buflen_hi << 64) + buflen_lo) << 3;
 		break;
 	default:
 		break;
@@ -176,7 +176,8 @@ static int s390_hmac_sha2_init(struct shash_desc *desc)
 	memcpy(ctx->param + SHA2_KEY_OFFSET(bs),
 	       tfm_ctx->key, bs);
 
-	ctx->buflen = 0;
+	ctx->buflen[0] = 0;
+	ctx->buflen[1] = 0;
 	ctx->gr0.reg = 0;
 	switch (crypto_shash_digestsize(desc->tfm)) {
 	case SHA224_DIGEST_SIZE:
@@ -206,8 +207,10 @@ static int s390_hmac_sha2_update(struct shash_desc *desc,
 	unsigned int offset, n;
 
 	/* check current buffer */
-	offset = ctx->buflen % bs;
-	ctx->buflen += len;
+	offset = ctx->buflen[0] % bs;
+	ctx->buflen[0] += len;
+	if (ctx->buflen[0] < len)
+		ctx->buflen[1]++;
 	if (offset + len < bs)
 		goto store;
 
@@ -243,8 +246,8 @@ static int s390_hmac_sha2_final(struct shash_desc *desc, u8 *out)
 	unsigned int bs = crypto_shash_blocksize(desc->tfm);
 
 	ctx->gr0.iimp = 0;
-	kmac_sha2_set_imbl(ctx->param, ctx->buflen, bs);
-	_cpacf_kmac(&ctx->gr0.reg, ctx->param, ctx->buf, ctx->buflen % bs);
+	kmac_sha2_set_imbl(ctx->param, ctx->buflen[0], ctx->buflen[1], bs);
+	_cpacf_kmac(&ctx->gr0.reg, ctx->param, ctx->buf, ctx->buflen[0] % bs);
 	memcpy(out, ctx->param, crypto_shash_digestsize(desc->tfm));
 
 	return 0;
@@ -262,7 +265,7 @@ static int s390_hmac_sha2_digest(struct shash_desc *desc,
 		return rc;
 
 	ctx->gr0.iimp = 0;
-	kmac_sha2_set_imbl(ctx->param, len,
+	kmac_sha2_set_imbl(ctx->param, len, 0,
 			   crypto_shash_blocksize(desc->tfm));
 	_cpacf_kmac(&ctx->gr0.reg, ctx->param, data, len);
 	memcpy(out, ctx->param, ds);
-- 
2.39.5



* [PATCH 2/2] crypto: s390/hmac - Use generic hash export format
  2025-04-29  8:49 [PATCH 0/2] crypto: s390/hmac - Use generic hash export format Herbert Xu
  2025-04-29  8:49 ` [PATCH 1/2] crypto: s390/hmac - Extend hash length counters to 128 bits Herbert Xu
@ 2025-04-29  8:49 ` Herbert Xu
  2025-04-29 12:04   ` T Pratham
  1 sibling, 1 reply; 8+ messages in thread
From: Herbert Xu @ 2025-04-29  8:49 UTC (permalink / raw)
  To: Linux Crypto Mailing List
  Cc: Harald Freudenberger, Holger Dengler, linux-s390

Convert the hash export format to match that of the generic
algorithm.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
 arch/s390/crypto/Kconfig     |   4 +-
 arch/s390/crypto/hmac_s390.c | 175 ++++++++++++++++++++++++++++++++++-
 2 files changed, 173 insertions(+), 6 deletions(-)

diff --git a/arch/s390/crypto/Kconfig b/arch/s390/crypto/Kconfig
index e2c27588b21a..342c639ce2dc 100644
--- a/arch/s390/crypto/Kconfig
+++ b/arch/s390/crypto/Kconfig
@@ -91,7 +91,9 @@ config CRYPTO_DES_S390
 
 config CRYPTO_HMAC_S390
 	tristate "Keyed-hash message authentication code: HMAC"
-	select CRYPTO_HASH
+	select CRYPTO_HMAC
+	select CRYPTO_SHA256
+	select CRYPTO_SHA512
 	help
 	  s390 specific HMAC hardware support for SHA224, SHA256, SHA384 and
 	  SHA512.
diff --git a/arch/s390/crypto/hmac_s390.c b/arch/s390/crypto/hmac_s390.c
index e6edf1013228..44f2a5d394d1 100644
--- a/arch/s390/crypto/hmac_s390.c
+++ b/arch/s390/crypto/hmac_s390.c
@@ -53,6 +53,7 @@
 #define SHA2_KEY_OFFSET(bs)	(SHA2_CV_SIZE(bs) + SHA2_IMBL_SIZE(bs))
 
 struct s390_hmac_ctx {
+	struct crypto_shash *fb;
 	u8 key[MAX_BLOCK_SIZE];
 };
 
@@ -157,6 +158,11 @@ static int s390_hmac_sha2_setkey(struct crypto_shash *tfm,
 	struct s390_hmac_ctx *tfm_ctx = crypto_shash_ctx(tfm);
 	unsigned int ds = crypto_shash_digestsize(tfm);
 	unsigned int bs = crypto_shash_blocksize(tfm);
+	int err;
+
+	err = crypto_shash_setkey(tfm_ctx->fb, key, keylen);
+	if (err)
+		return err;
 
 	memset(tfm_ctx, 0, sizeof(*tfm_ctx));
 
@@ -273,7 +279,160 @@ static int s390_hmac_sha2_digest(struct shash_desc *desc,
 	return 0;
 }
 
-#define S390_HMAC_SHA2_ALG(x) {						\
+static int s390_hmac_sha2_init_tfm(struct crypto_shash *tfm)
+{
+	struct s390_hmac_ctx *ctx = crypto_shash_ctx(tfm);
+	struct crypto_shash *fb;
+
+	fb = crypto_alloc_shash(crypto_shash_alg_name(tfm), 0,
+				CRYPTO_ALG_NEED_FALLBACK);
+	if (IS_ERR(fb))
+		return PTR_ERR(fb);
+
+	ctx->fb = fb;
+	return 0;
+}
+
+static void s390_hmac_sha2_exit_tfm(struct crypto_shash *tfm)
+{
+	struct s390_hmac_ctx *ctx = crypto_shash_ctx(tfm);
+
+	crypto_free_shash(ctx->fb);
+}
+
+static int s390_hmac_export_zero(struct shash_desc *desc, void *out)
+{
+	struct s390_hmac_ctx *ctx = crypto_shash_ctx(desc->tfm);
+	struct crypto_shash *fb = ctx->fb;
+	SHASH_DESC_ON_STACK(fbdesc, fb);
+
+	fbdesc->tfm = fb;
+	return crypto_shash_init(fbdesc) ?:
+	       crypto_shash_export(fbdesc, out);
+}
+
+static int s390_hmac_export_sha256(struct shash_desc *desc, void *out)
+{
+	struct s390_kmac_sha2_ctx *ctx = shash_desc_ctx(desc);
+	u64 total = ctx->buflen[0];
+	union {
+		u8 *u8;
+		u64 *u64;
+	} p = { .u8 = out };
+	unsigned int remain;
+	u64 hashed;
+	int err = 0;
+
+	hashed = round_down(total, SHA256_BLOCK_SIZE);
+	remain = total - hashed;
+
+	if (!hashed)
+		err = s390_hmac_export_zero(desc, out);
+	else
+		memcpy(p.u8, ctx->param, SHA256_DIGEST_SIZE);
+
+	p.u8 += SHA256_DIGEST_SIZE;
+	put_unaligned(total, p.u64++);
+
+	memcpy(p.u8, ctx->buf, remain);
+
+	return err;
+}
+
+static int s390_hmac_import_sha256(struct shash_desc *desc, const void *in)
+{
+	struct s390_kmac_sha2_ctx *ctx = shash_desc_ctx(desc);
+	union {
+		const u8 *u8;
+		const u64 *u64;
+	} p = { .u8 = in };
+	unsigned int remain;
+	u64 total;
+	int err;
+
+	err = s390_hmac_sha2_init(desc);
+	if (err)
+		return err;
+
+	memcpy(ctx->param, p.u8, SHA256_DIGEST_SIZE);
+	p.u8 += SHA256_DIGEST_SIZE;
+
+	total = get_unaligned(p.u64++);
+	remain = total % SHA256_BLOCK_SIZE;
+	ctx->buflen[0] = total;
+
+	if (total - remain)
+		ctx->gr0.ikp = 1;
+
+	memcpy(ctx->buf, p.u8, remain);
+
+	return 0;
+}
+
+static int s390_hmac_export_sha512(struct shash_desc *desc, void *out)
+{
+	struct s390_kmac_sha2_ctx *ctx = shash_desc_ctx(desc);
+	u64 total_hi = ctx->buflen[1];
+	u64 total = ctx->buflen[0];
+	union {
+		u8 *u8;
+		u32 *u32;
+		u64 *u64;
+	} p = { .u8 = out };
+	unsigned int remain;
+	u64 hashed;
+	int err = 0;
+
+	hashed = round_down(total, SHA512_BLOCK_SIZE);
+	remain = total - hashed;
+
+	if (!(hashed | total_hi))
+		err = s390_hmac_export_zero(desc, out);
+	else
+		memcpy(p.u8, ctx->param, SHA512_DIGEST_SIZE);
+
+	p.u8 += SHA512_DIGEST_SIZE;
+	put_unaligned(total, p.u64++);
+	put_unaligned(total_hi, p.u64++);
+
+	memcpy(p.u8, ctx->buf, remain);
+
+	return err;
+}
+
+static int s390_hmac_import_sha512(struct shash_desc *desc, const void *in)
+{
+	struct s390_kmac_sha2_ctx *ctx = shash_desc_ctx(desc);
+	union {
+		const u8 *u8;
+		const u64 *u64;
+	} p = { .u8 = in };
+	unsigned int remain;
+	u64 total, total_hi;
+	int err;
+
+	err = s390_hmac_sha2_init(desc);
+	if (err)
+		return err;
+
+	memcpy(ctx->param, p.u8, SHA512_DIGEST_SIZE);
+	p.u8 += SHA512_DIGEST_SIZE;
+
+	total = get_unaligned(p.u64++);
+	total_hi = get_unaligned(p.u64++);
+	ctx->buflen[0] = total;
+	ctx->buflen[1] = total_hi;
+
+	remain = total % SHA512_BLOCK_SIZE;
+	if ((total - remain) | total_hi)
+		ctx->gr0.ikp = 1;
+
+	memcpy(ctx->buf, p.u8, remain);
+
+	return 0;
+}
+
+#define S390_HMAC_SHA2_ALG(x, exf, imf, state) {			\
 	.fc = CPACF_KMAC_HMAC_SHA_##x,					\
 	.alg = {							\
 		.init = s390_hmac_sha2_init,				\
@@ -281,8 +440,13 @@ static int s390_hmac_sha2_digest(struct shash_desc *desc,
 		.final = s390_hmac_sha2_final,				\
 		.digest = s390_hmac_sha2_digest,			\
 		.setkey = s390_hmac_sha2_setkey,			\
+		.init_tfm = s390_hmac_sha2_init_tfm,			\
+		.exit_tfm = s390_hmac_sha2_exit_tfm,			\
+		.export = exf,						\
+		.import = imf,						\
 		.descsize = sizeof(struct s390_kmac_sha2_ctx),		\
 		.halg = {						\
+			.statesize = sizeof(struct state),		\
 			.digestsize = SHA##x##_DIGEST_SIZE,		\
 			.base = {					\
 				.cra_name = "hmac(sha" #x ")",		\
@@ -291,6 +455,7 @@ static int s390_hmac_sha2_digest(struct shash_desc *desc,
 				.cra_priority = 400,			\
 				.cra_ctxsize = sizeof(struct s390_hmac_ctx), \
 				.cra_module = THIS_MODULE,		\
+				.cra_flags = CRYPTO_ALG_NEED_FALLBACK,	\
 			},						\
 		},							\
 	},								\
@@ -301,10 +466,10 @@ static struct s390_hmac_alg {
 	unsigned int fc;
 	struct shash_alg alg;
 } s390_hmac_algs[] = {
-	S390_HMAC_SHA2_ALG(224),
-	S390_HMAC_SHA2_ALG(256),
-	S390_HMAC_SHA2_ALG(384),
-	S390_HMAC_SHA2_ALG(512),
+	S390_HMAC_SHA2_ALG(224, s390_hmac_export_sha256, s390_hmac_import_sha256, sha256_state),
+	S390_HMAC_SHA2_ALG(256, s390_hmac_export_sha256, s390_hmac_import_sha256, sha256_state),
+	S390_HMAC_SHA2_ALG(384, s390_hmac_export_sha512, s390_hmac_import_sha512, sha512_state),
+	S390_HMAC_SHA2_ALG(512, s390_hmac_export_sha512, s390_hmac_import_sha512, sha512_state),
 };
 
 static __always_inline void _s390_hmac_algs_unregister(void)
-- 
2.39.5



* Re: [PATCH 2/2] crypto: s390/hmac - Use generic hash export format
  2025-04-29  8:49 ` [PATCH 2/2] crypto: s390/hmac - Use generic hash export format Herbert Xu
@ 2025-04-29 12:04   ` T Pratham
  2025-04-30 10:34     ` [PATCH] crypto: s390/hmac - Use API partial block handling Herbert Xu
  0 siblings, 1 reply; 8+ messages in thread
From: T Pratham @ 2025-04-29 12:04 UTC (permalink / raw)
  To: Herbert Xu, Linux Crypto Mailing List
  Cc: Harald Freudenberger, Holger Dengler, linux-s390

On 29/04/25 14:19, Herbert Xu wrote:
> [...]
> +
> +static int s390_hmac_export_sha256(struct shash_desc *desc, void *out)
> +{
> +	struct s390_kmac_sha2_ctx *ctx = shash_desc_ctx(desc);
> +	u64 total = ctx->buflen[0];
> +	union {
> +		u8 *u8;
> +		u64 *u64;
> +	} p = { .u8 = out };
> +	unsigned int remain;
> +	u64 hashed;
> +	int err = 0;
> +
> +	hashed = round_down(total, SHA256_BLOCK_SIZE);
> +	remain = total - hashed;
> +
> +	if (!hashed)
> +		err = s390_hmac_export_zero(desc, out);
> +	else
> +		memcpy(p.u8, ctx->param, SHA256_DIGEST_SIZE);
> +
> +	p.u8 += SHA256_DIGEST_SIZE;
> +	put_unaligned(total, p.u64++);
> +
> +	memcpy(p.u8, ctx->buf, remain);
> +
> +	return err;
> +}
Why do pointer increments with different types through a union? It is unintuitive and prone to errors in the future: it is easy to mix up the layout of the data being stored. Why not just cast the void * to a struct exposing the different fields? Same with sha512.
> +
> + [...]
> +
> +static int s390_hmac_export_sha512(struct shash_desc *desc, void *out)
> +{
> +	struct s390_kmac_sha2_ctx *ctx = shash_desc_ctx(desc);
> +	u64 total_hi = ctx->buflen[1];
> +	u64 total = ctx->buflen[0];
You could use uniform naming here: total_hi and total_lo.

Regards
T Pratham <t-pratham@ti.com>



* [PATCH] crypto: s390/hmac - Use API partial block handling
  2025-04-29 12:04   ` T Pratham
@ 2025-04-30 10:34     ` Herbert Xu
  2025-05-02  9:00       ` [v2 PATCH] " Herbert Xu
  0 siblings, 1 reply; 8+ messages in thread
From: Herbert Xu @ 2025-04-30 10:34 UTC (permalink / raw)
  To: T Pratham
  Cc: Linux Crypto Mailing List, Harald Freudenberger, Holger Dengler,
	linux-s390

On Tue, Apr 29, 2025 at 05:34:18PM +0530, T Pratham wrote:
>
> Why do pointer increments with different types through a union? It is unintuitive and prone to errors in the future: it is easy to mix up the layout of the data being stored. Why not just cast the void * to a struct exposing the different fields? Same with sha512.

You can't cast a void * to a random struct and start writing to
it because of alignment faults.  Now s390 actually happens to be
OK in that respect, but this way of writing exports is used by
my ahash patches as well and I would like to keep them consistent.
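
A userspace sketch of that union-cursor pattern, using memcpy as a stand-in for put_unaligned (names and sizes are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define DIGEST_SIZE 32	/* illustrative; sha256 digest size */

/*
 * Walk one cursor through the export blob: digest first, then the
 * 64-bit count, then the buffered tail.  memcpy keeps the u64 store
 * safe even when `out` is not 8-byte aligned, which is the point of
 * avoiding a struct cast.
 */
static void export_sketch(uint8_t *out, const uint8_t *digest,
			  uint64_t total, const uint8_t *tail,
			  unsigned int tail_len)
{
	union {
		uint8_t  *u8;
		uint64_t *u64;
	} p = { .u8 = out };

	memcpy(p.u8, digest, DIGEST_SIZE);
	p.u8 += DIGEST_SIZE;
	memcpy(p.u64++, &total, sizeof(total));
	memcpy(p.u8, tail, tail_len);
}
```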

> Can use uniform naming here. total_hi and total_lo.

Thanks.  I've got rid of them altogether.

It turns out that the patch I sent out yesterday is actually
wrong as it predates the shash partial block API.  Here is a
more up-to-date version:

---8<---
Use the Crypto API partial block handling.

Also switch to the generic export format.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
 arch/s390/crypto/hmac_s390.c | 154 ++++++++++++++++++++++++-----------
 1 file changed, 108 insertions(+), 46 deletions(-)

diff --git a/arch/s390/crypto/hmac_s390.c b/arch/s390/crypto/hmac_s390.c
index e6edf1013228..474b4233effd 100644
--- a/arch/s390/crypto/hmac_s390.c
+++ b/arch/s390/crypto/hmac_s390.c
@@ -9,10 +9,14 @@
 #define pr_fmt(fmt)	KMSG_COMPONENT ": " fmt
 
 #include <asm/cpacf.h>
-#include <crypto/sha2.h>
 #include <crypto/internal/hash.h>
+#include <crypto/hmac.h>
+#include <crypto/sha2.h>
 #include <linux/cpufeature.h>
+#include <linux/errno.h>
+#include <linux/kernel.h>
 #include <linux/module.h>
+#include <linux/string.h>
 
 /*
  * KMAC param block layout for sha2 function codes:
@@ -71,7 +75,6 @@ union s390_kmac_gr0 {
 struct s390_kmac_sha2_ctx {
 	u8 param[MAX_DIGEST_SIZE + MAX_IMBL_SIZE + MAX_BLOCK_SIZE];
 	union s390_kmac_gr0 gr0;
-	u8 buf[MAX_BLOCK_SIZE];
 	u64 buflen[2];
 };
 
@@ -95,8 +98,8 @@ static inline void kmac_sha2_set_imbl(u8 *param, u64 buflen_lo,
 	}
 }
 
-static int hash_key(const u8 *in, unsigned int inlen,
-		    u8 *digest, unsigned int digestsize)
+static int hash_data(const u8 *in, unsigned int inlen,
+		     u8 *digest, unsigned int digestsize, bool final)
 {
 	unsigned long func;
 	union {
@@ -123,19 +126,23 @@ static int hash_key(const u8 *in, unsigned int inlen,
 
 	switch (digestsize) {
 	case SHA224_DIGEST_SIZE:
-		func = CPACF_KLMD_SHA_256;
+		func = final ? CPACF_KLMD_SHA_256 : CPACF_KIMD_SHA_256;
 		PARAM_INIT(256, 224, inlen * 8);
+		if (!final)
+			digestsize = SHA256_DIGEST_SIZE;
 		break;
 	case SHA256_DIGEST_SIZE:
-		func = CPACF_KLMD_SHA_256;
+		func = final ? CPACF_KLMD_SHA_256 : CPACF_KIMD_SHA_256;
 		PARAM_INIT(256, 256, inlen * 8);
 		break;
 	case SHA384_DIGEST_SIZE:
-		func = CPACF_KLMD_SHA_512;
+		func = final ? CPACF_KLMD_SHA_512 : CPACF_KIMD_SHA_512;
 		PARAM_INIT(512, 384, inlen * 8);
+		if (!final)
+			digestsize = SHA512_DIGEST_SIZE;
 		break;
 	case SHA512_DIGEST_SIZE:
-		func = CPACF_KLMD_SHA_512;
+		func = final ? CPACF_KLMD_SHA_512 : CPACF_KIMD_SHA_512;
 		PARAM_INIT(512, 512, inlen * 8);
 		break;
 	default:
@@ -151,6 +158,12 @@ static int hash_key(const u8 *in, unsigned int inlen,
 	return 0;
 }
 
+static int hash_key(const u8 *in, unsigned int inlen,
+		    u8 *digest, unsigned int digestsize)
+{
+	return hash_data(in, inlen, digest, digestsize, true);
+}
+
 static int s390_hmac_sha2_setkey(struct crypto_shash *tfm,
 				 const u8 *key, unsigned int keylen)
 {
@@ -204,50 +217,31 @@ static int s390_hmac_sha2_update(struct shash_desc *desc,
 {
 	struct s390_kmac_sha2_ctx *ctx = shash_desc_ctx(desc);
 	unsigned int bs = crypto_shash_blocksize(desc->tfm);
-	unsigned int offset, n;
+	unsigned int n = round_down(len, bs);
 
-	/* check current buffer */
-	offset = ctx->buflen[0] % bs;
-	ctx->buflen[0] += len;
-	if (ctx->buflen[0] < len)
+	ctx->buflen[0] += n;
+	if (ctx->buflen[0] < n)
 		ctx->buflen[1]++;
-	if (offset + len < bs)
-		goto store;
 
-	/* process one stored block */
-	if (offset) {
-		n = bs - offset;
-		memcpy(ctx->buf + offset, data, n);
-		ctx->gr0.iimp = 1;
-		_cpacf_kmac(&ctx->gr0.reg, ctx->param, ctx->buf, bs);
-		data += n;
-		len -= n;
-		offset = 0;
-	}
 	/* process as many blocks as possible */
-	if (len >= bs) {
-		n = (len / bs) * bs;
-		ctx->gr0.iimp = 1;
-		_cpacf_kmac(&ctx->gr0.reg, ctx->param, data, n);
-		data += n;
-		len -= n;
-	}
-store:
-	/* store incomplete block in buffer */
-	if (len)
-		memcpy(ctx->buf + offset, data, len);
-
-	return 0;
+	ctx->gr0.iimp = 1;
+	_cpacf_kmac(&ctx->gr0.reg, ctx->param, data, n);
+	return len - n;
 }
 
-static int s390_hmac_sha2_final(struct shash_desc *desc, u8 *out)
+static int s390_hmac_sha2_finup(struct shash_desc *desc, const u8 *src,
+				unsigned int len, u8 *out)
 {
 	struct s390_kmac_sha2_ctx *ctx = shash_desc_ctx(desc);
 	unsigned int bs = crypto_shash_blocksize(desc->tfm);
 
+	ctx->buflen[0] += len;
+	if (ctx->buflen[0] < len)
+		ctx->buflen[1]++;
+
 	ctx->gr0.iimp = 0;
 	kmac_sha2_set_imbl(ctx->param, ctx->buflen[0], ctx->buflen[1], bs);
-	_cpacf_kmac(&ctx->gr0.reg, ctx->param, ctx->buf, ctx->buflen[0] % bs);
+	_cpacf_kmac(&ctx->gr0.reg, ctx->param, src, len);
 	memcpy(out, ctx->param, crypto_shash_digestsize(desc->tfm));
 
 	return 0;
@@ -273,22 +267,90 @@ static int s390_hmac_sha2_digest(struct shash_desc *desc,
 	return 0;
 }
 
-#define S390_HMAC_SHA2_ALG(x) {						\
+static int s390_hmac_export_zero(struct shash_desc *desc, void *out)
+{
+	struct crypto_shash *tfm = desc->tfm;
+	u8 ipad[SHA512_BLOCK_SIZE];
+	struct s390_hmac_ctx *ctx;
+	unsigned int bs;
+	int err, i;
+
+	ctx = crypto_shash_ctx(tfm);
+	bs = crypto_shash_blocksize(tfm);
+	for (i = 0; i < bs; i++)
+		ipad[i] = ctx->key[i] ^ HMAC_IPAD_VALUE;
+
+	err = hash_data(ipad, bs, out, crypto_shash_digestsize(tfm), false);
+	memzero_explicit(ipad, sizeof(ipad));
+	return err;
+}
+
+static int s390_hmac_export(struct shash_desc *desc, void *out)
+{
+	struct s390_kmac_sha2_ctx *ctx = shash_desc_ctx(desc);
+	unsigned int ds = crypto_shash_digestsize(desc->tfm);
+	union {
+		u8 *u8;
+		u64 *u64;
+	} p = { .u8 = out };
+	int err = 0;
+
+	if (!ctx->gr0.ikp)
+		err = s390_hmac_export_zero(desc, out);
+	else
+		memcpy(p.u8, ctx->param, ds);
+	p.u8 += ds;
+	put_unaligned(ctx->buflen[0], p.u64++);
+	if (ds == SHA512_DIGEST_SIZE)
+		put_unaligned(ctx->buflen[1], p.u64);
+	return err;
+}
+
+static int s390_hmac_import(struct shash_desc *desc, const void *in)
+{
+	struct s390_kmac_sha2_ctx *ctx = shash_desc_ctx(desc);
+	unsigned int ds = crypto_shash_digestsize(desc->tfm);
+	union {
+		const u8 *u8;
+		const u64 *u64;
+	} p = { .u8 = in };
+	int err;
+
+	err = s390_hmac_sha2_init(desc);
+	if (err)
+		return err;
+
+	memcpy(ctx->param, p.u8, ds);
+	p.u8 += ds;
+	ctx->buflen[0] = get_unaligned(p.u64++);
+	if (ds == SHA512_DIGEST_SIZE)
+		ctx->buflen[1] = get_unaligned(p.u64);
+	if (ctx->buflen[0] | ctx->buflen[1])
+		ctx->gr0.ikp = 1;
+	return 0;
+}
+
+#define S390_HMAC_SHA2_ALG(x, ss) {					\
 	.fc = CPACF_KMAC_HMAC_SHA_##x,					\
 	.alg = {							\
 		.init = s390_hmac_sha2_init,				\
 		.update = s390_hmac_sha2_update,			\
-		.final = s390_hmac_sha2_final,				\
+		.finup = s390_hmac_sha2_finup,				\
 		.digest = s390_hmac_sha2_digest,			\
 		.setkey = s390_hmac_sha2_setkey,			\
+		.export = s390_hmac_export,				\
+		.import = s390_hmac_import,				\
 		.descsize = sizeof(struct s390_kmac_sha2_ctx),		\
 		.halg = {						\
+			.statesize = ss,				\
 			.digestsize = SHA##x##_DIGEST_SIZE,		\
 			.base = {					\
 				.cra_name = "hmac(sha" #x ")",		\
 				.cra_driver_name = "hmac_s390_sha" #x,	\
 				.cra_blocksize = SHA##x##_BLOCK_SIZE,	\
 				.cra_priority = 400,			\
+				.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | \
+					     CRYPTO_AHASH_ALG_FINUP_MAX, \
 				.cra_ctxsize = sizeof(struct s390_hmac_ctx), \
 				.cra_module = THIS_MODULE,		\
 			},						\
@@ -301,10 +363,10 @@ static struct s390_hmac_alg {
 	unsigned int fc;
 	struct shash_alg alg;
 } s390_hmac_algs[] = {
-	S390_HMAC_SHA2_ALG(224),
-	S390_HMAC_SHA2_ALG(256),
-	S390_HMAC_SHA2_ALG(384),
-	S390_HMAC_SHA2_ALG(512),
+	S390_HMAC_SHA2_ALG(224, sizeof(struct crypto_sha256_state)),
+	S390_HMAC_SHA2_ALG(256, sizeof(struct crypto_sha256_state)),
+	S390_HMAC_SHA2_ALG(384, SHA512_STATE_SIZE),
+	S390_HMAC_SHA2_ALG(512, SHA512_STATE_SIZE),
 };
 
 static __always_inline void _s390_hmac_algs_unregister(void)
-- 
2.39.5
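
As a quick userspace model of the partial-block contract the patch above relies on (my reading of the diff, not API documentation): ->update() now hashes only whole blocks and returns the number of tail bytes for the API core to buffer and hand back later via ->finup().

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the new ->update() contract: consume round_down(len, bs)
 * bytes and report the remainder back to the caller.  The real code
 * feeds the n consumed bytes to CPACF; here we only track the count.
 */
static unsigned int update_sketch(uint64_t *processed,
				  const uint8_t *data, unsigned int len,
				  unsigned int bs)
{
	unsigned int n = len - (len % bs);	/* round_down(len, bs) */

	*processed += n;
	(void)data;		/* hardware would hash data[0..n) here */
	return len - n;		/* tail bytes for the core to buffer */
}
```
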

-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* [v2 PATCH] crypto: s390/hmac - Use API partial block handling
  2025-04-30 10:34     ` [PATCH] crypto: s390/hmac - Use API partial block handling Herbert Xu
@ 2025-05-02  9:00       ` Herbert Xu
  2025-05-19 16:12         ` Holger Dengler
  0 siblings, 1 reply; 8+ messages in thread
From: Herbert Xu @ 2025-05-02  9:00 UTC (permalink / raw)
  To: T Pratham
  Cc: Linux Crypto Mailing List, Harald Freudenberger, Holger Dengler,
	linux-s390

v2 fixes the export of 224 and 384.

---8<---
Use the Crypto API partial block handling.

Also switch to the generic export format.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
 arch/s390/crypto/hmac_s390.c | 153 ++++++++++++++++++++++++-----------
 1 file changed, 107 insertions(+), 46 deletions(-)

diff --git a/arch/s390/crypto/hmac_s390.c b/arch/s390/crypto/hmac_s390.c
index e6edf1013228..93a1098d9f8d 100644
--- a/arch/s390/crypto/hmac_s390.c
+++ b/arch/s390/crypto/hmac_s390.c
@@ -9,10 +9,14 @@
 #define pr_fmt(fmt)	KMSG_COMPONENT ": " fmt
 
 #include <asm/cpacf.h>
-#include <crypto/sha2.h>
 #include <crypto/internal/hash.h>
+#include <crypto/hmac.h>
+#include <crypto/sha2.h>
 #include <linux/cpufeature.h>
+#include <linux/errno.h>
+#include <linux/kernel.h>
 #include <linux/module.h>
+#include <linux/string.h>
 
 /*
  * KMAC param block layout for sha2 function codes:
@@ -71,7 +75,6 @@ union s390_kmac_gr0 {
 struct s390_kmac_sha2_ctx {
 	u8 param[MAX_DIGEST_SIZE + MAX_IMBL_SIZE + MAX_BLOCK_SIZE];
 	union s390_kmac_gr0 gr0;
-	u8 buf[MAX_BLOCK_SIZE];
 	u64 buflen[2];
 };
 
@@ -95,8 +98,8 @@ static inline void kmac_sha2_set_imbl(u8 *param, u64 buflen_lo,
 	}
 }
 
-static int hash_key(const u8 *in, unsigned int inlen,
-		    u8 *digest, unsigned int digestsize)
+static int hash_data(const u8 *in, unsigned int inlen,
+		     u8 *digest, unsigned int digestsize, bool final)
 {
 	unsigned long func;
 	union {
@@ -123,19 +126,23 @@ static int hash_key(const u8 *in, unsigned int inlen,
 
 	switch (digestsize) {
 	case SHA224_DIGEST_SIZE:
-		func = CPACF_KLMD_SHA_256;
+		func = final ? CPACF_KLMD_SHA_256 : CPACF_KIMD_SHA_256;
 		PARAM_INIT(256, 224, inlen * 8);
+		if (!final)
+			digestsize = SHA256_DIGEST_SIZE;
 		break;
 	case SHA256_DIGEST_SIZE:
-		func = CPACF_KLMD_SHA_256;
+		func = final ? CPACF_KLMD_SHA_256 : CPACF_KIMD_SHA_256;
 		PARAM_INIT(256, 256, inlen * 8);
 		break;
 	case SHA384_DIGEST_SIZE:
-		func = CPACF_KLMD_SHA_512;
+		func = final ? CPACF_KLMD_SHA_512 : CPACF_KIMD_SHA_512;
 		PARAM_INIT(512, 384, inlen * 8);
+		if (!final)
+			digestsize = SHA512_DIGEST_SIZE;
 		break;
 	case SHA512_DIGEST_SIZE:
-		func = CPACF_KLMD_SHA_512;
+		func = final ? CPACF_KLMD_SHA_512 : CPACF_KIMD_SHA_512;
 		PARAM_INIT(512, 512, inlen * 8);
 		break;
 	default:
@@ -151,6 +158,12 @@ static int hash_key(const u8 *in, unsigned int inlen,
 	return 0;
 }
 
+static int hash_key(const u8 *in, unsigned int inlen,
+		    u8 *digest, unsigned int digestsize)
+{
+	return hash_data(in, inlen, digest, digestsize, true);
+}
+
 static int s390_hmac_sha2_setkey(struct crypto_shash *tfm,
 				 const u8 *key, unsigned int keylen)
 {
@@ -204,50 +217,31 @@ static int s390_hmac_sha2_update(struct shash_desc *desc,
 {
 	struct s390_kmac_sha2_ctx *ctx = shash_desc_ctx(desc);
 	unsigned int bs = crypto_shash_blocksize(desc->tfm);
-	unsigned int offset, n;
+	unsigned int n = round_down(len, bs);
 
-	/* check current buffer */
-	offset = ctx->buflen[0] % bs;
-	ctx->buflen[0] += len;
-	if (ctx->buflen[0] < len)
+	ctx->buflen[0] += n;
+	if (ctx->buflen[0] < n)
 		ctx->buflen[1]++;
-	if (offset + len < bs)
-		goto store;
 
-	/* process one stored block */
-	if (offset) {
-		n = bs - offset;
-		memcpy(ctx->buf + offset, data, n);
-		ctx->gr0.iimp = 1;
-		_cpacf_kmac(&ctx->gr0.reg, ctx->param, ctx->buf, bs);
-		data += n;
-		len -= n;
-		offset = 0;
-	}
 	/* process as many blocks as possible */
-	if (len >= bs) {
-		n = (len / bs) * bs;
-		ctx->gr0.iimp = 1;
-		_cpacf_kmac(&ctx->gr0.reg, ctx->param, data, n);
-		data += n;
-		len -= n;
-	}
-store:
-	/* store incomplete block in buffer */
-	if (len)
-		memcpy(ctx->buf + offset, data, len);
-
-	return 0;
+	ctx->gr0.iimp = 1;
+	_cpacf_kmac(&ctx->gr0.reg, ctx->param, data, n);
+	return len - n;
 }
 
-static int s390_hmac_sha2_final(struct shash_desc *desc, u8 *out)
+static int s390_hmac_sha2_finup(struct shash_desc *desc, const u8 *src,
+				unsigned int len, u8 *out)
 {
 	struct s390_kmac_sha2_ctx *ctx = shash_desc_ctx(desc);
 	unsigned int bs = crypto_shash_blocksize(desc->tfm);
 
+	ctx->buflen[0] += len;
+	if (ctx->buflen[0] < len)
+		ctx->buflen[1]++;
+
 	ctx->gr0.iimp = 0;
 	kmac_sha2_set_imbl(ctx->param, ctx->buflen[0], ctx->buflen[1], bs);
-	_cpacf_kmac(&ctx->gr0.reg, ctx->param, ctx->buf, ctx->buflen[0] % bs);
+	_cpacf_kmac(&ctx->gr0.reg, ctx->param, src, len);
 	memcpy(out, ctx->param, crypto_shash_digestsize(desc->tfm));
 
 	return 0;
@@ -273,22 +267,89 @@ static int s390_hmac_sha2_digest(struct shash_desc *desc,
 	return 0;
 }
 
-#define S390_HMAC_SHA2_ALG(x) {						\
+static int s390_hmac_export_zero(struct shash_desc *desc, void *out)
+{
+	struct crypto_shash *tfm = desc->tfm;
+	u8 ipad[SHA512_BLOCK_SIZE];
+	struct s390_hmac_ctx *ctx;
+	unsigned int bs;
+	int err, i;
+
+	ctx = crypto_shash_ctx(tfm);
+	bs = crypto_shash_blocksize(tfm);
+	for (i = 0; i < bs; i++)
+		ipad[i] = ctx->key[i] ^ HMAC_IPAD_VALUE;
+
+	err = hash_data(ipad, bs, out, crypto_shash_digestsize(tfm), false);
+	memzero_explicit(ipad, sizeof(ipad));
+	return err;
+}
+
+static int s390_hmac_export(struct shash_desc *desc, void *out)
+{
+	struct s390_kmac_sha2_ctx *ctx = shash_desc_ctx(desc);
+	unsigned int bs = crypto_shash_blocksize(desc->tfm);
+	unsigned int ds = bs / 2;
+	union {
+		u8 *u8;
+		u64 *u64;
+	} p = { .u8 = out };
+	int err = 0;
+
+	if (!ctx->gr0.ikp)
+		err = s390_hmac_export_zero(desc, out);
+	else
+		memcpy(p.u8, ctx->param, ds);
+	p.u8 += ds;
+	put_unaligned(ctx->buflen[0], p.u64++);
+	if (ds == SHA512_DIGEST_SIZE)
+		put_unaligned(ctx->buflen[1], p.u64);
+	return err;
+}
+
+static int s390_hmac_import(struct shash_desc *desc, const void *in)
+{
+	struct s390_kmac_sha2_ctx *ctx = shash_desc_ctx(desc);
+	unsigned int bs = crypto_shash_blocksize(desc->tfm);
+	unsigned int ds = bs / 2;
+	union {
+		const u8 *u8;
+		const u64 *u64;
+	} p = { .u8 = in };
+	int err;
+
+	err = s390_hmac_sha2_init(desc);
+	memcpy(ctx->param, p.u8, ds);
+	p.u8 += ds;
+	ctx->buflen[0] = get_unaligned(p.u64++);
+	if (ds == SHA512_DIGEST_SIZE)
+		ctx->buflen[1] = get_unaligned(p.u64);
+	if (ctx->buflen[0] | ctx->buflen[1])
+		ctx->gr0.ikp = 1;
+	return err;
+}
+
+#define S390_HMAC_SHA2_ALG(x, ss) {					\
 	.fc = CPACF_KMAC_HMAC_SHA_##x,					\
 	.alg = {							\
 		.init = s390_hmac_sha2_init,				\
 		.update = s390_hmac_sha2_update,			\
-		.final = s390_hmac_sha2_final,				\
+		.finup = s390_hmac_sha2_finup,				\
 		.digest = s390_hmac_sha2_digest,			\
 		.setkey = s390_hmac_sha2_setkey,			\
+		.export = s390_hmac_export,				\
+		.import = s390_hmac_import,				\
 		.descsize = sizeof(struct s390_kmac_sha2_ctx),		\
 		.halg = {						\
+			.statesize = ss,				\
 			.digestsize = SHA##x##_DIGEST_SIZE,		\
 			.base = {					\
 				.cra_name = "hmac(sha" #x ")",		\
 				.cra_driver_name = "hmac_s390_sha" #x,	\
 				.cra_blocksize = SHA##x##_BLOCK_SIZE,	\
 				.cra_priority = 400,			\
+				.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | \
+					     CRYPTO_AHASH_ALG_FINUP_MAX, \
 				.cra_ctxsize = sizeof(struct s390_hmac_ctx), \
 				.cra_module = THIS_MODULE,		\
 			},						\
@@ -301,10 +362,10 @@ static struct s390_hmac_alg {
 	unsigned int fc;
 	struct shash_alg alg;
 } s390_hmac_algs[] = {
-	S390_HMAC_SHA2_ALG(224),
-	S390_HMAC_SHA2_ALG(256),
-	S390_HMAC_SHA2_ALG(384),
-	S390_HMAC_SHA2_ALG(512),
+	S390_HMAC_SHA2_ALG(224, sizeof(struct crypto_sha256_state)),
+	S390_HMAC_SHA2_ALG(256, sizeof(struct crypto_sha256_state)),
+	S390_HMAC_SHA2_ALG(384, SHA512_STATE_SIZE),
+	S390_HMAC_SHA2_ALG(512, SHA512_STATE_SIZE),
 };
 
 static __always_inline void _s390_hmac_algs_unregister(void)
-- 
2.39.5

-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [PATCH 1/2] crypto: s390/hmac - Extend hash length counters to 128 bits
  2025-04-29  8:49 ` [PATCH 1/2] crypto: s390/hmac - Extend hash length counters to 128 bits Herbert Xu
@ 2025-05-19 16:10   ` Holger Dengler
  0 siblings, 0 replies; 8+ messages in thread
From: Holger Dengler @ 2025-05-19 16:10 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Harald Freudenberger, linux-s390, Linux Crypto Mailing List

On 29/04/2025 10:49, Herbert Xu wrote:
> As sha512 requires 128-bit counters, extend the hash length counters
> to that length.  Previously they were just 32 bits, which meant that
> a sha256 hash over more than 4 GiB of data would be incorrect.
> 
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

Looks good to me.
Tested-by: Holger Dengler <dengler@linux.ibm.com>
Reviewed-by: Holger Dengler <dengler@linux.ibm.com>

-- 
Mit freundlichen Grüßen / Kind regards
Holger Dengler
--
IBM Systems, Linux on IBM Z Development
dengler@linux.ibm.com



* Re: [v2 PATCH] crypto: s390/hmac - Use API partial block handling
  2025-05-02  9:00       ` [v2 PATCH] " Herbert Xu
@ 2025-05-19 16:12         ` Holger Dengler
  0 siblings, 0 replies; 8+ messages in thread
From: Holger Dengler @ 2025-05-19 16:12 UTC (permalink / raw)
  To: Herbert Xu, T Pratham
  Cc: Linux Crypto Mailing List, Harald Freudenberger, linux-s390

On 02/05/2025 11:00, Herbert Xu wrote:
> v2 fixes the export of 224 and 384.
> 
> ---8<---
> Use the Crypto API partial block handling.
> 
> Also switch to the generic export format.
> 
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

This patch was also included in the test. Looks good to me.
Tested-by: Holger Dengler <dengler@linux.ibm.com>
Reviewed-by: Holger Dengler <dengler@linux.ibm.com>

-- 
Mit freundlichen Grüßen / Kind regards
Holger Dengler
--
IBM Systems, Linux on IBM Z Development
dengler@linux.ibm.com


