* [PATCH 0/2] crypto: shash optimizations
From: Eric Biggers @ 2023-10-09  7:32 UTC
  To: linux-crypto

This series fixes some inefficiencies in crypto_shash_digest() and
crypto_shash_finup(), specifically in cases where the algorithm doesn't
implement ->digest or ->finup, respectively.
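
For context, callers typically reach these one-shot paths through
helpers such as crypto_shash_tfm_digest().  A minimal sketch, not part
of this series (the function name example_hash() and the choice of
"sha256" are just for illustration):

	#include <linux/err.h>
	#include <crypto/hash.h>

	/* One-shot hash of a buffer.  crypto_shash_tfm_digest() lands in
	 * crypto_shash_digest(), which falls back to ->init/->update/->final
	 * when the algorithm does not provide ->digest. */
	static int example_hash(const u8 *data, unsigned int len, u8 *out)
	{
		struct crypto_shash *tfm = crypto_alloc_shash("sha256", 0, 0);
		int err;

		if (IS_ERR(tfm))
			return PTR_ERR(tfm);
		err = crypto_shash_tfm_digest(tfm, data, len, out);
		crypto_free_shash(tfm);
		return err;
	}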

Eric Biggers (2):
  crypto: shash - optimize the default digest and finup
  crypto: shash - fold shash_digest_unaligned() into
    crypto_shash_digest()

 crypto/shash.c | 27 +++++++++++++++++++--------
 1 file changed, 19 insertions(+), 8 deletions(-)


base-commit: 8468516f9f93a41dc65158b6428a1a1039c68f20
-- 
2.42.0


* [PATCH 1/2] crypto: shash - optimize the default digest and finup
From: Eric Biggers @ 2023-10-09  7:32 UTC
  To: linux-crypto

From: Eric Biggers <ebiggers@google.com>

For an shash algorithm that doesn't implement ->digest, currently
crypto_shash_digest() with aligned input makes 5 indirect calls: 1 to
shash_digest_unaligned(), 1 to ->init, 2 to ->update ('alignmask + 1'
bytes, then the rest), and 1 to ->final.  This is true even if the
algorithm implements ->finup.  This is caused by an unnecessary fallback
to code meant to handle unaligned inputs.  In fact,
crypto_shash_digest() already does the needed alignment check earlier.
Therefore, optimize the number of indirect calls for aligned inputs to 3
when the algorithm implements ->finup.  It remains at 5 when the
algorithm implements neither ->finup nor ->digest.

Similarly, for an shash algorithm that doesn't implement ->finup,
currently crypto_shash_finup() with aligned input makes 4 indirect
calls: 1 to shash_finup_unaligned(), 2 to ->update, and
1 to ->final.  Optimize this to 3 calls.
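
To make the counting concrete, an informal sketch (not part of the
patch) of the aligned-input call chains before and after:

	/*
	 * Before (no ->digest, aligned input): 5 indirect calls
	 *
	 *   crypto_shash_digest()
	 *     -> shash_digest_unaligned()   [1, via ->digest]
	 *        -> ->init()                [2]
	 *        -> ->update()              [3, first 'alignmask + 1' bytes]
	 *        -> ->update()              [4, the rest]
	 *        -> ->final()               [5]
	 *
	 * After (no ->digest, ->finup implemented): 3 indirect calls
	 *
	 *   crypto_shash_digest()
	 *     -> shash_default_digest()     [1, via ->digest]
	 *        -> ->init()                [2]
	 *        -> ->finup()               [3]
	 */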

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/shash.c | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/crypto/shash.c b/crypto/shash.c
index 1fadb6b59bdcc..d99dc2f94c65f 100644
--- a/crypto/shash.c
+++ b/crypto/shash.c
@@ -184,20 +184,29 @@ int crypto_shash_final(struct shash_desc *desc, u8 *out)
 }
 EXPORT_SYMBOL_GPL(crypto_shash_final);
 
 static int shash_finup_unaligned(struct shash_desc *desc, const u8 *data,
 				 unsigned int len, u8 *out)
 {
 	return shash_update_unaligned(desc, data, len) ?:
 	       shash_final_unaligned(desc, out);
 }
 
+static int shash_default_finup(struct shash_desc *desc, const u8 *data,
+			       unsigned int len, u8 *out)
+{
+	struct shash_alg *shash = crypto_shash_alg(desc->tfm);
+
+	return shash->update(desc, data, len) ?:
+	       shash->final(desc, out);
+}
+
 int crypto_shash_finup(struct shash_desc *desc, const u8 *data,
 		       unsigned int len, u8 *out)
 {
 	struct crypto_shash *tfm = desc->tfm;
 	struct shash_alg *shash = crypto_shash_alg(tfm);
 	unsigned long alignmask = crypto_shash_alignmask(tfm);
 	int err;
 
 	if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
 		struct crypto_istat_hash *istat = shash_get_stat(shash);
@@ -217,20 +226,29 @@ int crypto_shash_finup(struct shash_desc *desc, const u8 *data,
 EXPORT_SYMBOL_GPL(crypto_shash_finup);
 
 static int shash_digest_unaligned(struct shash_desc *desc, const u8 *data,
 				  unsigned int len, u8 *out)
 {
 	return crypto_shash_init(desc) ?:
 	       shash_update_unaligned(desc, data, len) ?:
 	       shash_final_unaligned(desc, out);
 }
 
+static int shash_default_digest(struct shash_desc *desc, const u8 *data,
+				unsigned int len, u8 *out)
+{
+	struct shash_alg *shash = crypto_shash_alg(desc->tfm);
+
+	return shash->init(desc) ?:
+	       shash->finup(desc, data, len, out);
+}
+
 int crypto_shash_digest(struct shash_desc *desc, const u8 *data,
 			unsigned int len, u8 *out)
 {
 	struct crypto_shash *tfm = desc->tfm;
 	struct shash_alg *shash = crypto_shash_alg(tfm);
 	unsigned long alignmask = crypto_shash_alignmask(tfm);
 	int err;
 
 	if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
 		struct crypto_istat_hash *istat = shash_get_stat(shash);
@@ -649,23 +667,23 @@ static int shash_prepare_alg(struct shash_alg *alg)
 		return -EINVAL;
 
 	err = hash_prepare_alg(&alg->halg);
 	if (err)
 		return err;
 
 	base->cra_type = &crypto_shash_type;
 	base->cra_flags |= CRYPTO_ALG_TYPE_SHASH;
 
 	if (!alg->finup)
-		alg->finup = shash_finup_unaligned;
+		alg->finup = shash_default_finup;
 	if (!alg->digest)
-		alg->digest = shash_digest_unaligned;
+		alg->digest = shash_default_digest;
 	if (!alg->export) {
 		alg->export = shash_default_export;
 		alg->import = shash_default_import;
 		alg->halg.statesize = alg->descsize;
 	}
 	if (!alg->setkey)
 		alg->setkey = shash_no_setkey;
 
 	return 0;
 }
-- 
2.42.0


* [PATCH 2/2] crypto: shash - fold shash_digest_unaligned() into crypto_shash_digest()
From: Eric Biggers @ 2023-10-09  7:32 UTC
  To: linux-crypto

From: Eric Biggers <ebiggers@google.com>

Fold shash_digest_unaligned() into its only remaining caller.  Also,
avoid a redundant check of CRYPTO_TFM_NEED_KEY by replacing the call to
crypto_shash_init() with shash->init(desc).  Finally, replace
shash_update_unaligned() + shash_final_unaligned() with
shash_finup_unaligned(), which does exactly that.
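
For reference, the check being deduplicated lives in
crypto_shash_init(), which at this point in the tree looks roughly like
the sketch below (simplified); crypto_shash_digest() performs the same
CRYPTO_TFM_NEED_KEY test before dispatching, so the fallback path can
call shash->init() directly:

	int crypto_shash_init(struct shash_desc *desc)
	{
		struct crypto_shash *tfm = desc->tfm;

		/* crypto_shash_digest() already made this check, so
		 * calling shash->init() instead avoids repeating it. */
		if (crypto_shash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
			return -ENOKEY;

		return crypto_shash_alg(tfm)->init(desc);
	}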

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/shash.c | 11 ++---------
 1 file changed, 2 insertions(+), 9 deletions(-)

diff --git a/crypto/shash.c b/crypto/shash.c
index d99dc2f94c65f..15fee57cca8ef 100644
--- a/crypto/shash.c
+++ b/crypto/shash.c
@@ -218,28 +218,20 @@ int crypto_shash_finup(struct shash_desc *desc, const u8 *data,
 	if (((unsigned long)data | (unsigned long)out) & alignmask)
 		err = shash_finup_unaligned(desc, data, len, out);
 	else
 	err = shash->finup(desc, data, len, out);

 	return crypto_shash_errstat(shash, err);
 }
 EXPORT_SYMBOL_GPL(crypto_shash_finup);
 
-static int shash_digest_unaligned(struct shash_desc *desc, const u8 *data,
-				  unsigned int len, u8 *out)
-{
-	return crypto_shash_init(desc) ?:
-	       shash_update_unaligned(desc, data, len) ?:
-	       shash_final_unaligned(desc, out);
-}
-
 static int shash_default_digest(struct shash_desc *desc, const u8 *data,
 				unsigned int len, u8 *out)
 {
 	struct shash_alg *shash = crypto_shash_alg(desc->tfm);
 
 	return shash->init(desc) ?:
 	       shash->finup(desc, data, len, out);
 }
 
 int crypto_shash_digest(struct shash_desc *desc, const u8 *data,
@@ -253,21 +245,22 @@ int crypto_shash_digest(struct shash_desc *desc, const u8 *data,
 	if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
 		struct crypto_istat_hash *istat = shash_get_stat(shash);
 
 		atomic64_inc(&istat->hash_cnt);
 		atomic64_add(len, &istat->hash_tlen);
 	}
 
 	if (crypto_shash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
 		err = -ENOKEY;
 	else if (((unsigned long)data | (unsigned long)out) & alignmask)
-		err = shash_digest_unaligned(desc, data, len, out);
+		err = shash->init(desc) ?:
+		      shash_finup_unaligned(desc, data, len, out);
 	else
 		err = shash->digest(desc, data, len, out);
 
 	return crypto_shash_errstat(shash, err);
 }
 EXPORT_SYMBOL_GPL(crypto_shash_digest);
 
 int crypto_shash_tfm_digest(struct crypto_shash *tfm, const u8 *data,
 			    unsigned int len, u8 *out)
 {
-- 
2.42.0


* Re: [PATCH 0/2] crypto: shash optimizations
From: Herbert Xu @ 2023-10-20  5:51 UTC
  To: Eric Biggers; +Cc: linux-crypto

Eric Biggers <ebiggers@kernel.org> wrote:
> This series fixes some inefficiencies in crypto_shash_digest() and
> crypto_shash_finup(), specifically in cases where the algorithm doesn't
> implement ->digest or ->finup, respectively.
> 
> Eric Biggers (2):
>  crypto: shash - optimize the default digest and finup
>  crypto: shash - fold shash_digest_unaligned() into
>    crypto_shash_digest()
> 
> crypto/shash.c | 27 +++++++++++++++++++--------
> 1 file changed, 19 insertions(+), 8 deletions(-)
> 
> 
> base-commit: 8468516f9f93a41dc65158b6428a1a1039c68f20

All applied.  Thanks.
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
