* [v4 PATCH 00/11] crypto: Add partial block API and hmac to ahash
@ 2025-05-15 5:54 Herbert Xu
2025-05-15 5:54 ` [v4 PATCH 01/11] crypto: hash - Move core export and import into internal/hash.h Herbert Xu
` (11 more replies)
0 siblings, 12 replies; 21+ messages in thread
From: Herbert Xu @ 2025-05-15 5:54 UTC (permalink / raw)
To: Linux Crypto Mailing List
v4 swaps the names of the hmac shash and ahash instances: the
ahash instance now bears the "hmac" name while the shash instance
gets the driver name "hmac-shash". A patch is added to silence the
testmgr warning on -EEXIST (-17) for shash allocations, as that
error is expected.
This series adds partial block handling to the ahash API so that
drivers do not have to deal with partial blocks themselves. It also
adds hmac ahash support, so that drivers that implement hmac purely
in software can be simplified.
A new test has been added to testmgr to ensure that all implementations
of a given algorithm use the same export format. As a transitional
measure, only algorithms that declare themselves as block-only, or
that provide export_core/import_core hooks, will be tested.
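The export-format check reduces to an export/import round trip across implementations. Below is a minimal user-space sketch of the idea, with all names hypothetical (the real test lives in crypto/testmgr.c and uses the kernel crypto API): two implementations share an export format when one can resume from the other's exported state and still match a single-implementation run.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical shared export format for a toy "hash". */
struct toy_export {
	uint32_t sum;
	uint64_t count;
};

/* "Generic" implementation. */
struct gen_ctx { uint32_t sum; uint64_t count; };

static void gen_update(struct gen_ctx *c, const uint8_t *p, size_t n)
{
	while (n--) { c->sum += *p++; c->count++; }
}

static void gen_export(const struct gen_ctx *c, struct toy_export *st)
{
	st->sum = c->sum;
	st->count = c->count;
}

/* "Driver" implementation with a different internal layout but the
 * same export format, so it can resume the generic state. */
struct drv_ctx { uint64_t count; uint32_t sum; };

static void drv_import(struct drv_ctx *c, const struct toy_export *st)
{
	c->sum = st->sum;
	c->count = st->count;
}

static void drv_update(struct drv_ctx *c, const uint8_t *p, size_t n)
{
	while (n--) { c->sum += *p++; c->count++; }
}

static uint32_t drv_final(const struct drv_ctx *c)
{
	return c->sum ^ (uint32_t)c->count;
}
```

A cross-implementation run (half the data through one, half through the other) must then agree with a run done entirely by one implementation; that is the invariant the new testmgr check enforces.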
Herbert Xu (11):
crypto: hash - Move core export and import into internal/hash.h
crypto: hash - Add export_core and import_core hooks
crypto: ahash - Handle partial blocks in API
crypto: hmac - Zero shash desc in setkey
crypto: hmac - Add export_core and import_core
crypto: shash - Set reqsize in shash_alg
crypto: algapi - Add driver template support to crypto_inst_setname
crypto: testmgr - Ignore EEXIST on shash allocation
crypto: hmac - Add ahash support
crypto: testmgr - Use ahash for generic tfm
crypto: testmgr - Add hash export format testing
crypto/ahash.c | 572 ++++++++++++++++-----------------
crypto/algapi.c | 8 +-
crypto/hmac.c | 392 +++++++++++++++++++---
crypto/shash.c | 46 ++-
crypto/testmgr.c | 134 ++++++--
crypto/testmgr.h | 2 +
include/crypto/algapi.h | 12 +-
include/crypto/hash.h | 73 ++---
include/crypto/internal/hash.h | 66 ++++
9 files changed, 883 insertions(+), 422 deletions(-)
--
2.39.5
* [v4 PATCH 01/11] crypto: hash - Move core export and import into internal/hash.h
2025-05-15 5:54 [v4 PATCH 00/11] crypto: Add partial block API and hmac to ahash Herbert Xu
@ 2025-05-15 5:54 ` Herbert Xu
2025-05-15 5:54 ` [v4 PATCH 02/11] crypto: hash - Add export_core and import_core hooks Herbert Xu
` (10 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Herbert Xu @ 2025-05-15 5:54 UTC (permalink / raw)
To: Linux Crypto Mailing List
The core export and import functions are targeted at implementors,
so move them into internal/hash.h.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
include/crypto/hash.h | 48 ----------------------------------
include/crypto/internal/hash.h | 48 ++++++++++++++++++++++++++++++++++
2 files changed, 48 insertions(+), 48 deletions(-)
diff --git a/include/crypto/hash.h b/include/crypto/hash.h
index 1760662ad70a..9fc9daaaaab4 100644
--- a/include/crypto/hash.h
+++ b/include/crypto/hash.h
@@ -506,18 +506,6 @@ int crypto_ahash_digest(struct ahash_request *req);
*/
int crypto_ahash_export(struct ahash_request *req, void *out);
-/**
- * crypto_ahash_export_core() - extract core state for message digest
- * @req: reference to the ahash_request handle whose state is exported
- * @out: output buffer of sufficient size that can hold the hash state
- *
- * Export the hash state without the partial block buffer.
- *
- * Context: Softirq or process context.
- * Return: 0 if the export creation was successful; < 0 if an error occurred
- */
-int crypto_ahash_export_core(struct ahash_request *req, void *out);
-
/**
* crypto_ahash_import() - import message digest state
* @req: reference to ahash_request handle the state is imported into
@@ -531,18 +519,6 @@ int crypto_ahash_export_core(struct ahash_request *req, void *out);
*/
int crypto_ahash_import(struct ahash_request *req, const void *in);
-/**
- * crypto_ahash_import_core() - import core state
- * @req: reference to ahash_request handle the state is imported into
- * @in: buffer holding the state
- *
- * Import the hash state without the partial block buffer.
- *
- * Context: Softirq or process context.
- * Return: 0 if the import was successful; < 0 if an error occurred
- */
-int crypto_ahash_import_core(struct ahash_request *req, const void *in);
-
/**
* crypto_ahash_init() - (re)initialize message digest handle
* @req: ahash_request handle that already is initialized with all necessary
@@ -933,18 +909,6 @@ int crypto_hash_digest(struct crypto_ahash *tfm, const u8 *data,
*/
int crypto_shash_export(struct shash_desc *desc, void *out);
-/**
- * crypto_shash_export_core() - extract core state for message digest
- * @desc: reference to the operational state handle whose state is exported
- * @out: output buffer of sufficient size that can hold the hash state
- *
- * Export the hash state without the partial block buffer.
- *
- * Context: Softirq or process context.
- * Return: 0 if the export creation was successful; < 0 if an error occurred
- */
-int crypto_shash_export_core(struct shash_desc *desc, void *out);
-
/**
* crypto_shash_import() - import operational state
* @desc: reference to the operational state handle the state imported into
@@ -959,18 +923,6 @@ int crypto_shash_export_core(struct shash_desc *desc, void *out);
*/
int crypto_shash_import(struct shash_desc *desc, const void *in);
-/**
- * crypto_shash_import_core() - import core state
- * @desc: reference to the operational state handle the state imported into
- * @in: buffer holding the state
- *
- * Import the hash state without the partial block buffer.
- *
- * Context: Softirq or process context.
- * Return: 0 if the import was successful; < 0 if an error occurred
- */
-int crypto_shash_import_core(struct shash_desc *desc, const void *in);
-
/**
* crypto_shash_init() - (re)initialize message digest
* @desc: operational state handle that is already filled
diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h
index f2bbdb74e11a..ef5ea75ac5c8 100644
--- a/include/crypto/internal/hash.h
+++ b/include/crypto/internal/hash.h
@@ -305,5 +305,53 @@ static inline unsigned int crypto_shash_coresize(struct crypto_shash *tfm)
#define HASH_REQUEST_ZERO(name) \
memzero_explicit(__##name##_req, sizeof(__##name##_req))
+/**
+ * crypto_ahash_export_core() - extract core state for message digest
+ * @req: reference to the ahash_request handle whose state is exported
+ * @out: output buffer of sufficient size that can hold the hash state
+ *
+ * Export the hash state without the partial block buffer.
+ *
+ * Context: Softirq or process context.
+ * Return: 0 if the export creation was successful; < 0 if an error occurred
+ */
+int crypto_ahash_export_core(struct ahash_request *req, void *out);
+
+/**
+ * crypto_ahash_import_core() - import core state
+ * @req: reference to ahash_request handle the state is imported into
+ * @in: buffer holding the state
+ *
+ * Import the hash state without the partial block buffer.
+ *
+ * Context: Softirq or process context.
+ * Return: 0 if the import was successful; < 0 if an error occurred
+ */
+int crypto_ahash_import_core(struct ahash_request *req, const void *in);
+
+/**
+ * crypto_shash_export_core() - extract core state for message digest
+ * @desc: reference to the operational state handle whose state is exported
+ * @out: output buffer of sufficient size that can hold the hash state
+ *
+ * Export the hash state without the partial block buffer.
+ *
+ * Context: Softirq or process context.
+ * Return: 0 if the export creation was successful; < 0 if an error occurred
+ */
+int crypto_shash_export_core(struct shash_desc *desc, void *out);
+
+/**
+ * crypto_shash_import_core() - import core state
+ * @desc: reference to the operational state handle the state imported into
+ * @in: buffer holding the state
+ *
+ * Import the hash state without the partial block buffer.
+ *
+ * Context: Softirq or process context.
+ * Return: 0 if the import was successful; < 0 if an error occurred
+ */
+int crypto_shash_import_core(struct shash_desc *desc, const void *in);
+
#endif /* _CRYPTO_INTERNAL_HASH_H */
--
2.39.5
* [v4 PATCH 02/11] crypto: hash - Add export_core and import_core hooks
2025-05-15 5:54 [v4 PATCH 00/11] crypto: Add partial block API and hmac to ahash Herbert Xu
2025-05-15 5:54 ` [v4 PATCH 01/11] crypto: hash - Move core export and import into internal/hash.h Herbert Xu
@ 2025-05-15 5:54 ` Herbert Xu
2025-05-15 5:54 ` [v4 PATCH 03/11] crypto: ahash - Handle partial blocks in API Herbert Xu
` (9 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Herbert Xu @ 2025-05-15 5:54 UTC (permalink / raw)
To: Linux Crypto Mailing List
Add export_core and import_core hooks. These are intended to be
used by algorithms that wrap block-only algorithms but are not
themselves block-only, e.g., hmac.
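The dispatch this patch introduces in the shash core falls back to a flat copy of the core state when an algorithm installs no hook. Below is a minimal user-space sketch of that optional-hook pattern, with all names hypothetical stand-ins for the kernel structures:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical descriptor: the core state is a prefix of the buffer,
 * mirroring how block-only shash algorithms lay out their state. */
struct toy_desc {
	unsigned char state[16];
};

struct toy_alg {
	unsigned int core_size;    /* state bytes excluding the partial block */
	/* Optional hook; wrapper algorithms (e.g. hmac) install their own. */
	int (*export_core)(struct toy_desc *desc, void *out);
};

/* Analogue of crypto_shash_export_core(): use the hook if present,
 * otherwise fall back to copying the core state directly. */
static int toy_export_core(struct toy_alg *alg, struct toy_desc *desc,
			   void *out)
{
	if (!alg->export_core) {
		memcpy(out, desc->state, alg->core_size);
		return 0;
	}
	return alg->export_core(desc, out);
}
```

The import side is symmetric: with no hook installed, the core state is copied back into the descriptor prefix.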
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
crypto/ahash.c | 22 ++++++++++++++---
crypto/shash.c | 44 +++++++++++++++++++++++++++-------
include/crypto/hash.h | 10 ++++++++
include/crypto/internal/hash.h | 3 +++
4 files changed, 68 insertions(+), 11 deletions(-)
diff --git a/crypto/ahash.c b/crypto/ahash.c
index 344bf1b43e71..7d96c76731ef 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -704,7 +704,7 @@ int crypto_ahash_export_core(struct ahash_request *req, void *out)
if (likely(tfm->using_shash))
return crypto_shash_export_core(ahash_request_ctx(req), out);
- return crypto_ahash_alg(tfm)->export(req, out);
+ return crypto_ahash_alg(tfm)->export_core(req, out);
}
EXPORT_SYMBOL_GPL(crypto_ahash_export_core);
@@ -727,7 +727,7 @@ int crypto_ahash_import_core(struct ahash_request *req, const void *in)
in);
if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
return -ENOKEY;
- return crypto_ahash_alg(tfm)->import(req, in);
+ return crypto_ahash_alg(tfm)->import_core(req, in);
}
EXPORT_SYMBOL_GPL(crypto_ahash_import_core);
@@ -739,7 +739,7 @@ int crypto_ahash_import(struct ahash_request *req, const void *in)
return crypto_shash_import(prepare_shash_desc(req, tfm), in);
if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
return -ENOKEY;
- return crypto_ahash_import_core(req, in);
+ return crypto_ahash_alg(tfm)->import(req, in);
}
EXPORT_SYMBOL_GPL(crypto_ahash_import);
@@ -971,6 +971,16 @@ struct crypto_ahash *crypto_clone_ahash(struct crypto_ahash *hash)
}
EXPORT_SYMBOL_GPL(crypto_clone_ahash);
+static int ahash_default_export_core(struct ahash_request *req, void *out)
+{
+ return -ENOSYS;
+}
+
+static int ahash_default_import_core(struct ahash_request *req, const void *in)
+{
+ return -ENOSYS;
+}
+
static int ahash_prepare_alg(struct ahash_alg *alg)
{
struct crypto_alg *base = &alg->halg.base;
@@ -996,6 +1006,12 @@ static int ahash_prepare_alg(struct ahash_alg *alg)
if (!alg->setkey)
alg->setkey = ahash_nosetkey;
+ if (!alg->export_core || !alg->import_core) {
+ alg->export_core = ahash_default_export_core;
+ alg->import_core = ahash_default_import_core;
+ base->cra_flags |= CRYPTO_AHASH_ALG_NO_EXPORT_CORE;
+ }
+
return 0;
}
diff --git a/crypto/shash.c b/crypto/shash.c
index dee391d47f51..5bc74a72d5ad 100644
--- a/crypto/shash.c
+++ b/crypto/shash.c
@@ -203,9 +203,10 @@ int crypto_shash_tfm_digest(struct crypto_shash *tfm, const u8 *data,
}
EXPORT_SYMBOL_GPL(crypto_shash_tfm_digest);
-int crypto_shash_export_core(struct shash_desc *desc, void *out)
+static int __crypto_shash_export(struct shash_desc *desc, void *out,
+ int (*export)(struct shash_desc *desc,
+ void *out))
{
- int (*export)(struct shash_desc *desc, void *out);
struct crypto_shash *tfm = desc->tfm;
u8 *buf = shash_desc_ctx(desc);
unsigned int plen, ss;
@@ -214,7 +215,6 @@ int crypto_shash_export_core(struct shash_desc *desc, void *out)
ss = crypto_shash_statesize(tfm);
if (crypto_shash_block_only(tfm))
ss -= plen;
- export = crypto_shash_alg(tfm)->export;
if (!export) {
memcpy(out, buf, ss);
return 0;
@@ -222,6 +222,12 @@ int crypto_shash_export_core(struct shash_desc *desc, void *out)
return export(desc, out);
}
+
+int crypto_shash_export_core(struct shash_desc *desc, void *out)
+{
+ return __crypto_shash_export(desc, out,
+ crypto_shash_alg(desc->tfm)->export_core);
+}
EXPORT_SYMBOL_GPL(crypto_shash_export_core);
int crypto_shash_export(struct shash_desc *desc, void *out)
@@ -236,13 +242,14 @@ int crypto_shash_export(struct shash_desc *desc, void *out)
memcpy(out + ss - plen, buf + descsize - plen, plen);
}
- return crypto_shash_export_core(desc, out);
+ return __crypto_shash_export(desc, out, crypto_shash_alg(tfm)->export);
}
EXPORT_SYMBOL_GPL(crypto_shash_export);
-int crypto_shash_import_core(struct shash_desc *desc, const void *in)
+static int __crypto_shash_import(struct shash_desc *desc, const void *in,
+ int (*import)(struct shash_desc *desc,
+ const void *in))
{
- int (*import)(struct shash_desc *desc, const void *in);
struct crypto_shash *tfm = desc->tfm;
unsigned int descsize, plen, ss;
u8 *buf = shash_desc_ctx(desc);
@@ -256,7 +263,6 @@ int crypto_shash_import_core(struct shash_desc *desc, const void *in)
buf[descsize - 1] = 0;
if (crypto_shash_block_only(tfm))
ss -= plen;
- import = crypto_shash_alg(tfm)->import;
if (!import) {
memcpy(buf, in, ss);
return 0;
@@ -264,6 +270,12 @@ int crypto_shash_import_core(struct shash_desc *desc, const void *in)
return import(desc, in);
}
+
+int crypto_shash_import_core(struct shash_desc *desc, const void *in)
+{
+ return __crypto_shash_import(desc, in,
+ crypto_shash_alg(desc->tfm)->import_core);
+}
EXPORT_SYMBOL_GPL(crypto_shash_import_core);
int crypto_shash_import(struct shash_desc *desc, const void *in)
@@ -271,7 +283,7 @@ int crypto_shash_import(struct shash_desc *desc, const void *in)
struct crypto_shash *tfm = desc->tfm;
int err;
- err = crypto_shash_import_core(desc, in);
+ err = __crypto_shash_import(desc, in, crypto_shash_alg(tfm)->import);
if (crypto_shash_block_only(tfm)) {
unsigned int plen = crypto_shash_blocksize(tfm) + 1;
unsigned int descsize = crypto_shash_descsize(tfm);
@@ -436,6 +448,16 @@ int hash_prepare_alg(struct hash_alg_common *alg)
return 0;
}
+static int shash_default_export_core(struct shash_desc *desc, void *out)
+{
+ return -ENOSYS;
+}
+
+static int shash_default_import_core(struct shash_desc *desc, const void *in)
+{
+ return -ENOSYS;
+}
+
static int shash_prepare_alg(struct shash_alg *alg)
{
struct crypto_alg *base = &alg->halg.base;
@@ -476,6 +498,12 @@ static int shash_prepare_alg(struct shash_alg *alg)
BUILD_BUG_ON(MAX_ALGAPI_BLOCKSIZE >= 256);
alg->descsize += base->cra_blocksize + 1;
alg->statesize += base->cra_blocksize + 1;
+ alg->export_core = alg->export;
+ alg->import_core = alg->import;
+ } else if (!alg->export_core || !alg->import_core) {
+ alg->export_core = shash_default_export_core;
+ alg->import_core = shash_default_import_core;
+ base->cra_flags |= CRYPTO_AHASH_ALG_NO_EXPORT_CORE;
}
if (alg->descsize > HASH_MAX_DESCSIZE)
diff --git a/include/crypto/hash.h b/include/crypto/hash.h
index 9fc9daaaaab4..bf177cf9be10 100644
--- a/include/crypto/hash.h
+++ b/include/crypto/hash.h
@@ -129,6 +129,10 @@ struct ahash_request {
* data so the transformation can continue from this point onward. No
* data processing happens at this point. Driver must not use
* req->result.
+ * @export_core: Export partial state without partial block. Only defined
+ * for algorithms that are not block-only.
+ * @import_core: Import partial state without partial block. Only defined
+ * for algorithms that are not block-only.
* @init_tfm: Initialize the cryptographic transformation object.
* This function is called only once at the instantiation
* time, right after the transformation context was
@@ -151,6 +155,8 @@ struct ahash_alg {
int (*digest)(struct ahash_request *req);
int (*export)(struct ahash_request *req, void *out);
int (*import)(struct ahash_request *req, const void *in);
+ int (*export_core)(struct ahash_request *req, void *out);
+ int (*import_core)(struct ahash_request *req, const void *in);
int (*setkey)(struct crypto_ahash *tfm, const u8 *key,
unsigned int keylen);
int (*init_tfm)(struct crypto_ahash *tfm);
@@ -200,6 +206,8 @@ struct shash_desc {
* @digest: see struct ahash_alg
* @export: see struct ahash_alg
* @import: see struct ahash_alg
+ * @export_core: see struct ahash_alg
+ * @import_core: see struct ahash_alg
* @setkey: see struct ahash_alg
* @init_tfm: Initialize the cryptographic transformation object.
* This function is called only once at the instantiation
@@ -230,6 +238,8 @@ struct shash_alg {
unsigned int len, u8 *out);
int (*export)(struct shash_desc *desc, void *out);
int (*import)(struct shash_desc *desc, const void *in);
+ int (*export_core)(struct shash_desc *desc, void *out);
+ int (*import_core)(struct shash_desc *desc, const void *in);
int (*setkey)(struct crypto_shash *tfm, const u8 *key,
unsigned int keylen);
int (*init_tfm)(struct crypto_shash *tfm);
diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h
index ef5ea75ac5c8..e9de2bc34a10 100644
--- a/include/crypto/internal/hash.h
+++ b/include/crypto/internal/hash.h
@@ -20,6 +20,9 @@
/* Set this bit if finup can deal with multiple blocks. */
#define CRYPTO_AHASH_ALG_FINUP_MAX 0x04000000
+/* This bit is set by the Crypto API if export_core is not supported. */
+#define CRYPTO_AHASH_ALG_NO_EXPORT_CORE 0x08000000
+
#define HASH_FBREQ_ON_STACK(name, req) \
char __##name##_req[sizeof(struct ahash_request) + \
MAX_SYNC_HASH_REQSIZE] CRYPTO_MINALIGN_ATTR; \
--
2.39.5
* [v4 PATCH 03/11] crypto: ahash - Handle partial blocks in API
2025-05-15 5:54 [v4 PATCH 00/11] crypto: Add partial block API and hmac to ahash Herbert Xu
2025-05-15 5:54 ` [v4 PATCH 01/11] crypto: hash - Move core export and import into internal/hash.h Herbert Xu
2025-05-15 5:54 ` [v4 PATCH 02/11] crypto: hash - Add export_core and import_core hooks Herbert Xu
@ 2025-05-15 5:54 ` Herbert Xu
2025-05-15 5:54 ` [v4 PATCH 04/11] crypto: hmac - Zero shash desc in setkey Herbert Xu
` (8 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Herbert Xu @ 2025-05-15 5:54 UTC (permalink / raw)
To: Linux Crypto Mailing List
Provide an option to handle partial blocks in the ahash API.
Almost every hash algorithm has a block size and is only able
to hash partial blocks on finalisation.
As a first step, disable virtual address support for algorithms
with state sizes larger than HASH_MAX_STATESIZE. This is OK because
virtual addresses are currently only used by synchronous fallbacks.
It means ahash_do_req_chain only needs to handle synchronous
fallbacks, removing the complexity of saving the request state.
Also move the saved request state into the ahash_request object,
as nesting is no longer possible.
Add a scatterlist to ahash_request to store the partial block.
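The buffering the API now performs on behalf of drivers follows the usual partial-block pattern: stash any trailing bytes between update calls and only ever hand whole blocks to the driver. Below is a minimal user-space sketch of that pattern, with hypothetical names and a toy compression function standing in for the driver's block hook:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 8

struct toy_state {
	uint32_t sum;               /* running "digest" over full blocks */
	uint8_t buf[BLOCK_SIZE];    /* partial block carried between calls */
	unsigned int buflen;        /* bytes currently buffered */
};

/* The "driver" hook: only ever called with whole blocks. */
static void toy_compress(struct toy_state *st, const uint8_t *block)
{
	for (int i = 0; i < BLOCK_SIZE; i++)
		st->sum = st->sum * 31 + block[i];
}

static void toy_init(struct toy_state *st)
{
	memset(st, 0, sizeof(*st));
}

/* The "API" layer: buffers stragglers so the driver never sees them. */
static void toy_update(struct toy_state *st, const uint8_t *data, size_t len)
{
	if (st->buflen) {
		size_t fill = BLOCK_SIZE - st->buflen;

		if (fill > len)
			fill = len;
		memcpy(st->buf + st->buflen, data, fill);
		st->buflen += fill;
		data += fill;
		len -= fill;
		if (st->buflen == BLOCK_SIZE) {
			toy_compress(st, st->buf);
			st->buflen = 0;
		}
	}
	while (len >= BLOCK_SIZE) {
		toy_compress(st, data);
		data += BLOCK_SIZE;
		len -= BLOCK_SIZE;
	}
	memcpy(st->buf + st->buflen, data, len);   /* keep the remainder */
	st->buflen += len;
}

static uint32_t toy_final(struct toy_state *st)
{
	/* Pad the trailing partial block and run it through the hook. */
	memset(st->buf + st->buflen, 0, BLOCK_SIZE - st->buflen);
	toy_compress(st, st->buf);
	return st->sum;
}
```

In the patch itself this buffer lives at the end of the request context (with the byte count in the last byte), and the buffered bytes are prepended to the caller's data via the new scatterlist head rather than by copying, but the invariant is the same: however the input is split across update calls, the result is unchanged.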
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
crypto/ahash.c | 541 ++++++++++++++++++++----------------------
include/crypto/hash.h | 12 +-
2 files changed, 265 insertions(+), 288 deletions(-)
diff --git a/crypto/ahash.c b/crypto/ahash.c
index 7d96c76731ef..cf8bbe7e54c0 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -12,11 +12,13 @@
* Copyright (c) 2008 Loc Ho <lho@amcc.com>
*/
+#include <crypto/scatterwalk.h>
#include <linux/cryptouser.h>
#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/module.h>
+#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/seq_file.h>
#include <linux/string.h>
@@ -40,24 +42,47 @@ struct crypto_hash_walk {
struct scatterlist *sg;
};
-struct ahash_save_req_state {
- struct ahash_request *req0;
- crypto_completion_t compl;
- void *data;
- struct scatterlist sg;
- const u8 *src;
- u8 *page;
- unsigned int offset;
- unsigned int nbytes;
- bool update;
-};
-
-static int ahash_save_req(struct ahash_request *req, crypto_completion_t cplt);
-static void ahash_restore_req(struct ahash_request *req);
-static void ahash_def_finup_done1(void *data, int err);
-static int ahash_def_finup_finish1(struct ahash_request *req, int err);
static int ahash_def_finup(struct ahash_request *req);
+static inline bool crypto_ahash_block_only(struct crypto_ahash *tfm)
+{
+ return crypto_ahash_alg(tfm)->halg.base.cra_flags &
+ CRYPTO_AHASH_ALG_BLOCK_ONLY;
+}
+
+static inline bool crypto_ahash_final_nonzero(struct crypto_ahash *tfm)
+{
+ return crypto_ahash_alg(tfm)->halg.base.cra_flags &
+ CRYPTO_AHASH_ALG_FINAL_NONZERO;
+}
+
+static inline bool crypto_ahash_need_fallback(struct crypto_ahash *tfm)
+{
+ return crypto_ahash_alg(tfm)->halg.base.cra_flags &
+ CRYPTO_ALG_NEED_FALLBACK;
+}
+
+static inline void ahash_op_done(void *data, int err,
+ int (*finish)(struct ahash_request *, int))
+{
+ struct ahash_request *areq = data;
+ crypto_completion_t compl;
+
+ compl = areq->saved_complete;
+ data = areq->saved_data;
+ if (err == -EINPROGRESS)
+ goto out;
+
+ areq->base.flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
+
+ err = finish(areq, err);
+ if (err == -EINPROGRESS || err == -EBUSY)
+ return;
+
+out:
+ compl(data, err);
+}
+
static int hash_walk_next(struct crypto_hash_walk *walk)
{
unsigned int offset = walk->offset;
@@ -298,7 +323,7 @@ int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
int err;
err = alg->setkey(tfm, key, keylen);
- if (!err && ahash_is_async(tfm))
+ if (!err && crypto_ahash_need_fallback(tfm))
err = crypto_ahash_setkey(crypto_ahash_fb(tfm),
key, keylen);
if (unlikely(err)) {
@@ -311,159 +336,47 @@ int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
}
EXPORT_SYMBOL_GPL(crypto_ahash_setkey);
-static int ahash_reqchain_virt(struct ahash_save_req_state *state,
- int err, u32 mask)
-{
- struct ahash_request *req = state->req0;
- struct crypto_ahash *tfm;
-
- tfm = crypto_ahash_reqtfm(req);
-
- for (;;) {
- unsigned len = state->nbytes;
-
- if (!state->offset)
- break;
-
- if (state->offset == len || err) {
- u8 *result = req->result;
-
- ahash_request_set_virt(req, state->src, result, len);
- state->offset = 0;
- break;
- }
-
- len -= state->offset;
-
- len = min(PAGE_SIZE, len);
- memcpy(state->page, state->src + state->offset, len);
- state->offset += len;
- req->nbytes = len;
-
- err = crypto_ahash_alg(tfm)->update(req);
- if (err == -EINPROGRESS) {
- if (state->offset < state->nbytes)
- err = -EBUSY;
- break;
- }
-
- if (err == -EBUSY)
- break;
- }
-
- return err;
-}
-
-static int ahash_reqchain_finish(struct ahash_request *req0,
- struct ahash_save_req_state *state,
- int err, u32 mask)
-{
- u8 *page;
-
- err = ahash_reqchain_virt(state, err, mask);
- if (err == -EINPROGRESS || err == -EBUSY)
- goto out;
-
- page = state->page;
- if (page) {
- memset(page, 0, PAGE_SIZE);
- free_page((unsigned long)page);
- }
- ahash_restore_req(req0);
-
-out:
- return err;
-}
-
-static void ahash_reqchain_done(void *data, int err)
-{
- struct ahash_save_req_state *state = data;
- crypto_completion_t compl = state->compl;
-
- data = state->data;
-
- if (err == -EINPROGRESS) {
- if (state->offset < state->nbytes)
- return;
- goto notify;
- }
-
- err = ahash_reqchain_finish(state->req0, state, err,
- CRYPTO_TFM_REQ_MAY_BACKLOG);
- if (err == -EBUSY)
- return;
-
-notify:
- compl(data, err);
-}
-
static int ahash_do_req_chain(struct ahash_request *req,
- int (*op)(struct ahash_request *req))
+ int (*const *op)(struct ahash_request *req))
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- bool update = op == crypto_ahash_alg(tfm)->update;
- struct ahash_save_req_state *state;
- struct ahash_save_req_state state0;
- u8 *page = NULL;
int err;
- if (crypto_ahash_req_virt(tfm) ||
- !update || !ahash_request_isvirt(req))
- return op(req);
+ if (crypto_ahash_req_virt(tfm) || !ahash_request_isvirt(req))
+ return (*op)(req);
- if (update && ahash_request_isvirt(req)) {
- page = (void *)__get_free_page(GFP_ATOMIC);
- err = -ENOMEM;
- if (!page)
- goto out;
- }
+ if (crypto_ahash_statesize(tfm) > HASH_MAX_STATESIZE)
+ return -ENOSYS;
- state = &state0;
- if (ahash_is_async(tfm)) {
- err = ahash_save_req(req, ahash_reqchain_done);
- if (err)
- goto out_free_page;
+ {
+ u8 state[HASH_MAX_STATESIZE];
- state = req->base.data;
- }
+ if (op == &crypto_ahash_alg(tfm)->digest) {
+ ahash_request_set_tfm(req, crypto_ahash_fb(tfm));
+ err = crypto_ahash_digest(req);
+ goto out_no_state;
+ }
- state->update = update;
- state->page = page;
- state->offset = 0;
- state->nbytes = 0;
+ err = crypto_ahash_export(req, state);
+ ahash_request_set_tfm(req, crypto_ahash_fb(tfm));
+ err = err ?: crypto_ahash_import(req, state);
- if (page)
- sg_init_one(&state->sg, page, PAGE_SIZE);
+ if (op == &crypto_ahash_alg(tfm)->finup) {
+ err = err ?: crypto_ahash_finup(req);
+ goto out_no_state;
+ }
- if (update && ahash_request_isvirt(req) && req->nbytes) {
- unsigned len = req->nbytes;
- u8 *result = req->result;
+ err = err ?:
+ crypto_ahash_update(req) ?:
+ crypto_ahash_export(req, state);
- state->src = req->svirt;
- state->nbytes = len;
+ ahash_request_set_tfm(req, tfm);
+ return err ?: crypto_ahash_import(req, state);
- len = min(PAGE_SIZE, len);
-
- memcpy(page, req->svirt, len);
- state->offset = len;
-
- ahash_request_set_crypt(req, &state->sg, result, len);
- }
-
- err = op(req);
- if (err == -EINPROGRESS || err == -EBUSY) {
- if (state->offset < state->nbytes)
- err = -EBUSY;
+out_no_state:
+ ahash_request_set_tfm(req, tfm);
return err;
}
-
- return ahash_reqchain_finish(req, state, err, ~0);
-
-out_free_page:
- free_page((unsigned long)page);
-
-out:
- return err;
}
int crypto_ahash_init(struct ahash_request *req)
@@ -476,144 +389,191 @@ int crypto_ahash_init(struct ahash_request *req)
return -ENOKEY;
if (ahash_req_on_stack(req) && ahash_is_async(tfm))
return -EAGAIN;
- return ahash_do_req_chain(req, crypto_ahash_alg(tfm)->init);
+ if (crypto_ahash_block_only(tfm)) {
+ u8 *buf = ahash_request_ctx(req);
+
+ buf += crypto_ahash_reqsize(tfm) - 1;
+ *buf = 0;
+ }
+ return crypto_ahash_alg(tfm)->init(req);
}
EXPORT_SYMBOL_GPL(crypto_ahash_init);
-static int ahash_save_req(struct ahash_request *req, crypto_completion_t cplt)
+static void ahash_save_req(struct ahash_request *req, crypto_completion_t cplt)
{
- struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
- struct ahash_save_req_state *state;
-
- if (!ahash_is_async(tfm))
- return 0;
-
- state = kmalloc(sizeof(*state), GFP_ATOMIC);
- if (!state)
- return -ENOMEM;
-
- state->compl = req->base.complete;
- state->data = req->base.data;
+ req->saved_complete = req->base.complete;
+ req->saved_data = req->base.data;
req->base.complete = cplt;
- req->base.data = state;
- state->req0 = req;
-
- return 0;
+ req->base.data = req;
}
static void ahash_restore_req(struct ahash_request *req)
{
- struct ahash_save_req_state *state;
- struct crypto_ahash *tfm;
+ req->base.complete = req->saved_complete;
+ req->base.data = req->saved_data;
+}
- tfm = crypto_ahash_reqtfm(req);
- if (!ahash_is_async(tfm))
- return;
+static int ahash_update_finish(struct ahash_request *req, int err)
+{
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ bool nonzero = crypto_ahash_final_nonzero(tfm);
+ int bs = crypto_ahash_blocksize(tfm);
+ u8 *blenp = ahash_request_ctx(req);
+ int blen;
+ u8 *buf;
- state = req->base.data;
+ blenp += crypto_ahash_reqsize(tfm) - 1;
+ blen = *blenp;
+ buf = blenp - bs;
- req->base.complete = state->compl;
- req->base.data = state->data;
- kfree(state);
+ if (blen) {
+ req->src = req->sg_head + 1;
+ if (sg_is_chain(req->src))
+ req->src = sg_chain_ptr(req->src);
+ }
+
+ req->nbytes += nonzero - blen;
+
+ blen = err < 0 ? 0 : err + nonzero;
+ if (ahash_request_isvirt(req))
+ memcpy(buf, req->svirt + req->nbytes - blen, blen);
+ else
+ memcpy_from_sglist(buf, req->src, req->nbytes - blen, blen);
+ *blenp = blen;
+
+ ahash_restore_req(req);
+
+ return err;
+}
+
+static void ahash_update_done(void *data, int err)
+{
+ ahash_op_done(data, err, ahash_update_finish);
}
int crypto_ahash_update(struct ahash_request *req)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ bool nonzero = crypto_ahash_final_nonzero(tfm);
+ int bs = crypto_ahash_blocksize(tfm);
+ u8 *blenp = ahash_request_ctx(req);
+ int blen, err;
+ u8 *buf;
if (likely(tfm->using_shash))
return shash_ahash_update(req, ahash_request_ctx(req));
if (ahash_req_on_stack(req) && ahash_is_async(tfm))
return -EAGAIN;
- return ahash_do_req_chain(req, crypto_ahash_alg(tfm)->update);
+ if (!crypto_ahash_block_only(tfm))
+ return ahash_do_req_chain(req, &crypto_ahash_alg(tfm)->update);
+
+ blenp += crypto_ahash_reqsize(tfm) - 1;
+ blen = *blenp;
+ buf = blenp - bs;
+
+ if (blen + req->nbytes < bs + nonzero) {
+ if (ahash_request_isvirt(req))
+ memcpy(buf + blen, req->svirt, req->nbytes);
+ else
+ memcpy_from_sglist(buf + blen, req->src, 0,
+ req->nbytes);
+
+ *blenp += req->nbytes;
+ return 0;
+ }
+
+ if (blen) {
+ memset(req->sg_head, 0, sizeof(req->sg_head[0]));
+ sg_set_buf(req->sg_head, buf, blen);
+ if (req->src != req->sg_head + 1)
+ sg_chain(req->sg_head, 2, req->src);
+ req->src = req->sg_head;
+ req->nbytes += blen;
+ }
+ req->nbytes -= nonzero;
+
+ ahash_save_req(req, ahash_update_done);
+
+ err = ahash_do_req_chain(req, &crypto_ahash_alg(tfm)->update);
+ if (err == -EINPROGRESS || err == -EBUSY)
+ return err;
+
+ return ahash_update_finish(req, err);
}
EXPORT_SYMBOL_GPL(crypto_ahash_update);
-int crypto_ahash_final(struct ahash_request *req)
+static int ahash_finup_finish(struct ahash_request *req, int err)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ u8 *blenp = ahash_request_ctx(req);
+ int blen;
- if (likely(tfm->using_shash))
- return crypto_shash_final(ahash_request_ctx(req), req->result);
- if (ahash_req_on_stack(req) && ahash_is_async(tfm))
- return -EAGAIN;
- return ahash_do_req_chain(req, crypto_ahash_alg(tfm)->final);
+ blenp += crypto_ahash_reqsize(tfm) - 1;
+ blen = *blenp;
+
+ if (blen) {
+ if (sg_is_last(req->src))
+ req->src = NULL;
+ else {
+ req->src = req->sg_head + 1;
+ if (sg_is_chain(req->src))
+ req->src = sg_chain_ptr(req->src);
+ }
+ req->nbytes -= blen;
+ }
+
+ ahash_restore_req(req);
+
+ return err;
+}
+
+static void ahash_finup_done(void *data, int err)
+{
+ ahash_op_done(data, err, ahash_finup_finish);
}
-EXPORT_SYMBOL_GPL(crypto_ahash_final);
int crypto_ahash_finup(struct ahash_request *req)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ int bs = crypto_ahash_blocksize(tfm);
+ u8 *blenp = ahash_request_ctx(req);
+ int blen, err;
+ u8 *buf;
if (likely(tfm->using_shash))
return shash_ahash_finup(req, ahash_request_ctx(req));
if (ahash_req_on_stack(req) && ahash_is_async(tfm))
return -EAGAIN;
- if (!crypto_ahash_alg(tfm)->finup ||
- (!crypto_ahash_req_virt(tfm) && ahash_request_isvirt(req)))
+ if (!crypto_ahash_alg(tfm)->finup)
return ahash_def_finup(req);
- return ahash_do_req_chain(req, crypto_ahash_alg(tfm)->finup);
+ if (!crypto_ahash_block_only(tfm))
+ return ahash_do_req_chain(req, &crypto_ahash_alg(tfm)->finup);
+
+ blenp += crypto_ahash_reqsize(tfm) - 1;
+ blen = *blenp;
+ buf = blenp - bs;
+
+ if (blen) {
+ memset(req->sg_head, 0, sizeof(req->sg_head[0]));
+ sg_set_buf(req->sg_head, buf, blen);
+ if (!req->src)
+ sg_mark_end(req->sg_head);
+ else if (req->src != req->sg_head + 1)
+ sg_chain(req->sg_head, 2, req->src);
+ req->src = req->sg_head;
+ req->nbytes += blen;
+ }
+
+ ahash_save_req(req, ahash_finup_done);
+
+ err = ahash_do_req_chain(req, &crypto_ahash_alg(tfm)->finup);
+ if (err == -EINPROGRESS || err == -EBUSY)
+ return err;
+
+ return ahash_finup_finish(req, err);
}
EXPORT_SYMBOL_GPL(crypto_ahash_finup);
-static int ahash_def_digest_finish(struct ahash_request *req, int err)
-{
- struct crypto_ahash *tfm;
-
- if (err)
- goto out;
-
- tfm = crypto_ahash_reqtfm(req);
- if (ahash_is_async(tfm))
- req->base.complete = ahash_def_finup_done1;
-
- err = crypto_ahash_update(req);
- if (err == -EINPROGRESS || err == -EBUSY)
- return err;
-
- return ahash_def_finup_finish1(req, err);
-
-out:
- ahash_restore_req(req);
- return err;
-}
-
-static void ahash_def_digest_done(void *data, int err)
-{
- struct ahash_save_req_state *state0 = data;
- struct ahash_save_req_state state;
- struct ahash_request *areq;
-
- state = *state0;
- areq = state.req0;
- if (err == -EINPROGRESS)
- goto out;
-
- areq->base.flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
-
- err = ahash_def_digest_finish(areq, err);
- if (err == -EINPROGRESS || err == -EBUSY)
- return;
-
-out:
- state.compl(state.data, err);
-}
-
-static int ahash_def_digest(struct ahash_request *req)
-{
- int err;
-
- err = ahash_save_req(req, ahash_def_digest_done);
- if (err)
- return err;
-
- err = crypto_ahash_init(req);
- if (err == -EINPROGRESS || err == -EBUSY)
- return err;
-
- return ahash_def_digest_finish(req, err);
-}
-
int crypto_ahash_digest(struct ahash_request *req)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
@@ -622,18 +582,15 @@ int crypto_ahash_digest(struct ahash_request *req)
return shash_ahash_digest(req, prepare_shash_desc(req, tfm));
if (ahash_req_on_stack(req) && ahash_is_async(tfm))
return -EAGAIN;
- if (!crypto_ahash_req_virt(tfm) && ahash_request_isvirt(req))
- return ahash_def_digest(req);
if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
return -ENOKEY;
- return ahash_do_req_chain(req, crypto_ahash_alg(tfm)->digest);
+ return ahash_do_req_chain(req, &crypto_ahash_alg(tfm)->digest);
}
EXPORT_SYMBOL_GPL(crypto_ahash_digest);
static void ahash_def_finup_done2(void *data, int err)
{
- struct ahash_save_req_state *state = data;
- struct ahash_request *areq = state->req0;
+ struct ahash_request *areq = data;
if (err == -EINPROGRESS)
return;
@@ -644,14 +601,10 @@ static void ahash_def_finup_done2(void *data, int err)
static int ahash_def_finup_finish1(struct ahash_request *req, int err)
{
- struct crypto_ahash *tfm;
-
if (err)
goto out;
- tfm = crypto_ahash_reqtfm(req);
- if (ahash_is_async(tfm))
- req->base.complete = ahash_def_finup_done2;
+ req->base.complete = ahash_def_finup_done2;
err = crypto_ahash_final(req);
if (err == -EINPROGRESS || err == -EBUSY)
@@ -664,32 +617,14 @@ static int ahash_def_finup_finish1(struct ahash_request *req, int err)
static void ahash_def_finup_done1(void *data, int err)
{
- struct ahash_save_req_state *state0 = data;
- struct ahash_save_req_state state;
- struct ahash_request *areq;
-
- state = *state0;
- areq = state.req0;
- if (err == -EINPROGRESS)
- goto out;
-
- areq->base.flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
-
- err = ahash_def_finup_finish1(areq, err);
- if (err == -EINPROGRESS || err == -EBUSY)
- return;
-
-out:
- state.compl(state.data, err);
+ ahash_op_done(data, err, ahash_def_finup_finish1);
}
static int ahash_def_finup(struct ahash_request *req)
{
int err;
- err = ahash_save_req(req, ahash_def_finup_done1);
- if (err)
- return err;
+ ahash_save_req(req, ahash_def_finup_done1);
err = crypto_ahash_update(req);
if (err == -EINPROGRESS || err == -EBUSY)
@@ -714,6 +649,14 @@ int crypto_ahash_export(struct ahash_request *req, void *out)
if (likely(tfm->using_shash))
return crypto_shash_export(ahash_request_ctx(req), out);
+ if (crypto_ahash_block_only(tfm)) {
+ unsigned int plen = crypto_ahash_blocksize(tfm) + 1;
+ unsigned int reqsize = crypto_ahash_reqsize(tfm);
+ unsigned int ss = crypto_ahash_statesize(tfm);
+ u8 *buf = ahash_request_ctx(req);
+
+ memcpy(out + ss - plen, buf + reqsize - plen, plen);
+ }
return crypto_ahash_alg(tfm)->export(req, out);
}
EXPORT_SYMBOL_GPL(crypto_ahash_export);
@@ -739,6 +682,12 @@ int crypto_ahash_import(struct ahash_request *req, const void *in)
return crypto_shash_import(prepare_shash_desc(req, tfm), in);
if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
return -ENOKEY;
+ if (crypto_ahash_block_only(tfm)) {
+ unsigned int reqsize = crypto_ahash_reqsize(tfm);
+ u8 *buf = ahash_request_ctx(req);
+
+ buf[reqsize - 1] = 0;
+ }
return crypto_ahash_alg(tfm)->import(req, in);
}
EXPORT_SYMBOL_GPL(crypto_ahash_import);
@@ -753,7 +702,7 @@ static void crypto_ahash_exit_tfm(struct crypto_tfm *tfm)
else if (tfm->__crt_alg->cra_exit)
tfm->__crt_alg->cra_exit(tfm);
- if (ahash_is_async(hash))
+ if (crypto_ahash_need_fallback(hash))
crypto_free_ahash(crypto_ahash_fb(hash));
}
@@ -770,9 +719,12 @@ static int crypto_ahash_init_tfm(struct crypto_tfm *tfm)
if (tfm->__crt_alg->cra_type == &crypto_shash_type)
return crypto_init_ahash_using_shash(tfm);
- if (ahash_is_async(hash)) {
+ if (crypto_ahash_need_fallback(hash)) {
fb = crypto_alloc_ahash(crypto_ahash_alg_name(hash),
- 0, CRYPTO_ALG_ASYNC);
+ CRYPTO_ALG_REQ_VIRT,
+ CRYPTO_ALG_ASYNC |
+ CRYPTO_ALG_REQ_VIRT |
+ CRYPTO_AHASH_ALG_NO_EXPORT_CORE);
if (IS_ERR(fb))
return PTR_ERR(fb);
@@ -797,6 +749,10 @@ static int crypto_ahash_init_tfm(struct crypto_tfm *tfm)
MAX_SYNC_HASH_REQSIZE)
goto out_exit_tfm;
+ BUILD_BUG_ON(HASH_MAX_DESCSIZE > MAX_SYNC_HASH_REQSIZE);
+ if (crypto_ahash_reqsize(hash) < HASH_MAX_DESCSIZE)
+ crypto_ahash_set_reqsize(hash, HASH_MAX_DESCSIZE);
+
return 0;
out_exit_tfm:
@@ -941,7 +897,7 @@ struct crypto_ahash *crypto_clone_ahash(struct crypto_ahash *hash)
return nhash;
}
- if (ahash_is_async(hash)) {
+ if (crypto_ahash_need_fallback(hash)) {
fb = crypto_clone_ahash(crypto_ahash_fb(hash));
err = PTR_ERR(fb);
if (IS_ERR(fb))
@@ -1003,10 +959,23 @@ static int ahash_prepare_alg(struct ahash_alg *alg)
base->cra_type = &crypto_ahash_type;
base->cra_flags |= CRYPTO_ALG_TYPE_AHASH;
+ if ((base->cra_flags ^ CRYPTO_ALG_REQ_VIRT) &
+ (CRYPTO_ALG_ASYNC | CRYPTO_ALG_REQ_VIRT))
+ base->cra_flags |= CRYPTO_ALG_NEED_FALLBACK;
+
if (!alg->setkey)
alg->setkey = ahash_nosetkey;
- if (!alg->export_core || !alg->import_core) {
+ if (base->cra_flags & CRYPTO_AHASH_ALG_BLOCK_ONLY) {
+ BUILD_BUG_ON(MAX_ALGAPI_BLOCKSIZE >= 256);
+ if (!alg->finup)
+ return -EINVAL;
+
+ base->cra_reqsize += base->cra_blocksize + 1;
+ alg->halg.statesize += base->cra_blocksize + 1;
+ alg->export_core = alg->export;
+ alg->import_core = alg->import;
+ } else if (!alg->export_core || !alg->import_core) {
alg->export_core = ahash_default_export_core;
alg->import_core = ahash_default_import_core;
base->cra_flags |= CRYPTO_AHASH_ALG_NO_EXPORT_CORE;
diff --git a/include/crypto/hash.h b/include/crypto/hash.h
index bf177cf9be10..05ee817a3180 100644
--- a/include/crypto/hash.h
+++ b/include/crypto/hash.h
@@ -8,8 +8,8 @@
#ifndef _CRYPTO_HASH_H
#define _CRYPTO_HASH_H
-#include <linux/atomic.h>
#include <linux/crypto.h>
+#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/string.h>
@@ -65,6 +65,10 @@ struct ahash_request {
};
u8 *result;
+ struct scatterlist sg_head[2];
+ crypto_completion_t saved_complete;
+ void *saved_data;
+
void *__ctx[] CRYPTO_MINALIGN_ATTR;
};
@@ -488,7 +492,11 @@ int crypto_ahash_finup(struct ahash_request *req);
* -EBUSY if queue is full and request should be resubmitted later;
* other < 0 if an error occurred
*/
-int crypto_ahash_final(struct ahash_request *req);
+static inline int crypto_ahash_final(struct ahash_request *req)
+{
+ req->nbytes = 0;
+ return crypto_ahash_finup(req);
+}
/**
* crypto_ahash_digest() - calculate message digest for a buffer
--
2.39.5
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [v4 PATCH 04/11] crypto: hmac - Zero shash desc in setkey
2025-05-15 5:54 [v4 PATCH 00/11] crypto: Add partial block API and hmac to ahash Herbert Xu
` (2 preceding siblings ...)
2025-05-15 5:54 ` [v4 PATCH 03/11] crypto: ahash - Handle partial blocks in API Herbert Xu
@ 2025-05-15 5:54 ` Herbert Xu
2025-05-15 5:54 ` [v4 PATCH 05/11] crypto: hmac - Add export_core and import_core Herbert Xu
` (7 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Herbert Xu @ 2025-05-15 5:54 UTC (permalink / raw)
To: Linux Crypto Mailing List
The shash desc needs to be zeroed after use in setkey as it is
not finalised (finalisation automatically zeroes it).
Also remove the final function as it's been superseded by finup.
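As a rough illustration of the pattern (toy code, not the kernel API; `toy_desc`/`toy_setkey` are hypothetical names, and the kernel actually uses memzero_explicit() via shash_desc_zero() so the compiler cannot elide the clear):

```c
#include <string.h>

/* Toy stand-in for the SHASH_DESC_ON_STACK + shash_desc_zero() pattern:
 * a descriptor that absorbed key material must be scrubbed before the
 * stack frame is reused.  Plain memset() sketches the idea here. */
struct toy_desc {
	unsigned char ctx[64];	/* key-derived hash state lives here */
};

static void toy_desc_zero(struct toy_desc *d)
{
	memset(d, 0, sizeof(*d));	/* kernel: memzero_explicit() */
}

static int toy_setkey(struct toy_desc *d, const unsigned char *key,
		      unsigned int klen)
{
	/* ... absorb key into d->ctx, export ipad/opad states ... */
	memcpy(d->ctx, key, klen < sizeof(d->ctx) ? klen : sizeof(d->ctx));

	/* setkey never finalises the desc, so clear it explicitly */
	toy_desc_zero(d);
	return 0;
}
```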
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
crypto/hmac.c | 35 ++++++++++-------------------------
1 file changed, 10 insertions(+), 25 deletions(-)
diff --git a/crypto/hmac.c b/crypto/hmac.c
index ba36ddf50037..4517e04bfbaa 100644
--- a/crypto/hmac.c
+++ b/crypto/hmac.c
@@ -13,13 +13,11 @@
#include <crypto/hmac.h>
#include <crypto/internal/hash.h>
-#include <crypto/scatterwalk.h>
#include <linux/err.h>
#include <linux/fips.h>
-#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
-#include <linux/scatterlist.h>
+#include <linux/slab.h>
#include <linux/string.h>
struct hmac_ctx {
@@ -39,7 +37,7 @@ static int hmac_setkey(struct crypto_shash *parent,
u8 *ipad = &tctx->pads[0];
u8 *opad = &tctx->pads[ss];
SHASH_DESC_ON_STACK(shash, hash);
- unsigned int i;
+ int err, i;
if (fips_enabled && (keylen < 112 / 8))
return -EINVAL;
@@ -65,12 +63,14 @@ static int hmac_setkey(struct crypto_shash *parent,
opad[i] ^= HMAC_OPAD_VALUE;
}
- return crypto_shash_init(shash) ?:
- crypto_shash_update(shash, ipad, bs) ?:
- crypto_shash_export(shash, ipad) ?:
- crypto_shash_init(shash) ?:
- crypto_shash_update(shash, opad, bs) ?:
- crypto_shash_export(shash, opad);
+ err = crypto_shash_init(shash) ?:
+ crypto_shash_update(shash, ipad, bs) ?:
+ crypto_shash_export(shash, ipad) ?:
+ crypto_shash_init(shash) ?:
+ crypto_shash_update(shash, opad, bs) ?:
+ crypto_shash_export(shash, opad);
+ shash_desc_zero(shash);
+ return err;
}
static int hmac_export(struct shash_desc *pdesc, void *out)
@@ -105,20 +105,6 @@ static int hmac_update(struct shash_desc *pdesc,
return crypto_shash_update(desc, data, nbytes);
}
-static int hmac_final(struct shash_desc *pdesc, u8 *out)
-{
- struct crypto_shash *parent = pdesc->tfm;
- int ds = crypto_shash_digestsize(parent);
- int ss = crypto_shash_statesize(parent);
- const struct hmac_ctx *tctx = crypto_shash_ctx(parent);
- const u8 *opad = &tctx->pads[ss];
- struct shash_desc *desc = shash_desc_ctx(pdesc);
-
- return crypto_shash_final(desc, out) ?:
- crypto_shash_import(desc, opad) ?:
- crypto_shash_finup(desc, out, ds, out);
-}
-
static int hmac_finup(struct shash_desc *pdesc, const u8 *data,
unsigned int nbytes, u8 *out)
{
@@ -222,7 +208,6 @@ static int hmac_create(struct crypto_template *tmpl, struct rtattr **tb)
inst->alg.descsize = sizeof(struct shash_desc) + salg->descsize;
inst->alg.init = hmac_init;
inst->alg.update = hmac_update;
- inst->alg.final = hmac_final;
inst->alg.finup = hmac_finup;
inst->alg.export = hmac_export;
inst->alg.import = hmac_import;
--
2.39.5
* [v4 PATCH 05/11] crypto: hmac - Add export_core and import_core
2025-05-15 5:54 [v4 PATCH 00/11] crypto: Add partial block API and hmac to ahash Herbert Xu
` (3 preceding siblings ...)
2025-05-15 5:54 ` [v4 PATCH 04/11] crypto: hmac - Zero shash desc in setkey Herbert Xu
@ 2025-05-15 5:54 ` Herbert Xu
2025-05-15 5:54 ` [v4 PATCH 06/11] crypto: shash - Set reqsize in shash_alg Herbert Xu
` (6 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Herbert Xu @ 2025-05-15 5:54 UTC (permalink / raw)
To: Linux Crypto Mailing List
Add export_core and import_core so that hmac can be used as a
fallback by block-only drivers.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
crypto/hmac.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/crypto/hmac.c b/crypto/hmac.c
index 4517e04bfbaa..e4749a1f93dd 100644
--- a/crypto/hmac.c
+++ b/crypto/hmac.c
@@ -90,6 +90,22 @@ static int hmac_import(struct shash_desc *pdesc, const void *in)
return crypto_shash_import(desc, in);
}
+static int hmac_export_core(struct shash_desc *pdesc, void *out)
+{
+ struct shash_desc *desc = shash_desc_ctx(pdesc);
+
+ return crypto_shash_export_core(desc, out);
+}
+
+static int hmac_import_core(struct shash_desc *pdesc, const void *in)
+{
+ const struct hmac_ctx *tctx = crypto_shash_ctx(pdesc->tfm);
+ struct shash_desc *desc = shash_desc_ctx(pdesc);
+
+ desc->tfm = tctx->hash;
+ return crypto_shash_import_core(desc, in);
+}
+
static int hmac_init(struct shash_desc *pdesc)
{
const struct hmac_ctx *tctx = crypto_shash_ctx(pdesc->tfm);
@@ -177,6 +193,7 @@ static int hmac_create(struct crypto_template *tmpl, struct rtattr **tb)
return -ENOMEM;
spawn = shash_instance_ctx(inst);
+ mask |= CRYPTO_AHASH_ALG_NO_EXPORT_CORE;
err = crypto_grab_shash(spawn, shash_crypto_instance(inst),
crypto_attr_alg_name(tb[1]), 0, mask);
if (err)
@@ -211,6 +228,8 @@ static int hmac_create(struct crypto_template *tmpl, struct rtattr **tb)
inst->alg.finup = hmac_finup;
inst->alg.export = hmac_export;
inst->alg.import = hmac_import;
+ inst->alg.export_core = hmac_export_core;
+ inst->alg.import_core = hmac_import_core;
inst->alg.setkey = hmac_setkey;
inst->alg.init_tfm = hmac_init_tfm;
inst->alg.clone_tfm = hmac_clone_tfm;
--
2.39.5
* [v4 PATCH 06/11] crypto: shash - Set reqsize in shash_alg
2025-05-15 5:54 [v4 PATCH 00/11] crypto: Add partial block API and hmac to ahash Herbert Xu
` (4 preceding siblings ...)
2025-05-15 5:54 ` [v4 PATCH 05/11] crypto: hmac - Add export_core and import_core Herbert Xu
@ 2025-05-15 5:54 ` Herbert Xu
2025-05-15 19:32 ` Eric Biggers
2025-05-15 5:54 ` [v4 PATCH 07/11] crypto: algapi - Add driver template support to crypto_inst_setname Herbert Xu
` (5 subsequent siblings)
11 siblings, 1 reply; 21+ messages in thread
From: Herbert Xu @ 2025-05-15 5:54 UTC (permalink / raw)
To: Linux Crypto Mailing List
Make the reqsize static for shash algorithms by setting cra_reqsize
at registration time, instead of computing it when the shash is
wrapped by ahash.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
crypto/ahash.c | 1 -
crypto/shash.c | 2 ++
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/crypto/ahash.c b/crypto/ahash.c
index cf8bbe7e54c0..bf8375bb32c9 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -286,7 +286,6 @@ static int crypto_init_ahash_using_shash(struct crypto_tfm *tfm)
crypto_ahash_set_flags(crt, crypto_shash_get_flags(shash) &
CRYPTO_TFM_NEED_KEY);
- crt->reqsize = sizeof(struct shash_desc) + crypto_shash_descsize(shash);
return 0;
}
diff --git a/crypto/shash.c b/crypto/shash.c
index 5bc74a72d5ad..37537d7995c7 100644
--- a/crypto/shash.c
+++ b/crypto/shash.c
@@ -511,6 +511,8 @@ static int shash_prepare_alg(struct shash_alg *alg)
if (alg->statesize > HASH_MAX_STATESIZE)
return -EINVAL;
+ base->cra_reqsize = sizeof(struct shash_desc) + alg->descsize;
+
return 0;
}
--
2.39.5
* [v4 PATCH 07/11] crypto: algapi - Add driver template support to crypto_inst_setname
2025-05-15 5:54 [v4 PATCH 00/11] crypto: Add partial block API and hmac to ahash Herbert Xu
` (5 preceding siblings ...)
2025-05-15 5:54 ` [v4 PATCH 06/11] crypto: shash - Set reqsize in shash_alg Herbert Xu
@ 2025-05-15 5:54 ` Herbert Xu
2025-05-15 19:33 ` Eric Biggers
2025-05-15 5:54 ` [v4 PATCH 08/11] crypto: testmgr - Ignore EEXIST on shash allocation Herbert Xu
` (4 subsequent siblings)
11 siblings, 1 reply; 21+ messages in thread
From: Herbert Xu @ 2025-05-15 5:54 UTC (permalink / raw)
To: Linux Crypto Mailing List
Add support to crypto_inst_setname for having a driver template
name that differs from the algorithm template name.
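The arity dispatch can be sketched in plain C (simplified stand-ins for the kernel's CONCATENATE/COUNT_ARGS helpers; `inst_setname`/`setname` and the buffer handling are illustrative only):

```c
#include <stdio.h>
#include <string.h>

/* Simplified model of the new crypto_inst_setname(): a 2-argument call
 * reuses the algorithm template name as the driver template name, while
 * a 3-argument call supplies a distinct driver name (e.g. "hmac" vs
 * "hmac-shash").  CONCAT/NARGS mimic CONCATENATE and COUNT_ARGS. */
#define CONCAT_(a, b) a##b
#define CONCAT(a, b) CONCAT_(a, b)
#define NARGS_(_1, _2, _3, n, ...) n
#define NARGS(...) NARGS_(__VA_ARGS__, 3, 2, 1)

/* stand-in for __crypto_inst_setname(): fill "name(alg)|driver(alg)" */
static void setname(char *dst, size_t len, const char *name,
		    const char *driver, const char *alg)
{
	snprintf(dst, len, "%s(%s)|%s(%s)", name, alg, driver, alg);
}

#define inst_setname(dst, name, ...) \
	CONCAT(inst_setname_, NARGS(name, __VA_ARGS__))(dst, name, __VA_ARGS__)
#define inst_setname_2(dst, name, alg) \
	setname(dst, sizeof(dst), name, name, alg)
#define inst_setname_3(dst, name, driver, alg) \
	setname(dst, sizeof(dst), name, driver, alg)
```

Existing two-argument callers keep compiling unchanged; only templates that want a distinct driver name pass the extra argument.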
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
crypto/algapi.c | 8 ++++----
include/crypto/algapi.h | 12 ++++++++++--
2 files changed, 14 insertions(+), 6 deletions(-)
diff --git a/crypto/algapi.c b/crypto/algapi.c
index 25b5519e3b71..e604d0d8b7b4 100644
--- a/crypto/algapi.c
+++ b/crypto/algapi.c
@@ -923,20 +923,20 @@ const char *crypto_attr_alg_name(struct rtattr *rta)
}
EXPORT_SYMBOL_GPL(crypto_attr_alg_name);
-int crypto_inst_setname(struct crypto_instance *inst, const char *name,
- struct crypto_alg *alg)
+int __crypto_inst_setname(struct crypto_instance *inst, const char *name,
+ const char *driver, struct crypto_alg *alg)
{
if (snprintf(inst->alg.cra_name, CRYPTO_MAX_ALG_NAME, "%s(%s)", name,
alg->cra_name) >= CRYPTO_MAX_ALG_NAME)
return -ENAMETOOLONG;
if (snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s(%s)",
- name, alg->cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
+ driver, alg->cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
return -ENAMETOOLONG;
return 0;
}
-EXPORT_SYMBOL_GPL(crypto_inst_setname);
+EXPORT_SYMBOL_GPL(__crypto_inst_setname);
void crypto_init_queue(struct crypto_queue *queue, unsigned int max_qlen)
{
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
index 423e57eca351..188eface0a11 100644
--- a/include/crypto/algapi.h
+++ b/include/crypto/algapi.h
@@ -146,8 +146,16 @@ void *crypto_spawn_tfm2(struct crypto_spawn *spawn);
struct crypto_attr_type *crypto_get_attr_type(struct rtattr **tb);
int crypto_check_attr_type(struct rtattr **tb, u32 type, u32 *mask_ret);
const char *crypto_attr_alg_name(struct rtattr *rta);
-int crypto_inst_setname(struct crypto_instance *inst, const char *name,
- struct crypto_alg *alg);
+int __crypto_inst_setname(struct crypto_instance *inst, const char *name,
+ const char *driver, struct crypto_alg *alg);
+
+#define crypto_inst_setname(inst, name, ...) \
+ CONCATENATE(crypto_inst_setname_, COUNT_ARGS(__VA_ARGS__))( \
+ inst, name, ##__VA_ARGS__)
+#define crypto_inst_setname_1(inst, name, alg) \
+ __crypto_inst_setname(inst, name, name, alg)
+#define crypto_inst_setname_2(inst, name, driver, alg) \
+ __crypto_inst_setname(inst, name, driver, alg)
void crypto_init_queue(struct crypto_queue *queue, unsigned int max_qlen);
int crypto_enqueue_request(struct crypto_queue *queue,
--
2.39.5
* [v4 PATCH 08/11] crypto: testmgr - Ignore EEXIST on shash allocation
2025-05-15 5:54 [v4 PATCH 00/11] crypto: Add partial block API and hmac to ahash Herbert Xu
` (6 preceding siblings ...)
2025-05-15 5:54 ` [v4 PATCH 07/11] crypto: algapi - Add driver template support to crypto_inst_setname Herbert Xu
@ 2025-05-15 5:54 ` Herbert Xu
2025-05-15 5:54 ` [v4 PATCH 09/11] crypto: hmac - Add ahash support Herbert Xu
` (3 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Herbert Xu @ 2025-05-15 5:54 UTC (permalink / raw)
To: Linux Crypto Mailing List
Soon hmac will support ahash. For compatibility, hmac still supports
shash, so it is possible for two hmac algorithms to be registered at
the same time. The shash algorithm will have the driver name
"hmac-shash(XXX-driver)". Due to a quirk in the API, there is no way
to locate the shash algorithm using the name "hmac(XXX-driver)"; it
has to be addressed as either "hmac(XXX)" or "hmac-shash(XXX-driver)".
Looking it up with "hmac(XXX-driver)" will simply trigger the creation
of another instance, and on the second instantiation this will fail
with EEXIST.
Catch EEXIST along with ENOENT, since it too is expected. If a real
shash algorithm came this way, it would be addressed using the proper
name "hmac-shash(XXX-driver)".
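The naming quirk can be modelled with a toy lookup (illustrative names only; the real resolver lives in crypto/api.c and falls back to template instantiation on a miss):

```c
#include <string.h>

/* Toy model of why looking up "hmac(sha256-generic)" cannot find the
 * shash instance: lookup matches either cra_name or cra_driver_name
 * exactly, and the instance registers under cra_name "hmac(sha256)"
 * with cra_driver_name "hmac-shash(sha256-generic)". */
struct toy_alg {
	const char *cra_name;
	const char *cra_driver_name;
};

static const struct toy_alg *toy_lookup(const struct toy_alg *algs,
					unsigned int n, const char *name)
{
	unsigned int i;

	for (i = 0; i < n; i++)
		if (!strcmp(algs[i].cra_name, name) ||
		    !strcmp(algs[i].cra_driver_name, name))
			return &algs[i];
	/* miss: the API would instantiate the template instead, and a
	 * second instantiation of the same name fails with EEXIST */
	return NULL;
}
```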
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
crypto/testmgr.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index fc28000c27f5..ee682ad50e34 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -1869,7 +1869,7 @@ static int alloc_shash(const char *driver, u32 type, u32 mask,
tfm = crypto_alloc_shash(driver, type, mask);
if (IS_ERR(tfm)) {
- if (PTR_ERR(tfm) == -ENOENT) {
+ if (PTR_ERR(tfm) == -ENOENT || PTR_ERR(tfm) == -EEXIST) {
/*
* This algorithm is only available through the ahash
* API, not the shash API, so skip the shash tests.
--
2.39.5
* [v4 PATCH 09/11] crypto: hmac - Add ahash support
2025-05-15 5:54 [v4 PATCH 00/11] crypto: Add partial block API and hmac to ahash Herbert Xu
` (7 preceding siblings ...)
2025-05-15 5:54 ` [v4 PATCH 08/11] crypto: testmgr - Ignore EEXIST on shash allocation Herbert Xu
@ 2025-05-15 5:54 ` Herbert Xu
2025-05-15 5:54 ` [v4 PATCH 10/11] crypto: testmgr - Use ahash for generic tfm Herbert Xu
` (2 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Herbert Xu @ 2025-05-15 5:54 UTC (permalink / raw)
To: Linux Crypto Mailing List
Add ahash support to hmac so that drivers that can't do hmac in
hardware do not have to implement duplicate copies of hmac.
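The core trick the template uses is precomputing the keyed states once at setkey time. A minimal sketch of that idea, with a trivial FNV-1a fold standing in for a real digest (all names here are illustrative, not kernel API):

```c
#include <stdint.h>
#include <string.h>

/* Sketch of the hmac precompute: the hash states after absorbing
 * K^ipad and K^opad are exported into the tfm context once at setkey
 * time, and every digest merely re-imports them. */
#define TOY_BS 8			/* toy block size */

struct toy_state { uint32_t h; };

static void toy_init(struct toy_state *s) { s->h = 2166136261u; }

static void toy_update(struct toy_state *s, const uint8_t *p, size_t n)
{
	while (n--)
		s->h = (s->h ^ *p++) * 16777619u;	/* FNV-1a step */
}

struct toy_hmac_ctx {
	struct toy_state ipad_state;	/* state after absorbing K^ipad */
	struct toy_state opad_state;	/* state after absorbing K^opad */
};

static void toy_hmac_setkey(struct toy_hmac_ctx *ctx,
			    const uint8_t *key, size_t klen)
{
	uint8_t ipad[TOY_BS] = { 0 }, opad[TOY_BS];
	size_t i;

	memcpy(ipad, key, klen);	/* assume klen <= TOY_BS here */
	memcpy(opad, ipad, TOY_BS);
	for (i = 0; i < TOY_BS; i++) {
		ipad[i] ^= 0x36;	/* HMAC_IPAD_VALUE */
		opad[i] ^= 0x5c;	/* HMAC_OPAD_VALUE */
	}

	toy_init(&ctx->ipad_state);
	toy_update(&ctx->ipad_state, ipad, TOY_BS);
	toy_init(&ctx->opad_state);
	toy_update(&ctx->opad_state, opad, TOY_BS);
}

static uint32_t toy_hmac_digest(const struct toy_hmac_ctx *ctx,
				const uint8_t *msg, size_t mlen)
{
	struct toy_state s = ctx->ipad_state;	/* "import" saved state */
	uint32_t inner;

	toy_update(&s, msg, mlen);
	inner = s.h;				/* inner hash */

	s = ctx->opad_state;			/* outer pass */
	toy_update(&s, (const uint8_t *)&inner, sizeof(inner));
	return s.h;
}
```

The real template does the same dance through crypto_ahash_export()/crypto_ahash_import(), so per-message work never re-hashes the key pads.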
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
crypto/ahash.c | 10 +-
crypto/hmac.c | 338 +++++++++++++++++++++++++++++++--
include/crypto/hash.h | 3 +-
include/crypto/internal/hash.h | 9 +
4 files changed, 345 insertions(+), 15 deletions(-)
diff --git a/crypto/ahash.c b/crypto/ahash.c
index bf8375bb32c9..e10bc2659ae4 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -846,7 +846,7 @@ int crypto_has_ahash(const char *alg_name, u32 type, u32 mask)
}
EXPORT_SYMBOL_GPL(crypto_has_ahash);
-static bool crypto_hash_alg_has_setkey(struct hash_alg_common *halg)
+bool crypto_hash_alg_has_setkey(struct hash_alg_common *halg)
{
struct crypto_alg *alg = &halg->base;
@@ -855,6 +855,7 @@ static bool crypto_hash_alg_has_setkey(struct hash_alg_common *halg)
return __crypto_ahash_alg(alg)->setkey != ahash_nosetkey;
}
+EXPORT_SYMBOL_GPL(crypto_hash_alg_has_setkey);
struct crypto_ahash *crypto_clone_ahash(struct crypto_ahash *hash)
{
@@ -1077,5 +1078,12 @@ int crypto_hash_digest(struct crypto_ahash *tfm, const u8 *data,
}
EXPORT_SYMBOL_GPL(crypto_hash_digest);
+void ahash_free_singlespawn_instance(struct ahash_instance *inst)
+{
+ crypto_drop_spawn(ahash_instance_ctx(inst));
+ kfree(inst);
+}
+EXPORT_SYMBOL_GPL(ahash_free_singlespawn_instance);
+
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Asynchronous cryptographic hash type");
diff --git a/crypto/hmac.c b/crypto/hmac.c
index e4749a1f93dd..148af460ae97 100644
--- a/crypto/hmac.c
+++ b/crypto/hmac.c
@@ -26,6 +26,12 @@ struct hmac_ctx {
u8 pads[];
};
+struct ahash_hmac_ctx {
+ struct crypto_ahash *hash;
+ /* Contains 'u8 ipad[statesize];', then 'u8 opad[statesize];' */
+ u8 pads[];
+};
+
static int hmac_setkey(struct crypto_shash *parent,
const u8 *inkey, unsigned int keylen)
{
@@ -173,21 +179,17 @@ static void hmac_exit_tfm(struct crypto_shash *parent)
crypto_free_shash(tctx->hash);
}
-static int hmac_create(struct crypto_template *tmpl, struct rtattr **tb)
+static int __hmac_create_shash(struct crypto_template *tmpl,
+ struct rtattr **tb, u32 mask)
{
struct shash_instance *inst;
struct crypto_shash_spawn *spawn;
struct crypto_alg *alg;
struct shash_alg *salg;
- u32 mask;
int err;
int ds;
int ss;
- err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_SHASH, &mask);
- if (err)
- return err;
-
inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
if (!inst)
return -ENOMEM;
@@ -212,7 +214,8 @@ static int hmac_create(struct crypto_template *tmpl, struct rtattr **tb)
ss < alg->cra_blocksize)
goto err_free_inst;
- err = crypto_inst_setname(shash_crypto_instance(inst), tmpl->name, alg);
+ err = crypto_inst_setname(shash_crypto_instance(inst), "hmac",
+ "hmac-shash", alg);
if (err)
goto err_free_inst;
@@ -245,20 +248,329 @@ static int hmac_create(struct crypto_template *tmpl, struct rtattr **tb)
return err;
}
-static struct crypto_template hmac_tmpl = {
- .name = "hmac",
- .create = hmac_create,
- .module = THIS_MODULE,
+static int hmac_setkey_ahash(struct crypto_ahash *parent,
+ const u8 *inkey, unsigned int keylen)
+{
+ struct ahash_hmac_ctx *tctx = crypto_ahash_ctx(parent);
+ struct crypto_ahash *fb = crypto_ahash_fb(tctx->hash);
+ int ds = crypto_ahash_digestsize(parent);
+ int bs = crypto_ahash_blocksize(parent);
+ int ss = crypto_ahash_statesize(parent);
+ HASH_REQUEST_ON_STACK(req, fb);
+ u8 *opad = &tctx->pads[ss];
+ u8 *ipad = &tctx->pads[0];
+ int err, i;
+
+ if (fips_enabled && (keylen < 112 / 8))
+ return -EINVAL;
+
+ ahash_request_set_callback(req, 0, NULL, NULL);
+
+ if (keylen > bs) {
+ ahash_request_set_virt(req, inkey, ipad, keylen);
+ err = crypto_ahash_digest(req);
+ if (err)
+ goto out_zero_req;
+
+ keylen = ds;
+ } else
+ memcpy(ipad, inkey, keylen);
+
+ memset(ipad + keylen, 0, bs - keylen);
+ memcpy(opad, ipad, bs);
+
+ for (i = 0; i < bs; i++) {
+ ipad[i] ^= HMAC_IPAD_VALUE;
+ opad[i] ^= HMAC_OPAD_VALUE;
+ }
+
+ ahash_request_set_virt(req, ipad, NULL, bs);
+ err = crypto_ahash_init(req) ?:
+ crypto_ahash_update(req) ?:
+ crypto_ahash_export(req, ipad);
+
+ ahash_request_set_virt(req, opad, NULL, bs);
+ err = err ?:
+ crypto_ahash_init(req) ?:
+ crypto_ahash_update(req) ?:
+ crypto_ahash_export(req, opad);
+
+out_zero_req:
+ HASH_REQUEST_ZERO(req);
+ return err;
+}
+
+static int hmac_export_ahash(struct ahash_request *preq, void *out)
+{
+ return crypto_ahash_export(ahash_request_ctx(preq), out);
+}
+
+static int hmac_import_ahash(struct ahash_request *preq, const void *in)
+{
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(preq);
+ struct ahash_hmac_ctx *tctx = crypto_ahash_ctx(tfm);
+ struct ahash_request *req = ahash_request_ctx(preq);
+
+ ahash_request_set_tfm(req, tctx->hash);
+ return crypto_ahash_import(req, in);
+}
+
+static int hmac_export_core_ahash(struct ahash_request *preq, void *out)
+{
+ return crypto_ahash_export_core(ahash_request_ctx(preq), out);
+}
+
+static int hmac_import_core_ahash(struct ahash_request *preq, const void *in)
+{
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(preq);
+ struct ahash_hmac_ctx *tctx = crypto_ahash_ctx(tfm);
+ struct ahash_request *req = ahash_request_ctx(preq);
+
+ ahash_request_set_tfm(req, tctx->hash);
+ return crypto_ahash_import_core(req, in);
+}
+
+static int hmac_init_ahash(struct ahash_request *preq)
+{
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(preq);
+ struct ahash_hmac_ctx *tctx = crypto_ahash_ctx(tfm);
+
+ return hmac_import_ahash(preq, &tctx->pads[0]);
+}
+
+static int hmac_update_ahash(struct ahash_request *preq)
+{
+ struct ahash_request *req = ahash_request_ctx(preq);
+
+ ahash_request_set_callback(req, ahash_request_flags(preq),
+ preq->base.complete, preq->base.data);
+ if (ahash_request_isvirt(preq))
+ ahash_request_set_virt(req, preq->svirt, NULL, preq->nbytes);
+ else
+ ahash_request_set_crypt(req, preq->src, NULL, preq->nbytes);
+ return crypto_ahash_update(req);
+}
+
+static int hmac_finup_finish(struct ahash_request *preq, unsigned int mask)
+{
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(preq);
+ struct ahash_request *req = ahash_request_ctx(preq);
+ struct ahash_hmac_ctx *tctx = crypto_ahash_ctx(tfm);
+ int ds = crypto_ahash_digestsize(tfm);
+ int ss = crypto_ahash_statesize(tfm);
+ const u8 *opad = &tctx->pads[ss];
+
+ ahash_request_set_callback(req, ahash_request_flags(preq) & ~mask,
+ preq->base.complete, preq->base.data);
+ ahash_request_set_virt(req, preq->result, preq->result, ds);
+ return crypto_ahash_import(req, opad) ?:
+ crypto_ahash_finup(req);
+
+}
+
+static void hmac_finup_done(void *data, int err)
+{
+ struct ahash_request *preq = data;
+
+ if (err)
+ goto out;
+
+ err = hmac_finup_finish(preq, CRYPTO_TFM_REQ_MAY_SLEEP);
+ if (err == -EINPROGRESS || err == -EBUSY)
+ return;
+
+out:
+ ahash_request_complete(preq, err);
+}
+
+static int hmac_finup_ahash(struct ahash_request *preq)
+{
+ struct ahash_request *req = ahash_request_ctx(preq);
+
+ ahash_request_set_callback(req, ahash_request_flags(preq),
+ hmac_finup_done, preq);
+ if (ahash_request_isvirt(preq))
+ ahash_request_set_virt(req, preq->svirt, preq->result,
+ preq->nbytes);
+ else
+ ahash_request_set_crypt(req, preq->src, preq->result,
+ preq->nbytes);
+ return crypto_ahash_finup(req) ?:
+ hmac_finup_finish(preq, 0);
+}
+
+static int hmac_digest_ahash(struct ahash_request *preq)
+{
+ return hmac_init_ahash(preq) ?:
+ hmac_finup_ahash(preq);
+}
+
+static int hmac_init_ahash_tfm(struct crypto_ahash *parent)
+{
+ struct ahash_instance *inst = ahash_alg_instance(parent);
+ struct ahash_hmac_ctx *tctx = crypto_ahash_ctx(parent);
+ struct crypto_ahash *hash;
+
+ hash = crypto_spawn_ahash(ahash_instance_ctx(inst));
+ if (IS_ERR(hash))
+ return PTR_ERR(hash);
+
+ if (crypto_ahash_reqsize(parent) < sizeof(struct ahash_request) +
+ crypto_ahash_reqsize(hash))
+ return -EINVAL;
+
+ tctx->hash = hash;
+ return 0;
+}
+
+static int hmac_clone_ahash_tfm(struct crypto_ahash *dst,
+ struct crypto_ahash *src)
+{
+ struct ahash_hmac_ctx *sctx = crypto_ahash_ctx(src);
+ struct ahash_hmac_ctx *dctx = crypto_ahash_ctx(dst);
+ struct crypto_ahash *hash;
+
+ hash = crypto_clone_ahash(sctx->hash);
+ if (IS_ERR(hash))
+ return PTR_ERR(hash);
+
+ dctx->hash = hash;
+ return 0;
+}
+
+static void hmac_exit_ahash_tfm(struct crypto_ahash *parent)
+{
+ struct ahash_hmac_ctx *tctx = crypto_ahash_ctx(parent);
+
+ crypto_free_ahash(tctx->hash);
+}
+
+static int hmac_create_ahash(struct crypto_template *tmpl, struct rtattr **tb,
+ u32 mask)
+{
+ struct crypto_ahash_spawn *spawn;
+ struct ahash_instance *inst;
+ struct crypto_alg *alg;
+ struct hash_alg_common *halg;
+ int ds, ss, err;
+
+ inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
+ if (!inst)
+ return -ENOMEM;
+ spawn = ahash_instance_ctx(inst);
+
+ mask |= CRYPTO_AHASH_ALG_NO_EXPORT_CORE;
+ err = crypto_grab_ahash(spawn, ahash_crypto_instance(inst),
+ crypto_attr_alg_name(tb[1]), 0, mask);
+ if (err)
+ goto err_free_inst;
+ halg = crypto_spawn_ahash_alg(spawn);
+ alg = &halg->base;
+
+ /* The underlying hash algorithm must not require a key */
+ err = -EINVAL;
+ if (crypto_hash_alg_needs_key(halg))
+ goto err_free_inst;
+
+ ds = halg->digestsize;
+ ss = halg->statesize;
+ if (ds > alg->cra_blocksize || ss < alg->cra_blocksize)
+ goto err_free_inst;
+
+ err = crypto_inst_setname(ahash_crypto_instance(inst), tmpl->name, alg);
+ if (err)
+ goto err_free_inst;
+
+ inst->alg.halg.base.cra_flags = alg->cra_flags &
+ CRYPTO_ALG_INHERITED_FLAGS;
+ inst->alg.halg.base.cra_flags |= CRYPTO_ALG_REQ_VIRT;
+ inst->alg.halg.base.cra_priority = alg->cra_priority + 100;
+ inst->alg.halg.base.cra_blocksize = alg->cra_blocksize;
+ inst->alg.halg.base.cra_ctxsize = sizeof(struct ahash_hmac_ctx) +
+ (ss * 2);
+ inst->alg.halg.base.cra_reqsize = sizeof(struct ahash_request) +
+ alg->cra_reqsize;
+
+ inst->alg.halg.digestsize = ds;
+ inst->alg.halg.statesize = ss;
+ inst->alg.init = hmac_init_ahash;
+ inst->alg.update = hmac_update_ahash;
+ inst->alg.finup = hmac_finup_ahash;
+ inst->alg.digest = hmac_digest_ahash;
+ inst->alg.export = hmac_export_ahash;
+ inst->alg.import = hmac_import_ahash;
+ inst->alg.export_core = hmac_export_core_ahash;
+ inst->alg.import_core = hmac_import_core_ahash;
+ inst->alg.setkey = hmac_setkey_ahash;
+ inst->alg.init_tfm = hmac_init_ahash_tfm;
+ inst->alg.clone_tfm = hmac_clone_ahash_tfm;
+ inst->alg.exit_tfm = hmac_exit_ahash_tfm;
+
+ inst->free = ahash_free_singlespawn_instance;
+
+ err = ahash_register_instance(tmpl, inst);
+ if (err) {
+err_free_inst:
+ ahash_free_singlespawn_instance(inst);
+ }
+ return err;
+}
+
+static int hmac_create(struct crypto_template *tmpl, struct rtattr **tb)
+{
+ struct crypto_attr_type *algt;
+ u32 mask;
+
+ algt = crypto_get_attr_type(tb);
+ if (IS_ERR(algt))
+ return PTR_ERR(algt);
+
+ mask = crypto_algt_inherited_mask(algt);
+
+ if (!((algt->type ^ CRYPTO_ALG_TYPE_AHASH) &
+ algt->mask & CRYPTO_ALG_TYPE_MASK))
+ return hmac_create_ahash(tmpl, tb, mask);
+
+ if ((algt->type ^ CRYPTO_ALG_TYPE_SHASH) &
+ algt->mask & CRYPTO_ALG_TYPE_MASK)
+ return -EINVAL;
+
+ return __hmac_create_shash(tmpl, tb, mask);
+}
+
+static int hmac_create_shash(struct crypto_template *tmpl, struct rtattr **tb)
+{
+ u32 mask;
+ int err;
+
+ err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_SHASH, &mask);
+ if (err)
+ return err == -EINVAL ? -ENOENT : err;
+
+ return __hmac_create_shash(tmpl, tb, mask);
+}
+
+static struct crypto_template hmac_tmpls[] = {
+ {
+ .name = "hmac",
+ .create = hmac_create,
+ .module = THIS_MODULE,
+ },
+ {
+ .name = "hmac-shash",
+ .create = hmac_create_shash,
+ .module = THIS_MODULE,
+ },
};
static int __init hmac_module_init(void)
{
- return crypto_register_template(&hmac_tmpl);
+ return crypto_register_templates(hmac_tmpls, ARRAY_SIZE(hmac_tmpls));
}
static void __exit hmac_module_exit(void)
{
- crypto_unregister_template(&hmac_tmpl);
+ crypto_unregister_templates(hmac_tmpls, ARRAY_SIZE(hmac_tmpls));
}
module_init(hmac_module_init);
diff --git a/include/crypto/hash.h b/include/crypto/hash.h
index 05ee817a3180..6f6b9de12cd3 100644
--- a/include/crypto/hash.h
+++ b/include/crypto/hash.h
@@ -185,7 +185,8 @@ struct shash_desc {
* containing a 'struct s390_sha_ctx'.
*/
#define HASH_MAX_DESCSIZE (sizeof(struct shash_desc) + 360)
-#define MAX_SYNC_HASH_REQSIZE HASH_MAX_DESCSIZE
+#define MAX_SYNC_HASH_REQSIZE (sizeof(struct ahash_request) + \
+ HASH_MAX_DESCSIZE)
#define SHASH_DESC_ON_STACK(shash, ctx) \
char __##shash##_desc[sizeof(struct shash_desc) + HASH_MAX_DESCSIZE] \
diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h
index e9de2bc34a10..0f85c543f80b 100644
--- a/include/crypto/internal/hash.h
+++ b/include/crypto/internal/hash.h
@@ -67,6 +67,7 @@ int crypto_register_ahashes(struct ahash_alg *algs, int count);
void crypto_unregister_ahashes(struct ahash_alg *algs, int count);
int ahash_register_instance(struct crypto_template *tmpl,
struct ahash_instance *inst);
+void ahash_free_singlespawn_instance(struct ahash_instance *inst);
int shash_no_setkey(struct crypto_shash *tfm, const u8 *key,
unsigned int keylen);
@@ -76,12 +77,20 @@ static inline bool crypto_shash_alg_has_setkey(struct shash_alg *alg)
return alg->setkey != shash_no_setkey;
}
+bool crypto_hash_alg_has_setkey(struct hash_alg_common *halg);
+
static inline bool crypto_shash_alg_needs_key(struct shash_alg *alg)
{
return crypto_shash_alg_has_setkey(alg) &&
!(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY);
}
+static inline bool crypto_hash_alg_needs_key(struct hash_alg_common *alg)
+{
+ return crypto_hash_alg_has_setkey(alg) &&
+ !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY);
+}
+
int crypto_grab_ahash(struct crypto_ahash_spawn *spawn,
struct crypto_instance *inst,
const char *name, u32 type, u32 mask);
--
2.39.5
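As an aside, the type/mask dispatch in hmac_create() above can be modeled in a few lines of Python. The constant values below are placeholders, not the real CRYPTO_ALG_TYPE_* values; only the bitwise logic mirrors the kernel code:

```python
# Illustrative model of hmac_create()'s dispatch between ahash and shash.
# Constants are made up for this sketch; the real values live in the
# kernel's crypto headers.
TYPE_MASK = 0xf
TYPE_AHASH = 0xe   # placeholder
TYPE_SHASH = 0xd   # placeholder

def hmac_dispatch(req_type: int, req_mask: int) -> str:
    # The caller accepts an ahash if the type bits, under the caller's
    # mask, match AHASH. A mask of 0 means "anything", so ahash is the
    # default.
    if not ((req_type ^ TYPE_AHASH) & req_mask & TYPE_MASK):
        return "ahash"
    # Otherwise the masked type bits must match SHASH exactly,
    # or the request is invalid (-EINVAL in the kernel).
    if (req_type ^ TYPE_SHASH) & req_mask & TYPE_MASK:
        raise ValueError("EINVAL: incompatible type requested")
    return "shash"
```

Note how an unconstrained caller (mask 0) falls into the ahash branch, which matches the v4 naming decision: the plain "hmac" template serves ahash by default, while "hmac-shash" forces the shash path.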
* [v4 PATCH 10/11] crypto: testmgr - Use ahash for generic tfm
2025-05-15 5:54 [v4 PATCH 00/11] crypto: Add partial block API and hmac to ahash Herbert Xu
` (8 preceding siblings ...)
2025-05-15 5:54 ` [v4 PATCH 09/11] crypto: hmac - Add ahash support Herbert Xu
@ 2025-05-15 5:54 ` Herbert Xu
2025-05-15 19:30 ` Eric Biggers
2025-05-15 5:54 ` [v4 PATCH 11/11] crypto: testmgr - Add hash export format testing Herbert Xu
2025-05-15 19:35 ` [v4 PATCH 00/11] crypto: Add partial block API and hmac to ahash Eric Biggers
11 siblings, 1 reply; 21+ messages in thread
From: Herbert Xu @ 2025-05-15 5:54 UTC (permalink / raw)
To: Linux Crypto Mailing List
As shash is being phased out, use ahash for the generic tfm.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
crypto/testmgr.c | 37 ++++++++++++++++++-------------------
1 file changed, 18 insertions(+), 19 deletions(-)
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index ee682ad50e34..72005074a5c2 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -1699,7 +1699,7 @@ static int test_hash_vec(const struct hash_testvec *vec, unsigned int vec_num,
* Assumes the buffers in 'vec' were already allocated.
*/
static void generate_random_hash_testvec(struct rnd_state *rng,
- struct shash_desc *desc,
+ struct ahash_request *req,
struct hash_testvec *vec,
unsigned int maxkeysize,
unsigned int maxdatasize,
@@ -1721,16 +1721,17 @@ static void generate_random_hash_testvec(struct rnd_state *rng,
vec->ksize = prandom_u32_inclusive(rng, 1, maxkeysize);
generate_random_bytes(rng, (u8 *)vec->key, vec->ksize);
- vec->setkey_error = crypto_shash_setkey(desc->tfm, vec->key,
- vec->ksize);
+ vec->setkey_error = crypto_ahash_setkey(
+ crypto_ahash_reqtfm(req), vec->key, vec->ksize);
/* If the key couldn't be set, no need to continue to digest. */
if (vec->setkey_error)
goto done;
}
/* Digest */
- vec->digest_error = crypto_shash_digest(desc, vec->plaintext,
- vec->psize, (u8 *)vec->digest);
+ vec->digest_error = crypto_hash_digest(
+ crypto_ahash_reqtfm(req), vec->plaintext,
+ vec->psize, (u8 *)vec->digest);
done:
snprintf(name, max_namelen, "\"random: psize=%u ksize=%u\"",
vec->psize, vec->ksize);
@@ -1755,8 +1756,8 @@ static int test_hash_vs_generic_impl(const char *generic_driver,
const char *driver = crypto_ahash_driver_name(tfm);
struct rnd_state rng;
char _generic_driver[CRYPTO_MAX_ALG_NAME];
- struct crypto_shash *generic_tfm = NULL;
- struct shash_desc *generic_desc = NULL;
+ struct ahash_request *generic_req = NULL;
+ struct crypto_ahash *generic_tfm = NULL;
unsigned int i;
struct hash_testvec vec = { 0 };
char vec_name[64];
@@ -1779,7 +1780,7 @@ static int test_hash_vs_generic_impl(const char *generic_driver,
if (strcmp(generic_driver, driver) == 0) /* Already the generic impl? */
return 0;
- generic_tfm = crypto_alloc_shash(generic_driver, 0, 0);
+ generic_tfm = crypto_alloc_ahash(generic_driver, 0, 0);
if (IS_ERR(generic_tfm)) {
err = PTR_ERR(generic_tfm);
if (err == -ENOENT) {
@@ -1798,27 +1799,25 @@ static int test_hash_vs_generic_impl(const char *generic_driver,
goto out;
}
- generic_desc = kzalloc(sizeof(*desc) +
- crypto_shash_descsize(generic_tfm), GFP_KERNEL);
- if (!generic_desc) {
+ generic_req = ahash_request_alloc(generic_tfm, GFP_KERNEL);
+ if (!generic_req) {
err = -ENOMEM;
goto out;
}
- generic_desc->tfm = generic_tfm;
/* Check the algorithm properties for consistency. */
- if (digestsize != crypto_shash_digestsize(generic_tfm)) {
+ if (digestsize != crypto_ahash_digestsize(generic_tfm)) {
pr_err("alg: hash: digestsize for %s (%u) doesn't match generic impl (%u)\n",
driver, digestsize,
- crypto_shash_digestsize(generic_tfm));
+ crypto_ahash_digestsize(generic_tfm));
err = -EINVAL;
goto out;
}
- if (blocksize != crypto_shash_blocksize(generic_tfm)) {
+ if (blocksize != crypto_ahash_blocksize(generic_tfm)) {
pr_err("alg: hash: blocksize for %s (%u) doesn't match generic impl (%u)\n",
- driver, blocksize, crypto_shash_blocksize(generic_tfm));
+ driver, blocksize, crypto_ahash_blocksize(generic_tfm));
err = -EINVAL;
goto out;
}
@@ -1837,7 +1836,7 @@ static int test_hash_vs_generic_impl(const char *generic_driver,
}
for (i = 0; i < fuzz_iterations * 8; i++) {
- generate_random_hash_testvec(&rng, generic_desc, &vec,
+ generate_random_hash_testvec(&rng, generic_req, &vec,
maxkeysize, maxdatasize,
vec_name, sizeof(vec_name));
generate_random_testvec_config(&rng, cfg, cfgname,
@@ -1855,8 +1854,8 @@ static int test_hash_vs_generic_impl(const char *generic_driver,
kfree(vec.key);
kfree(vec.plaintext);
kfree(vec.digest);
- crypto_free_shash(generic_tfm);
- kfree_sensitive(generic_desc);
+ ahash_request_free(generic_req);
+ crypto_free_ahash(generic_tfm);
return err;
}
--
2.39.5
* [v4 PATCH 11/11] crypto: testmgr - Add hash export format testing
2025-05-15 5:54 [v4 PATCH 00/11] crypto: Add partial block API and hmac to ahash Herbert Xu
` (9 preceding siblings ...)
2025-05-15 5:54 ` [v4 PATCH 10/11] crypto: testmgr - Use ahash for generic tfm Herbert Xu
@ 2025-05-15 5:54 ` Herbert Xu
2025-05-15 19:35 ` [v4 PATCH 00/11] crypto: Add partial block API and hmac to ahash Eric Biggers
11 siblings, 0 replies; 21+ messages in thread
From: Herbert Xu @ 2025-05-15 5:54 UTC (permalink / raw)
To: Linux Crypto Mailing List
Ensure that the hash state can be exported to and imported from
the generic algorithm.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
crypto/testmgr.c | 95 ++++++++++++++++++++++++++++++----
crypto/testmgr.h | 2 +
include/crypto/internal/hash.h | 6 +++
3 files changed, 94 insertions(+), 9 deletions(-)
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 72005074a5c2..737064b31480 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -17,10 +17,19 @@
*/
#include <crypto/aead.h>
-#include <crypto/hash.h>
+#include <crypto/acompress.h>
+#include <crypto/akcipher.h>
+#include <crypto/drbg.h>
+#include <crypto/internal/cipher.h>
+#include <crypto/internal/hash.h>
+#include <crypto/internal/simd.h>
+#include <crypto/kpp.h>
+#include <crypto/rng.h>
+#include <crypto/sig.h>
#include <crypto/skcipher.h>
#include <linux/err.h>
#include <linux/fips.h>
+#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/once.h>
#include <linux/prandom.h>
@@ -28,14 +37,6 @@
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/uio.h>
-#include <crypto/rng.h>
-#include <crypto/drbg.h>
-#include <crypto/akcipher.h>
-#include <crypto/kpp.h>
-#include <crypto/acompress.h>
-#include <crypto/sig.h>
-#include <crypto/internal/cipher.h>
-#include <crypto/internal/simd.h>
#include "internal.h"
@@ -1464,6 +1465,49 @@ static int check_nonfinal_ahash_op(const char *op, int err,
return 0;
}
+static int check_ahash_export(struct ahash_request *req,
+ const struct hash_testvec *vec,
+ const char *vec_name,
+ const struct testvec_config *cfg,
+ const char *driver, u8 *hashstate)
+{
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ const unsigned int digestsize = crypto_ahash_digestsize(tfm);
+ HASH_FBREQ_ON_STACK(fbreq, req);
+ int err;
+
+ if (!vec->state)
+ return 0;
+
+ err = crypto_ahash_export(req, hashstate);
+ if (err) {
+ pr_err("alg: ahash: %s mixed export() failed with err %d on test vector %s, cfg=\"%s\"\n",
+ driver, err, vec_name, cfg->name);
+ return err;
+ }
+ err = crypto_ahash_import(req, vec->state);
+ if (err) {
+ pr_err("alg: ahash: %s mixed import() failed with err %d on test vector %s, cfg=\"%s\"\n",
+ driver, err, vec_name, cfg->name);
+ return err;
+ }
+ err = crypto_ahash_import(fbreq, hashstate);
+ if (err) {
+ pr_err("alg: ahash: %s fallback import() failed with err %d on test vector %s, cfg=\"%s\"\n",
+ crypto_ahash_driver_name(crypto_ahash_reqtfm(fbreq)), err, vec_name, cfg->name);
+ return err;
+ }
+ ahash_request_set_crypt(fbreq, NULL, hashstate, 0);
+ testmgr_poison(hashstate, digestsize + TESTMGR_POISON_LEN);
+ err = crypto_ahash_final(fbreq);
+ if (err) {
+ pr_err("alg: ahash: %s fallback final() failed with err %d on test vector %s, cfg=\"%s\"\n",
+ crypto_ahash_driver_name(crypto_ahash_reqtfm(fbreq)), err, vec_name, cfg->name);
+ return err;
+ }
+ return check_hash_result("ahash export", hashstate, digestsize, vec, vec_name, driver, cfg);
+}
+
/* Test one hash test vector in one configuration, using the ahash API */
static int test_ahash_vec_cfg(const struct hash_testvec *vec,
const char *vec_name,
@@ -1609,6 +1653,10 @@ static int test_ahash_vec_cfg(const struct hash_testvec *vec,
driver, vec_name, cfg);
if (err)
return err;
+ err = check_ahash_export(req, vec, vec_name, cfg,
+ driver, hashstate);
+ if (err)
+ return err;
err = do_ahash_op(crypto_ahash_final, req, &wait, cfg->nosimd);
if (err) {
pr_err("alg: ahash: %s final() failed with err %d on test vector %s, cfg=\"%s\"\n",
@@ -1732,6 +1780,17 @@ static void generate_random_hash_testvec(struct rnd_state *rng,
vec->digest_error = crypto_hash_digest(
crypto_ahash_reqtfm(req), vec->plaintext,
vec->psize, (u8 *)vec->digest);
+
+ if (vec->digest_error || !vec->state)
+ goto done;
+
+ ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
+ ahash_request_set_virt(req, vec->plaintext, (u8 *)vec->digest,
+ vec->psize);
+ crypto_ahash_init(req);
+ crypto_ahash_update(req);
+ crypto_ahash_export(req, (u8 *)vec->state);
+
done:
snprintf(name, max_namelen, "\"random: psize=%u ksize=%u\"",
vec->psize, vec->ksize);
@@ -1750,6 +1809,7 @@ static int test_hash_vs_generic_impl(const char *generic_driver,
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
const unsigned int digestsize = crypto_ahash_digestsize(tfm);
+ const unsigned int statesize = crypto_ahash_statesize(tfm);
const unsigned int blocksize = crypto_ahash_blocksize(tfm);
const unsigned int maxdatasize = (2 * PAGE_SIZE) - TESTMGR_POISON_LEN;
const char *algname = crypto_hash_alg_common(tfm)->base.cra_name;
@@ -1822,6 +1882,22 @@ static int test_hash_vs_generic_impl(const char *generic_driver,
goto out;
}
+ if (crypto_hash_no_export_core(tfm) ||
+ crypto_hash_no_export_core(generic_tfm))
+ ;
+ else if (statesize != crypto_ahash_statesize(generic_tfm)) {
+ pr_err("alg: hash: statesize for %s (%u) doesn't match generic impl (%u)\n",
+ driver, statesize,
+ crypto_ahash_statesize(generic_tfm));
+ err = -EINVAL;
+ goto out;
+ } else {
+ vec.state = kmalloc(statesize, GFP_KERNEL);
+ err = -ENOMEM;
+ if (!vec.state)
+ goto out;
+ }
+
/*
* Now generate test vectors using the generic implementation, and test
* the other implementation against them.
@@ -1854,6 +1930,7 @@ static int test_hash_vs_generic_impl(const char *generic_driver,
kfree(vec.key);
kfree(vec.plaintext);
kfree(vec.digest);
+ kfree(vec.state);
ahash_request_free(generic_req);
crypto_free_ahash(generic_tfm);
return err;
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index 32d099ac9e73..5cf455a708b8 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -29,6 +29,7 @@
* hash_testvec: structure to describe a hash (message digest) test
* @key: Pointer to key (NULL if none)
* @plaintext: Pointer to source data
+ * @state: Pointer to expected state
* @digest: Pointer to expected digest
* @psize: Length of source data in bytes
* @ksize: Length of @key in bytes (0 if no key)
@@ -39,6 +40,7 @@
struct hash_testvec {
const char *key;
const char *plaintext;
+ const char *state;
const char *digest;
unsigned int psize;
unsigned short ksize;
diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h
index 0f85c543f80b..f052afa6e7b0 100644
--- a/include/crypto/internal/hash.h
+++ b/include/crypto/internal/hash.h
@@ -91,6 +91,12 @@ static inline bool crypto_hash_alg_needs_key(struct hash_alg_common *alg)
!(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY);
}
+static inline bool crypto_hash_no_export_core(struct crypto_ahash *tfm)
+{
+ return crypto_hash_alg_common(tfm)->base.cra_flags &
+ CRYPTO_AHASH_ALG_NO_EXPORT_CORE;
+}
+
int crypto_grab_ahash(struct crypto_ahash_spawn *spawn,
struct crypto_instance *inst,
const char *name, u32 type, u32 mask);
--
2.39.5
* Re: [v4 PATCH 10/11] crypto: testmgr - Use ahash for generic tfm
2025-05-15 5:54 ` [v4 PATCH 10/11] crypto: testmgr - Use ahash for generic tfm Herbert Xu
@ 2025-05-15 19:30 ` Eric Biggers
0 siblings, 0 replies; 21+ messages in thread
From: Eric Biggers @ 2025-05-15 19:30 UTC (permalink / raw)
To: Herbert Xu; +Cc: Linux Crypto Mailing List
On Thu, May 15, 2025 at 01:54:54PM +0800, Herbert Xu wrote:
> As shash is being phased out
Again, that seems weird when shash is what most users are actually using.
I would like to migrate most users to lib/crypto/ though, so eliminating shash
should make sense eventually. But that's for a different reason than why you
seem to be pushing it (you seem to want it to be replaced with the asynchronous
hash API). At the moment it seems premature to consider shash deprecated.
- Eric
* Re: [v4 PATCH 06/11] crypto: shash - Set reqsize in shash_alg
2025-05-15 5:54 ` [v4 PATCH 06/11] crypto: shash - Set reqsize in shash_alg Herbert Xu
@ 2025-05-15 19:32 ` Eric Biggers
0 siblings, 0 replies; 21+ messages in thread
From: Eric Biggers @ 2025-05-15 19:32 UTC (permalink / raw)
To: Herbert Xu; +Cc: Linux Crypto Mailing List
On Thu, May 15, 2025 at 01:54:44PM +0800, Herbert Xu wrote:
> Make reqsize static for shash algorithms.
>
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Commit message doesn't explain why the change is being made.
- Eric
* Re: [v4 PATCH 07/11] crypto: algapi - Add driver template support to crypto_inst_setname
2025-05-15 5:54 ` [v4 PATCH 07/11] crypto: algapi - Add driver template support to crypto_inst_setname Herbert Xu
@ 2025-05-15 19:33 ` Eric Biggers
0 siblings, 0 replies; 21+ messages in thread
From: Eric Biggers @ 2025-05-15 19:33 UTC (permalink / raw)
To: Herbert Xu; +Cc: Linux Crypto Mailing List
On Thu, May 15, 2025 at 01:54:47PM +0800, Herbert Xu wrote:
> Add support to crypto_inst_setname for having a driver template
> name that differs from the algorithm template name.
>
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Commit message doesn't mention why the change is being made.
- Eric
* Re: [v4 PATCH 00/11] crypto: Add partial block API and hmac to ahash
2025-05-15 5:54 [v4 PATCH 00/11] crypto: Add partial block API and hmac to ahash Herbert Xu
` (10 preceding siblings ...)
2025-05-15 5:54 ` [v4 PATCH 11/11] crypto: testmgr - Add hash export format testing Herbert Xu
@ 2025-05-15 19:35 ` Eric Biggers
2025-05-16 9:23 ` Herbert Xu
11 siblings, 1 reply; 21+ messages in thread
From: Eric Biggers @ 2025-05-15 19:35 UTC (permalink / raw)
To: Herbert Xu; +Cc: Linux Crypto Mailing List
On Thu, May 15, 2025 at 01:54:30PM +0800, Herbert Xu wrote:
> v4 switches the name of the hmac shash and ahash instances. The
> ahash instance will bear the hmac name while shash gets the driver
> name of hmac-shash.
That seems backwards. The shash one should be the regular one and ahash should
be special.
> A new test has been added to testmgr to ensure that all implementations
> of a given algorithm use the same export format.
Still lacks any explanation for why this even matters.
> crypto/ahash.c | 572 ++++++++++++++++-----------------
> crypto/algapi.c | 8 +-
> crypto/hmac.c | 392 +++++++++++++++++++---
> crypto/shash.c | 46 ++-
> crypto/testmgr.c | 134 ++++++--
> crypto/testmgr.h | 2 +
> include/crypto/algapi.h | 12 +-
> include/crypto/hash.h | 73 ++---
> include/crypto/internal/hash.h | 66 ++++
> 9 files changed, 883 insertions(+), 422 deletions(-)
>
> --
> 2.39.5
As usual, missing a base-commit. (Use the --base option to 'git format-patch')
- Eric
* Re: [v4 PATCH 00/11] crypto: Add partial block API and hmac to ahash
2025-05-15 19:35 ` [v4 PATCH 00/11] crypto: Add partial block API and hmac to ahash Eric Biggers
@ 2025-05-16 9:23 ` Herbert Xu
2025-05-16 16:43 ` Eric Biggers
0 siblings, 1 reply; 21+ messages in thread
From: Herbert Xu @ 2025-05-16 9:23 UTC (permalink / raw)
To: Eric Biggers; +Cc: Linux Crypto Mailing List
On Thu, May 15, 2025 at 12:35:29PM -0700, Eric Biggers wrote:
>
> That seems backwards. The shash one should be the regular one and ahash should
> be special.
That's how it was in v3, but with the switch to ahash in
testmgr this blows up due to the quirk that the API cannot allocate
a name such as hmac(XXX-generic) if the driver name ends up being
hmac-ahash(XXX-generic): neither the algorithm name (which
would be hmac(XXX)) nor the driver name will match.
This coupled with the fact that shash will be removed anyway is
the reason behind the switch.
> Still lacks any explanation for why this even matters.
I've explained it many times before. The point is so that you
can fallback from async to sync at any point in time by exporting
the async hash state and importing it into the sync fallback that's
now allocated for every async ahash.
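A toy sketch of that fallback idea (purely illustrative: real ahash state is opaque, and this checksum is made up) shows why the export format must be identical across implementations — a stream started on one implementation is finished on another:

```python
# Toy model: two implementations of the same (made-up) checksum share
# one export format, so a message digested partly by the "driver" can
# be finished by the "software fallback" with the same result.

class ToySum:
    """Stand-in 'hash': running byte sum plus total length."""
    def __init__(self):
        self.acc = 0
        self.length = 0

    def update(self, data: bytes):
        self.acc = (self.acc + sum(data)) & 0xffffffff
        self.length += len(data)

    def export(self) -> dict:
        # The shared export format both implementations agree on.
        return {"acc": self.acc, "len": self.length}

    def import_state(self, state: dict):
        self.acc, self.length = state["acc"], state["len"]

    def final(self) -> int:
        return (self.acc ^ self.length) & 0xffffffff

def digest_with_fallback(msg: bytes, split: int) -> int:
    hw = ToySum()            # pretend this is the async driver
    hw.update(msg[:split])
    state = hw.export()      # driver hits (say) -ENOMEM here...
    sw = ToySum()            # ...so switch to the sync fallback
    sw.import_state(state)
    sw.update(msg[split:])
    return sw.final()
```

The result is independent of where the switch happens — which is exactly the property the new export-format test in testmgr enforces.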
> As usual, missing a base-commit. (Use the --base option to 'git format-patch')
It's based on cryptodev.
Cheers,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: [v4 PATCH 00/11] crypto: Add partial block API and hmac to ahash
2025-05-16 9:23 ` Herbert Xu
@ 2025-05-16 16:43 ` Eric Biggers
2025-05-17 0:45 ` Herbert Xu
0 siblings, 1 reply; 21+ messages in thread
From: Eric Biggers @ 2025-05-16 16:43 UTC (permalink / raw)
To: Herbert Xu; +Cc: Linux Crypto Mailing List
On Fri, May 16, 2025 at 05:23:03PM +0800, Herbert Xu wrote:
> On Thu, May 15, 2025 at 12:35:29PM -0700, Eric Biggers wrote:
> >
> > That seems backwards. The shash one should be the regular one and ahash should
> > be special.
>
> That's how it was in v3 but because of the switch to ahash in
> testmgr
So don't do that.
> > Still lacks any explanation for why this even matters.
>
> I've explained it many times before. The point is so that you
> can fallback from async to sync at any point in time by exporting
> the async hash state and importing it into the sync fallback that's
> now allocated for every async ahash.
So how come this hasn't been a problem until now?
> > As usual, missing a base-commit. (Use the --base option to 'git format-patch')
>
> It's based on cryptodev.
Which is a moving target. You still need to use base-commit.
- Eric
* Re: [v4 PATCH 00/11] crypto: Add partial block API and hmac to ahash
2025-05-16 16:43 ` Eric Biggers
@ 2025-05-17 0:45 ` Herbert Xu
2025-05-17 1:17 ` Eric Biggers
0 siblings, 1 reply; 21+ messages in thread
From: Herbert Xu @ 2025-05-17 0:45 UTC (permalink / raw)
To: Eric Biggers; +Cc: Linux Crypto Mailing List
On Fri, May 16, 2025 at 09:43:26AM -0700, Eric Biggers wrote:
>
> So how come this hasn't been a problem until now?
It is the key to getting rid of the ban on memory allocation
for ahash drivers. Small memory allocations fail very rarely,
yet we're banning the use of all drivers doing any memory allocations
because they may fail.
With a consistent export format, we could simply fallback to
software when the rare OOM strikes, thus getting rid of the
ban on memory allocations.
Cheers,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: [v4 PATCH 00/11] crypto: Add partial block API and hmac to ahash
2025-05-17 0:45 ` Herbert Xu
@ 2025-05-17 1:17 ` Eric Biggers
2025-05-17 1:25 ` Herbert Xu
0 siblings, 1 reply; 21+ messages in thread
From: Eric Biggers @ 2025-05-17 1:17 UTC (permalink / raw)
To: Herbert Xu; +Cc: Linux Crypto Mailing List
On Sat, May 17, 2025 at 08:45:00AM +0800, Herbert Xu wrote:
> On Fri, May 16, 2025 at 09:43:26AM -0700, Eric Biggers wrote:
> >
> > So how come this hasn't been a problem until now?
>
> It is the key to getting rid of the ban on memory allocation
> for ahash drivers. Small memory allocations fail very rarely,
> yet we're banning the use of all drivers doing any memory allocations
> because they may fail.
Does anyone really care? The fact that dm-crypt hasn't been able to use most of
the hardware drivers for years just further emphasizes that those drivers don't
really even matter. I remember seeing one complaint and that was it.
> With a consistent export format, we could simply fallback to
> software when the rare OOM strikes, thus getting rid of the
> ban on memory allocations.
There's already a huge quality problem with the drivers. The last thing they
need is to have special code that runs only when an OOM condition occurs, which
won't be tested.
Can they really not just use mempools?
I'll also note that the whole concept of fallback ciphers is kind of broken, as
was established earlier. The correct thing to do would be to fall back to
lib/crypto/, not to call into the legacy crypto API recursively.
- Eric
* Re: [v4 PATCH 00/11] crypto: Add partial block API and hmac to ahash
2025-05-17 1:17 ` Eric Biggers
@ 2025-05-17 1:25 ` Herbert Xu
0 siblings, 0 replies; 21+ messages in thread
From: Herbert Xu @ 2025-05-17 1:25 UTC (permalink / raw)
To: Eric Biggers; +Cc: Linux Crypto Mailing List
On Fri, May 16, 2025 at 06:17:04PM -0700, Eric Biggers wrote:
>
> There's already a huge quality problem with the drivers. The last thing they
> need is to have special code that runs only when an OOM condition occurs, which
> won't be tested.
I totally agree that we have a quality problem with the drivers.
Which is the main reason why I moved the partial block handling
out. The less work the drivers do the less likely they're to
screw it up.
For test coverage, we could easily add something similar to
crypto_reenable_simd_for_test.
Another thing we could do is to just let the drivers fail in these
cases and return ENOMEM and handle the fallback in the Crypto API.
> Can they really not just use mempools?
I don't see how that solves the problem; they can still be
exhausted, can't they?
Cheers,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt