* [PATCH v3 0/3] Rework protected key AES for true asynch support
@ 2025-04-01 14:50 Harald Freudenberger
2025-04-01 14:50 ` [PATCH v3 1/3] s390/cpacf: Rework cpacf_pcc() to return condition code Harald Freudenberger
` (2 more replies)
0 siblings, 3 replies; 10+ messages in thread
From: Harald Freudenberger @ 2025-04-01 14:50 UTC (permalink / raw)
To: herbert, dengler, ifranzki, fcallies; +Cc: linux-crypto, linux-s390
This is a complete rework of the protected key AES (PAES) implementation.
The goal of this rework is to implement the 4 modes (ecb, cbc, ctr, xts)
in a truly asynchronous fashion:
- init(), exit() and setkey() are synchronous and don't allocate any memory.
- the encrypt/decrypt functions first try to do the job in a synchronous
manner. If this fails, for example because the protected key has become
invalid due to a guest suspend/resume or guest migration action, the
encrypt/decrypt request is transferred to an instance of the crypto
engine (see below) for asynchronous processing, as sketched in the
snippet below.
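As a rough sketch (condensed from the actual code in patch 2; error
handling and the final param block cleanup are trimmed), each per-mode
crypt entry point follows this pattern:

  static int ecb_paes_crypt(struct skcipher_request *req,
                            unsigned long modifier)
  {
          struct s390_pecb_req_ctx *req_ctx = skcipher_request_ctx(req);
          struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
          struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
          int rc;

          rc = skcipher_walk_virt(&req_ctx->walk, req, false);
          if (rc)
                  return rc;
          req_ctx->modifier = modifier;
          req_ctx->param_init_done = false;

          /* first attempt: synchronous, must not sleep */
          rc = ecb_paes_do_crypt(ctx, req_ctx, false);
          if (rc != -EKEYEXPIRED)
                  return rc;

          /* key needs (re-)conversion: hand the request to the engine */
          rc = crypto_transfer_skcipher_request_to_engine(paes_crypto_engine, req);
          if (rc)
                  return rc;

          return -EINPROGRESS;
  }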
Requests postponed to the crypto engine are then handled via the
do_one_request() callback but may of course again run into a
not-yet-converted key or a key that has become invalid. If the key is
still not converted, the first thread does the conversion and updates
the key state in the transformation context. The conversion is
invoked via the pkey API with a new flag PKEY_XFLAG_NOMEMALLOC.
The pkey API used here - the function pkey_key2protkey() - is a new
version of this in-kernel API. The new flag PKEY_XFLAG_NOMEMALLOC
tells the pkey layer (and subsidiary layers) that it must not allocate
any memory that causes I/O operations. Note that the patches for these
pkey/zcrypt/AP extensions are currently under review and not yet
available upstream. SO THIS PATCH DOES NOT COMPILE YET !!!
This patch, together with the pkey/zcrypt/AP extensions, should
harden the paes crypto algorithms to truly meet the requirements
for in-kernel skcipher implementations and the usage patterns of
the dm-crypt and dm-integrity layers.
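For illustration, the conversion helper in patch 2 calls the new
in-kernel API roughly like this (condensed; the retry loop handles a
temporarily busy crypto card):

  static inline int convert_key(const u8 *key, unsigned int keylen,
                                struct paes_protkey *pk)
  {
          int rc, i;

          pk->len = sizeof(pk->protkey);

          /* try three times in case of busy card */
          for (rc = -EIO, i = 0; rc && i < 3; i++) {
                  if (rc == -EBUSY && msleep_interruptible((1 << i) * 100))
                          return -EINTR;
                  rc = pkey_key2protkey(key, keylen,
                                        pk->protkey, &pk->len, &pk->type,
                                        PKEY_XFLAG_NOMEMALLOC);
          }

          return rc;
  }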
There has been some confusion about the way paes treats the key
within the transformation context. The tfm context may be shared by
multiple requests running en/decryption with the same key, so the tfm
context is supposed to be read-only.
The s390 protected key is in fact an encrypted key with the
wrapping key residing in the firmware. On each invocation of a
protected key instruction the firmware unwraps the protected key and
performs the operation. Part of the protected key is a hash of the
wrapping key used, so the firmware is able to detect whether a
protected key matches the wrapping key or not. On a mismatch the
cpacf operation fails with cc 1 (key invalid). Such a situation can
occur, for example, with a KVM live guest migration to another
machine where the guest simply awakens in a new environment. As the
wrapping key is NOT transferred, all protected key cpacf operations
fail with "key invalid" after the reawakening. There are other
situations where a protected key cpacf operation may run into "key
invalid", so the code needs to be prepared for such cpacf failures.
The recovery is simple: via the pkey API the source key material (in
real cases this is usually a secure key bound to an HSM) is used to
generate a new protected key, which is then wrapped by the wrapping
key of the current firmware.
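In the code of patch 2 this shows up as a partially completed cpacf
operation: the instruction processes fewer bytes than requested. The
ecb path, for example, detects and recovers like this (condensed; the
refresh of the key copy in the param block is omitted, see the locking
sketch further down):

  while ((nbytes = walk->nbytes) != 0) {
          /* only use complete blocks */
          n = nbytes & ~(AES_BLOCK_SIZE - 1);
          k = cpacf_km(ctx->fc | req_ctx->modifier, param,
                       walk->dst.virt.addr, walk->src.virt.addr, n);
          if (k)
                  rc = skcipher_walk_done(walk, nbytes - k);
          if (k < n) {
                  /* "key invalid": fewer bytes processed than requested,
                   * so re-derive the protected key and loop to retry */
                  rc = paes_convert_key(ctx);
                  if (rc)
                          break;
          }
  }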
So the paes tfms hold the source key material to be able to
re-generate the protected key at any time. A naive implementation
would hold the protected key in some kind of running context (for
example the request context) and only store the source key in the
tfm context. But deriving the protected key from the source key is an
expensive and time-consuming process, often involving interaction
with a crypto card, and such a naive implementation would then
trigger the derivation process individually for every tfm in use.
So instead the protected key is stored in the tfm context and only
the very first process hitting the "invalid key" cc runs the
derivation and updates the protected key stored in the tfm. The only
really important point is that the protected key update and the
cloning from this value need to be done in an atomic fashion, as
sketched below.
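In code, both the update and the cloning into a request's param block
happen under the per-tfm spinlock (sketch, names as used in patch 2):

  /* writer: publish the freshly derived protected key */
  spin_lock_bh(&ctx->pk_lock);
  ctx->pk_state = PK_STATE_VALID;
  ctx->pk = pk;           /* struct copy of the new protected key */
  spin_unlock_bh(&ctx->pk_lock);

  /* reader: clone the protected key into the per-request param block */
  spin_lock_bh(&ctx->pk_lock);
  memcpy(param->key, ctx->pk.protkey, PAES_256_PROTKEY_SIZE);
  spin_unlock_bh(&ctx->pk_lock);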
Please note that there are still race conditions where the protected
key stored in the tfm may get updated with an (outdated) protected
key value. This is not an issue, and the code handles it correctly by
re-deriving the protected key once more. The only fact that matters
is that the protected key must always be in a state where the cpacf
instructions can figure out whether it is valid (the hash part of the
protected key matches the hash of the wrapping key) or invalid
(and refuse the crypto operation with "invalid key").
Changelog:
v1 - first version. Applied and tested on top of the mentioned
pkey/zcrypt/AP changes. Selftests and multithreaded testcases
executed via the AF_ALG interface ran successfully, and even
instrumented code (with some sleeps to force asynchronous paths)
ran fine.
Code is good enough for a first code review and collecting feedback.
v2 - A new patch which does a slight rework of the cpacf_pcc() inline
function to return the condition code.
A rework of the paes implementation based on feedback from Herbert
and Ingo:
- the spinlock is now consistently used to protect updates and
changes of the protected key and the protected key state within
the transformation context.
- setkey() is now synchronous
- the walk is now held in the request context and thus the
postponing of a request to the engine and later processing
can continue at exactly the same state.
- the param block needed for the cpacf instructions is constructed
once and held in the request context.
- if a request can't be handled synchronously, it is postponed
for asynchronous processing via an instance of the crypto engine.
With v2 comes a patch which updates the crypto engine documentation
in Documentation/crypto. Feel free to use it, drop it, or
do some rework - at least it needs some review.
v2 was only posted internally to collect some feedback within IBM.
v3 - Slight improvements based on feedback from Finn.
Harald Freudenberger (3):
s390/cpacf: Rework cpacf_pcc() to return condition code
s390/crypto: Rework protected key AES for true asynch support
Documentation: crypto_engine: Update and extend crypto engine doc
Documentation/crypto/crypto_engine.rst | 78 +-
arch/s390/crypto/paes_s390.c | 1725 ++++++++++++++++--------
arch/s390/include/asm/cpacf.h | 18 +-
3 files changed, 1234 insertions(+), 587 deletions(-)
--
2.43.0
* [PATCH v3 1/3] s390/cpacf: Rework cpacf_pcc() to return condition code
2025-04-01 14:50 [PATCH v3 0/3] Rework protected key AES for true asynch support Harald Freudenberger
@ 2025-04-01 14:50 ` Harald Freudenberger
2025-04-14 10:14 ` Holger Dengler
2025-04-01 14:50 ` [PATCH v3 2/3] s390/crypto: Rework protected key AES for true asynch support Harald Freudenberger
2025-04-01 14:50 ` [PATCH v3 3/3] Documentation: crypto_engine: Update and extend crypto engine doc Harald Freudenberger
2 siblings, 1 reply; 10+ messages in thread
From: Harald Freudenberger @ 2025-04-01 14:50 UTC (permalink / raw)
To: herbert, dengler, ifranzki, fcallies; +Cc: linux-crypto, linux-s390
Some of the pcc sub-functions have a protected key as
input and thus may run into the situation that this
key is invalid, for example due to live guest migration
to another physical machine.
Rework the inline assembler function cpacf_pcc() to
return the condition code (cc):
0 - cc code 0 (normal completion)
1 - cc code 1 (protected key wkvp mismatch or src operand out of range)
2 - cc code 2 (something invalid, scalar multiply infinity, ...)
Note that cc 3 (partial completion) is handled within the asm code
and never returned.
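A caller can then react to an invalid protected key and retry, roughly
as the two-key XTS parameter setup in patch 2 does (condensed sketch;
tweak setup and the second key copy are omitted):

  do {
          spin_lock_bh(&ctx->pk_lock);
          memcpy(pcc_param.key + offset, ctx->pk[1].protkey, keylen);
          spin_unlock_bh(&ctx->pk_lock);
          cc = cpacf_pcc(ctx->fc, pcc_param.key + offset);
          if (cc) {
                  /* typically cc 1: the protected key no longer matches
                   * the firmware wrapping key - re-derive and retry */
                  rc = pxts_convert_key(ctx);
                  if (rc)
                          return rc;
          }
  } while (cc);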
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
---
arch/s390/include/asm/cpacf.h | 18 +++++++++++++++---
1 file changed, 15 insertions(+), 3 deletions(-)
diff --git a/arch/s390/include/asm/cpacf.h b/arch/s390/include/asm/cpacf.h
index 59ab1192e2d5..54cb97603ec0 100644
--- a/arch/s390/include/asm/cpacf.h
+++ b/arch/s390/include/asm/cpacf.h
@@ -649,18 +649,30 @@ static inline void cpacf_trng(u8 *ucbuf, unsigned long ucbuf_len,
* instruction
* @func: the function code passed to PCC; see CPACF_KM_xxx defines
* @param: address of parameter block; see POP for details on each func
+ *
+ * Returns the condition code, this is
+ * 0 - cc code 0 (normal completion)
+ * 1 - cc code 1 (protected key wkvp mismatch or src operand out of range)
+ * 2 - cc code 2 (something invalid, scalar multiply infinity, ...)
+ * Condition code 3 (partial completion) is handled within the asm code
+ * and never returned.
*/
-static inline void cpacf_pcc(unsigned long func, void *param)
+static inline int cpacf_pcc(unsigned long func, void *param)
{
+ int cc;
+
asm volatile(
" lgr 0,%[fc]\n"
" lgr 1,%[pba]\n"
"0: .insn rre,%[opc] << 16,0,0\n" /* PCC opcode */
" brc 1,0b\n" /* handle partial completion */
- :
+ CC_IPM(cc)
+ : CC_OUT(cc, cc)
: [fc] "d" (func), [pba] "d" ((unsigned long)param),
[opc] "i" (CPACF_PCC)
- : "cc", "memory", "0", "1");
+ : CC_CLOBBER_LIST("memory", "0", "1"));
+
+ return CC_TRANSFORM(cc);
}
/**
--
2.43.0
* [PATCH v3 2/3] s390/crypto: Rework protected key AES for true asynch support
2025-04-01 14:50 [PATCH v3 0/3] Rework protected key AES for true asynch support Harald Freudenberger
2025-04-01 14:50 ` [PATCH v3 1/3] s390/cpacf: Rework cpacf_pcc() to return condition code Harald Freudenberger
@ 2025-04-01 14:50 ` Harald Freudenberger
2025-04-02 4:04 ` Herbert Xu
2025-04-25 14:56 ` Holger Dengler
2025-04-01 14:50 ` [PATCH v3 3/3] Documentation: crypto_engine: Update and extend crypto engine doc Harald Freudenberger
2 siblings, 2 replies; 10+ messages in thread
From: Harald Freudenberger @ 2025-04-01 14:50 UTC (permalink / raw)
To: herbert, dengler, ifranzki, fcallies; +Cc: linux-crypto, linux-s390
This is a complete rework of the protected key AES (PAES) implementation.
The goal of this rework is to implement the 4 modes (ecb, cbc, ctr, xts)
in a truly asynchronous fashion:
- init(), exit() and setkey() are synchronous and don't allocate any memory.
- the encrypt/decrypt functions first try to do the job in a synchronous
manner. If this fails, for example because the protected key has become
invalid due to a guest suspend/resume or guest migration action, the
encrypt/decrypt request is transferred to an instance of the crypto
engine (see below) for asynchronous processing.
Requests postponed to the crypto engine are then handled via the
do_one_request() callback but may of course again run into a
not-yet-converted key or a key that has become invalid. If the key is
still not converted, the first thread does the conversion and updates
the key state in the transformation context. The conversion is
invoked via the pkey API with a new flag PKEY_XFLAG_NOMEMALLOC.
The pkey API used here - the function pkey_key2protkey() - is a new
version of this in-kernel API. The new flag PKEY_XFLAG_NOMEMALLOC
tells the pkey layer (and subsidiary layers) that it must not allocate
any memory that causes I/O operations. Note that the patches for these
pkey/zcrypt/AP extensions are currently under review and not yet
available upstream. SO THIS PATCH DOES NOT COMPILE YET.
This patch, together with the pkey/zcrypt/AP extensions, should
harden the paes crypto algorithms to truly meet the requirements
for in-kernel skcipher implementations and the usage patterns of
the dm-crypt and dm-integrity layers.
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
---
arch/s390/crypto/paes_s390.c | 1725 +++++++++++++++++++++++-----------
1 file changed, 1183 insertions(+), 542 deletions(-)
diff --git a/arch/s390/crypto/paes_s390.c b/arch/s390/crypto/paes_s390.c
index 646cbbf0678d..1d1f1a98ec4d 100644
--- a/arch/s390/crypto/paes_s390.c
+++ b/arch/s390/crypto/paes_s390.c
@@ -5,7 +5,7 @@
* s390 implementation of the AES Cipher Algorithm with protected keys.
*
* s390 Version:
- * Copyright IBM Corp. 2017, 2023
+ * Copyright IBM Corp. 2017, 2025
* Author(s): Martin Schwidefsky <schwidefsky@de.ibm.com>
* Harald Freudenberger <freude@de.ibm.com>
*/
@@ -13,16 +13,17 @@
#define KMSG_COMPONENT "paes_s390"
#define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
-#include <crypto/aes.h>
-#include <crypto/algapi.h>
-#include <linux/bug.h>
-#include <linux/err.h>
-#include <linux/module.h>
#include <linux/cpufeature.h>
+#include <linux/delay.h>
+#include <linux/err.h>
#include <linux/init.h>
+#include <linux/miscdevice.h>
+#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/spinlock.h>
-#include <linux/delay.h>
+#include <crypto/aes.h>
+#include <crypto/algapi.h>
+#include <crypto/engine.h>
#include <crypto/internal/skcipher.h>
#include <crypto/xts.h>
#include <asm/cpacf.h>
@@ -44,23 +45,55 @@ static DEFINE_MUTEX(ctrblk_lock);
static cpacf_mask_t km_functions, kmc_functions, kmctr_functions;
+static struct crypto_engine *paes_crypto_engine;
+#define MAX_QLEN 10
+
+/*
+ * protected key specific stuff
+ */
+
struct paes_protkey {
u32 type;
u32 len;
u8 protkey[PXTS_256_PROTKEY_SIZE];
};
-struct key_blob {
- /*
- * Small keys will be stored in the keybuf. Larger keys are
- * stored in extra allocated memory. In both cases does
- * key point to the memory where the key is stored.
- * The code distinguishes by checking keylen against
- * sizeof(keybuf). See the two following helper functions.
- */
- u8 *key;
- u8 keybuf[128];
+#define PK_STATE_NO_KEY 0
+#define PK_STATE_CONVERT_IN_PROGRESS 1
+#define PK_STATE_VALID 2
+
+struct s390_paes_ctx {
+ /* source key material used to derive a protected key from */
+ u8 keybuf[PAES_MAX_KEYSIZE];
+ unsigned int keylen;
+
+ /* cpacf function code to use with this protected key type */
+ long fc;
+
+ /* spinlock to atomically read/update all the following fields */
+ spinlock_t pk_lock;
+
+ /* see PK_STATE* defines above, < 0 holds convert failure rc */
+ int pk_state;
+ /* if state is valid, pk holds the protected key */
+ struct paes_protkey pk;
+};
+
+struct s390_pxts_ctx {
+ /* source key material used to derive a protected key from */
+ u8 keybuf[2 * PAES_MAX_KEYSIZE];
unsigned int keylen;
+
+ /* cpacf function code to use with this protected key type */
+ long fc;
+
+ /* spinlock to atomically read/update all the following fields */
+ spinlock_t pk_lock;
+
+ /* see PK_STATE* defines above, < 0 holds convert failure rc */
+ int pk_state;
+ /* if state is valid, pk[] hold(s) the protected key(s) */
+ struct paes_protkey pk[2];
};
/*
@@ -89,214 +122,344 @@ static inline u32 make_clrkey_token(const u8 *ck, size_t cklen, u8 *dest)
return sizeof(*token) + cklen;
}
-static inline int _key_to_kb(struct key_blob *kb,
- const u8 *key,
- unsigned int keylen)
+/*
+ * key_to_ctx() - Set key value into context, maybe construct
+ * a clear key token digestible by pkey from a clear key value.
+ */
+static inline int key_to_ctx(struct s390_paes_ctx *ctx,
+ const u8 *key, unsigned int keylen)
{
+ if (keylen > sizeof(ctx->keybuf))
+ return -EINVAL;
+
switch (keylen) {
case 16:
case 24:
case 32:
/* clear key value, prepare pkey clear key token in keybuf */
- memset(kb->keybuf, 0, sizeof(kb->keybuf));
- kb->keylen = make_clrkey_token(key, keylen, kb->keybuf);
- kb->key = kb->keybuf;
+ memset(ctx->keybuf, 0, sizeof(ctx->keybuf));
+ ctx->keylen = make_clrkey_token(key, keylen, ctx->keybuf);
break;
default:
/* other key material, let pkey handle this */
- if (keylen <= sizeof(kb->keybuf))
- kb->key = kb->keybuf;
- else {
- kb->key = kmalloc(keylen, GFP_KERNEL);
- if (!kb->key)
- return -ENOMEM;
- }
- memcpy(kb->key, key, keylen);
- kb->keylen = keylen;
+ memcpy(ctx->keybuf, key, keylen);
+ ctx->keylen = keylen;
break;
}
return 0;
}
-static inline int _xts_key_to_kb(struct key_blob *kb,
- const u8 *key,
- unsigned int keylen)
+/*
+ * xts_key_to_ctx() - Set key value into context, maybe construct
+ * a clear key token digestible by pkey from a clear key value.
+ */
+static inline int xts_key_to_ctx(struct s390_pxts_ctx *ctx,
+ const u8 *key, unsigned int keylen)
{
size_t cklen = keylen / 2;
- memset(kb->keybuf, 0, sizeof(kb->keybuf));
+ if (keylen > sizeof(ctx->keybuf))
+ return -EINVAL;
switch (keylen) {
case 32:
case 64:
/* clear key value, prepare pkey clear key tokens in keybuf */
- kb->key = kb->keybuf;
- kb->keylen = make_clrkey_token(key, cklen, kb->key);
- kb->keylen += make_clrkey_token(key + cklen, cklen,
- kb->key + kb->keylen);
+ memset(ctx->keybuf, 0, sizeof(ctx->keybuf));
+ ctx->keylen = make_clrkey_token(key, cklen, ctx->keybuf);
+ ctx->keylen += make_clrkey_token(key + cklen, cklen,
+ ctx->keybuf + ctx->keylen);
break;
default:
/* other key material, let pkey handle this */
- if (keylen <= sizeof(kb->keybuf)) {
- kb->key = kb->keybuf;
- } else {
- kb->key = kmalloc(keylen, GFP_KERNEL);
- if (!kb->key)
- return -ENOMEM;
- }
- memcpy(kb->key, key, keylen);
- kb->keylen = keylen;
+ memcpy(ctx->keybuf, key, keylen);
+ ctx->keylen = keylen;
break;
}
return 0;
}
-static inline void _free_kb_keybuf(struct key_blob *kb)
+/*
+ * Convert the raw key material into a protected key via PKEY api.
+ * This function may sleep - don't call in non-sleeping context.
+ */
+static inline int convert_key(const u8 *key, unsigned int keylen,
+ struct paes_protkey *pk)
{
- if (kb->key && kb->key != kb->keybuf
- && kb->keylen > sizeof(kb->keybuf)) {
- kfree_sensitive(kb->key);
- kb->key = NULL;
- }
- memzero_explicit(kb->keybuf, sizeof(kb->keybuf));
-}
-
-struct s390_paes_ctx {
- struct key_blob kb;
- struct paes_protkey pk;
- spinlock_t pk_lock;
- unsigned long fc;
-};
-
-struct s390_pxts_ctx {
- struct key_blob kb;
- struct paes_protkey pk[2];
- spinlock_t pk_lock;
- unsigned long fc;
-};
+ int rc, i;
-static inline int __paes_keyblob2pkey(const u8 *key, unsigned int keylen,
- struct paes_protkey *pk)
-{
- int i, rc = -EIO;
+ pk->len = sizeof(pk->protkey);
/* try three times in case of busy card */
- for (i = 0; rc && i < 3; i++) {
- if (rc == -EBUSY && in_task()) {
- if (msleep_interruptible(1000))
- return -EINTR;
+ for (rc = -EIO, i = 0; rc && i < 3; i++) {
+ if (rc == -EBUSY && msleep_interruptible((1 << i) * 100)) {
+ rc = -EINTR;
+ goto out;
}
- rc = pkey_key2protkey(key, keylen, pk->protkey, &pk->len,
- &pk->type, PKEY_XFLAG_NOMEMALLOC);
+ rc = pkey_key2protkey(key, keylen,
+ pk->protkey, &pk->len, &pk->type,
+ PKEY_XFLAG_NOMEMALLOC);
}
+out:
+ pr_debug("rc=%d\n", rc);
return rc;
}
-static inline int __paes_convert_key(struct s390_paes_ctx *ctx)
+/*
+ * (Re-)Convert the raw key material from the ctx into a
+ * protected key via convert_key() function. Update the
+ * pk_state, pk_type, pk_len and the protected key in
+ * the param block.
+ */
+static int paes_convert_key(struct s390_paes_ctx *ctx)
{
struct paes_protkey pk;
int rc;
- pk.len = sizeof(pk.protkey);
- rc = __paes_keyblob2pkey(ctx->kb.key, ctx->kb.keylen, &pk);
- if (rc)
- return rc;
+ spin_lock_bh(&ctx->pk_lock);
+ ctx->pk_state = PK_STATE_CONVERT_IN_PROGRESS;
+ spin_unlock_bh(&ctx->pk_lock);
+ rc = convert_key(ctx->keybuf, ctx->keylen, &pk);
+
+ /* update context */
spin_lock_bh(&ctx->pk_lock);
- memcpy(&ctx->pk, &pk, sizeof(pk));
+ if (rc) {
+ ctx->pk_state = rc;
+ } else {
+ ctx->pk_state = PK_STATE_VALID;
+ ctx->pk = pk;
+ }
spin_unlock_bh(&ctx->pk_lock);
- return 0;
+ memzero_explicit(&pk, sizeof(pk));
+ pr_debug("rc=%d\n", rc);
+ return rc;
}
-static int ecb_paes_init(struct crypto_skcipher *tfm)
+/*
+ * (Re-)Convert the raw xts key material from the ctx into
+ * a protected key via convert_key() function. Update the
+ * pk_state, pk_type, pk_len and the protected key in
+ * the param block.
+ */
+static int pxts_convert_key(struct s390_pxts_ctx *ctx)
{
- struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct paes_protkey pk0, pk1;
+ size_t split_keylen;
+ int rc;
- ctx->kb.key = NULL;
- spin_lock_init(&ctx->pk_lock);
+ spin_lock_bh(&ctx->pk_lock);
+ ctx->pk_state = PK_STATE_CONVERT_IN_PROGRESS;
+ spin_unlock_bh(&ctx->pk_lock);
- return 0;
-}
+ rc = convert_key(ctx->keybuf, ctx->keylen, &pk0);
+ if (rc)
+ goto out;
-static void ecb_paes_exit(struct crypto_skcipher *tfm)
-{
- struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
+ switch (pk0.type) {
+ case PKEY_KEYTYPE_AES_128:
+ case PKEY_KEYTYPE_AES_256:
+ /* second keytoken required */
+ if (ctx->keylen % 2) {
+ rc = -EINVAL;
+ goto out;
+ }
+ split_keylen = ctx->keylen / 2;
+ rc = convert_key(ctx->keybuf + split_keylen,
+ split_keylen, &pk1);
+ if (rc)
+ goto out;
+ if (pk0.type != pk1.type) {
+ rc = -EINVAL;
+ goto out;
+ }
+ break;
+ case PKEY_KEYTYPE_AES_XTS_128:
+ case PKEY_KEYTYPE_AES_XTS_256:
+ /* single key */
+ pk1.type = 0;
+ break;
+ default:
+ /* unsupported protected keytype */
+ rc = -EINVAL;
+ goto out;
+ }
+
+out:
+ /* update context */
+ spin_lock_bh(&ctx->pk_lock);
+ if (rc) {
+ ctx->pk_state = rc;
+ } else {
+ ctx->pk_state = PK_STATE_VALID;
+ ctx->pk[0] = pk0;
+ ctx->pk[1] = pk1;
+ }
+ spin_unlock_bh(&ctx->pk_lock);
- _free_kb_keybuf(&ctx->kb);
+ memzero_explicit(&pk0, sizeof(pk0));
+ memzero_explicit(&pk1, sizeof(pk1));
+ pr_debug("rc=%d\n", rc);
+ return rc;
}
-static inline int __ecb_paes_set_key(struct s390_paes_ctx *ctx)
+/*
+ * PAES ECB implementation
+ */
+
+struct ecb_param {
+ u8 key[PAES_256_PROTKEY_SIZE];
+} __packed;
+
+struct s390_pecb_req_ctx {
+ unsigned long modifier;
+ struct skcipher_walk walk;
+ bool param_init_done;
+ struct ecb_param param;
+};
+
+static int ecb_paes_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
+ unsigned int key_len)
{
- unsigned long fc;
+ struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
+ long fc;
int rc;
- rc = __paes_convert_key(ctx);
+ /* set raw key into context */
+ rc = key_to_ctx(ctx, in_key, key_len);
if (rc)
- return rc;
+ goto out;
- /* Pick the correct function code based on the protected key type */
- fc = (ctx->pk.type == PKEY_KEYTYPE_AES_128) ? CPACF_KM_PAES_128 :
- (ctx->pk.type == PKEY_KEYTYPE_AES_192) ? CPACF_KM_PAES_192 :
- (ctx->pk.type == PKEY_KEYTYPE_AES_256) ? CPACF_KM_PAES_256 : 0;
+ /* convert key into protected key */
+ rc = paes_convert_key(ctx);
+ if (rc)
+ goto out;
- /* Check if the function code is available */
+ /* Pick the correct function code based on the protected key type */
+ switch (ctx->pk.type) {
+ case PKEY_KEYTYPE_AES_128:
+ fc = CPACF_KM_PAES_128;
+ break;
+ case PKEY_KEYTYPE_AES_192:
+ fc = CPACF_KM_PAES_192;
+ break;
+ case PKEY_KEYTYPE_AES_256:
+ fc = CPACF_KM_PAES_256;
+ break;
+ default:
+ fc = 0;
+ break;
+ }
ctx->fc = (fc && cpacf_test_func(&km_functions, fc)) ? fc : 0;
- return ctx->fc ? 0 : -EINVAL;
+ rc = fc ? 0 : -EINVAL;
+
+out:
+ pr_debug("rc=%d\n", rc);
+ return rc;
}
-static int ecb_paes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
- unsigned int key_len)
+static int ecb_paes_do_crypt(struct s390_paes_ctx *ctx,
+ struct s390_pecb_req_ctx *req_ctx,
+ bool maysleep)
{
- struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
- int rc;
+ struct ecb_param *param = &req_ctx->param;
+ struct skcipher_walk *walk = &req_ctx->walk;
+ unsigned int nbytes, n, k;
+ int pk_state, rc;
+
+ if (!req_ctx->param_init_done) {
+ /* fetch and check protected key state */
+ spin_lock_bh(&ctx->pk_lock);
+ pk_state = ctx->pk_state;
+ memcpy(param->key, ctx->pk.protkey, PAES_256_PROTKEY_SIZE);
+ spin_unlock_bh(&ctx->pk_lock);
+ switch (pk_state) {
+ case PK_STATE_NO_KEY:
+ rc = -ENOKEY;
+ goto out;
+ case PK_STATE_CONVERT_IN_PROGRESS:
+ rc = -EKEYEXPIRED;
+ goto out;
+ case PK_STATE_VALID:
+ req_ctx->param_init_done = true;
+ break;
+ default:
+ rc = pk_state < 0 ? pk_state : -EIO;
+ goto out;
+ }
+ }
- _free_kb_keybuf(&ctx->kb);
- rc = _key_to_kb(&ctx->kb, in_key, key_len);
- if (rc)
- return rc;
+ rc = 0;
+
+ /* always walk on the ... */
+ while ((nbytes = walk->nbytes) != 0) {
+ /* only use complete blocks */
+ n = nbytes & ~(AES_BLOCK_SIZE - 1);
+ k = cpacf_km(ctx->fc | req_ctx->modifier, param,
+ walk->dst.virt.addr, walk->src.virt.addr, n);
+ if (k)
+ rc = skcipher_walk_done(walk, nbytes - k);
+ if (k < n) {
+ if (!maysleep) {
+ rc = -EKEYEXPIRED;
+ goto out;
+ }
+ rc = paes_convert_key(ctx);
+ if (rc)
+ goto out;
+ spin_lock_bh(&ctx->pk_lock);
+ memcpy(param->key, ctx->pk.protkey, PAES_256_PROTKEY_SIZE);
+ spin_unlock_bh(&ctx->pk_lock);
+ }
+ }
- return __ecb_paes_set_key(ctx);
+out:
+ pr_debug("rc=%d\n", rc);
+ return rc;
}
static int ecb_paes_crypt(struct skcipher_request *req, unsigned long modifier)
{
+ struct s390_pecb_req_ctx *req_ctx = skcipher_request_ctx(req);
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
- struct {
- u8 key[PAES_256_PROTKEY_SIZE];
- } param;
- struct skcipher_walk walk;
- unsigned int nbytes, n, k;
+ struct skcipher_walk *walk = &req_ctx->walk;
int rc;
- rc = skcipher_walk_virt(&walk, req, false);
+ /*
+ * First try synchronous. If this fails for any reason
+ * schedule this request asynchronous via crypto engine.
+ */
+
+ rc = skcipher_walk_virt(walk, req, false);
if (rc)
- return rc;
+ goto out;
- spin_lock_bh(&ctx->pk_lock);
- memcpy(param.key, ctx->pk.protkey, PAES_256_PROTKEY_SIZE);
- spin_unlock_bh(&ctx->pk_lock);
+ req_ctx->modifier = modifier;
+ req_ctx->param_init_done = false;
- while ((nbytes = walk.nbytes) != 0) {
- /* only use complete blocks */
- n = nbytes & ~(AES_BLOCK_SIZE - 1);
- k = cpacf_km(ctx->fc | modifier, &param,
- walk.dst.virt.addr, walk.src.virt.addr, n);
- if (k)
- rc = skcipher_walk_done(&walk, nbytes - k);
- if (k < n) {
- if (__paes_convert_key(ctx))
- return skcipher_walk_done(&walk, -EIO);
- spin_lock_bh(&ctx->pk_lock);
- memcpy(param.key, ctx->pk.protkey, PAES_256_PROTKEY_SIZE);
- spin_unlock_bh(&ctx->pk_lock);
- }
+ rc = ecb_paes_do_crypt(ctx, req_ctx, false);
+ if (rc != -EKEYEXPIRED) {
+ if (rc)
+ skcipher_walk_done(walk, rc);
+ goto out;
}
+
+ rc = crypto_transfer_skcipher_request_to_engine(paes_crypto_engine, req);
+ if (rc)
+ goto out;
+
+ rc = -EINPROGRESS;
+
+out:
+ if (rc != -EINPROGRESS)
+ memzero_explicit(&req_ctx->param, sizeof(req_ctx->param));
+ pr_debug("rc=%d\n", rc);
return rc;
}
@@ -310,112 +473,242 @@ static int ecb_paes_decrypt(struct skcipher_request *req)
return ecb_paes_crypt(req, CPACF_DECRYPT);
}
-static struct skcipher_alg ecb_paes_alg = {
- .base.cra_name = "ecb(paes)",
- .base.cra_driver_name = "ecb-paes-s390",
- .base.cra_priority = 401, /* combo: aes + ecb + 1 */
- .base.cra_blocksize = AES_BLOCK_SIZE,
- .base.cra_ctxsize = sizeof(struct s390_paes_ctx),
- .base.cra_module = THIS_MODULE,
- .base.cra_list = LIST_HEAD_INIT(ecb_paes_alg.base.cra_list),
- .init = ecb_paes_init,
- .exit = ecb_paes_exit,
- .min_keysize = PAES_MIN_KEYSIZE,
- .max_keysize = PAES_MAX_KEYSIZE,
- .setkey = ecb_paes_set_key,
- .encrypt = ecb_paes_encrypt,
- .decrypt = ecb_paes_decrypt,
-};
-
-static int cbc_paes_init(struct crypto_skcipher *tfm)
+static int ecb_paes_init(struct crypto_skcipher *tfm)
{
struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
- ctx->kb.key = NULL;
+ memset(ctx, 0, sizeof(*ctx));
spin_lock_init(&ctx->pk_lock);
+ crypto_skcipher_set_reqsize(tfm, sizeof(struct s390_pecb_req_ctx));
+
return 0;
}
-static void cbc_paes_exit(struct crypto_skcipher *tfm)
+static void ecb_paes_exit(struct crypto_skcipher *tfm)
{
struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
- _free_kb_keybuf(&ctx->kb);
+ memzero_explicit(ctx, sizeof(*ctx));
}
-static inline int __cbc_paes_set_key(struct s390_paes_ctx *ctx)
+static int ecb_paes_do_one_request(struct crypto_engine *engine, void *areq)
{
- unsigned long fc;
+ struct skcipher_request *req = skcipher_request_cast(areq);
+ struct s390_pecb_req_ctx *req_ctx = skcipher_request_ctx(req);
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct skcipher_walk *walk = &req_ctx->walk;
int rc;
- rc = __paes_convert_key(ctx);
- if (rc)
- return rc;
-
- /* Pick the correct function code based on the protected key type */
- fc = (ctx->pk.type == PKEY_KEYTYPE_AES_128) ? CPACF_KMC_PAES_128 :
- (ctx->pk.type == PKEY_KEYTYPE_AES_192) ? CPACF_KMC_PAES_192 :
- (ctx->pk.type == PKEY_KEYTYPE_AES_256) ? CPACF_KMC_PAES_256 : 0;
+ /* walk has already been prepared */
- /* Check if the function code is available */
- ctx->fc = (fc && cpacf_test_func(&kmc_functions, fc)) ? fc : 0;
+ rc = ecb_paes_do_crypt(ctx, req_ctx, true);
+ if (rc != -EKEYEXPIRED) {
+ if (rc)
+ skcipher_walk_done(walk, rc);
+ goto complete;
+ }
- return ctx->fc ? 0 : -EINVAL;
+ /*
+ * Protected key expired, conversion is in progress.
+ * Trigger a re-schedule of this request by returning
+ * -ENOSPC ("hardware queue is full") to the crypto engine.
+ * To avoid immediate re-invocation of this callback,
+ * tell the scheduler to voluntarily give up the CPU here.
+ */
+ yield();
+ pr_debug("rescheduling request\n");
+ return -ENOSPC;
+
+complete:
+ memzero_explicit(&req_ctx->param, sizeof(req_ctx->param));
+ pr_debug("request complete with rc=%d\n", rc);
+ local_bh_disable();
+ crypto_finalize_skcipher_request(engine, req, rc);
+ local_bh_enable();
+ return rc;
}
-static int cbc_paes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
- unsigned int key_len)
+static struct skcipher_engine_alg ecb_paes_alg = {
+ .base = {
+ .base.cra_name = "ecb(paes)",
+ .base.cra_driver_name = "ecb-paes-s390",
+ .base.cra_priority = 401, /* combo: aes + ecb + 1 */
+ .base.cra_blocksize = AES_BLOCK_SIZE,
+ .base.cra_ctxsize = sizeof(struct s390_paes_ctx),
+ .base.cra_module = THIS_MODULE,
+ .base.cra_list = LIST_HEAD_INIT(ecb_paes_alg.base.base.cra_list),
+ .init = ecb_paes_init,
+ .exit = ecb_paes_exit,
+ .min_keysize = PAES_MIN_KEYSIZE,
+ .max_keysize = PAES_MAX_KEYSIZE,
+ .setkey = ecb_paes_setkey,
+ .encrypt = ecb_paes_encrypt,
+ .decrypt = ecb_paes_decrypt,
+ },
+ .op = {
+ .do_one_request = ecb_paes_do_one_request,
+ },
+};
+
+/*
+ * PAES CBC implementation
+ */
+
+struct cbc_param {
+ u8 iv[AES_BLOCK_SIZE];
+ u8 key[PAES_256_PROTKEY_SIZE];
+} __packed;
+
+struct s390_pcbc_req_ctx {
+ unsigned long modifier;
+ struct skcipher_walk walk;
+ bool param_init_done;
+ struct cbc_param param;
+};
+
+static int cbc_paes_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
+ unsigned int key_len)
{
struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
+ long fc;
int rc;
- _free_kb_keybuf(&ctx->kb);
- rc = _key_to_kb(&ctx->kb, in_key, key_len);
+ /* set raw key into context */
+ rc = key_to_ctx(ctx, in_key, key_len);
if (rc)
- return rc;
+ goto out;
+
+ /* convert raw key into protected key */
+ rc = paes_convert_key(ctx);
+ if (rc)
+ goto out;
+
+ /* Pick the correct function code based on the protected key type */
+ switch (ctx->pk.type) {
+ case PKEY_KEYTYPE_AES_128:
+ fc = CPACF_KMC_PAES_128;
+ break;
+ case PKEY_KEYTYPE_AES_192:
+ fc = CPACF_KMC_PAES_192;
+ break;
+ case PKEY_KEYTYPE_AES_256:
+ fc = CPACF_KMC_PAES_256;
+ break;
+ default:
+ fc = 0;
+ break;
+ }
+ ctx->fc = (fc && cpacf_test_func(&kmc_functions, fc)) ? fc : 0;
- return __cbc_paes_set_key(ctx);
+ rc = fc ? 0 : -EINVAL;
+
+out:
+ pr_debug("rc=%d\n", rc);
+ return rc;
}
-static int cbc_paes_crypt(struct skcipher_request *req, unsigned long modifier)
+static int cbc_paes_do_crypt(struct s390_paes_ctx *ctx,
+ struct s390_pcbc_req_ctx *req_ctx,
+ bool maysleep)
{
- struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
- struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
- struct {
- u8 iv[AES_BLOCK_SIZE];
- u8 key[PAES_256_PROTKEY_SIZE];
- } param;
- struct skcipher_walk walk;
+ struct cbc_param *param = &req_ctx->param;
+ struct skcipher_walk *walk = &req_ctx->walk;
unsigned int nbytes, n, k;
- int rc;
-
- rc = skcipher_walk_virt(&walk, req, false);
- if (rc)
- return rc;
+ int pk_state, rc;
+
+ if (!req_ctx->param_init_done) {
+ /* fetch and check protected key state */
+ spin_lock_bh(&ctx->pk_lock);
+ pk_state = ctx->pk_state;
+ memcpy(param->key, ctx->pk.protkey, PAES_256_PROTKEY_SIZE);
+ spin_unlock_bh(&ctx->pk_lock);
+ switch (pk_state) {
+ case PK_STATE_NO_KEY:
+ rc = -ENOKEY;
+ goto out;
+ case PK_STATE_CONVERT_IN_PROGRESS:
+ rc = -EKEYEXPIRED;
+ goto out;
+ case PK_STATE_VALID:
+ req_ctx->param_init_done = true;
+ break;
+ default:
+ rc = pk_state < 0 ? pk_state : -EIO;
+ goto out;
+ }
+ }
- memcpy(param.iv, walk.iv, AES_BLOCK_SIZE);
- spin_lock_bh(&ctx->pk_lock);
- memcpy(param.key, ctx->pk.protkey, PAES_256_PROTKEY_SIZE);
- spin_unlock_bh(&ctx->pk_lock);
+ memcpy(param->iv, walk->iv, AES_BLOCK_SIZE);
+ rc = 0;
- while ((nbytes = walk.nbytes) != 0) {
+ /* always walk on the ... */
+ while ((nbytes = walk->nbytes) != 0) {
/* only use complete blocks */
n = nbytes & ~(AES_BLOCK_SIZE - 1);
- k = cpacf_kmc(ctx->fc | modifier, &param,
- walk.dst.virt.addr, walk.src.virt.addr, n);
+ k = cpacf_kmc(ctx->fc | req_ctx->modifier, param,
+ walk->dst.virt.addr, walk->src.virt.addr, n);
if (k) {
- memcpy(walk.iv, param.iv, AES_BLOCK_SIZE);
- rc = skcipher_walk_done(&walk, nbytes - k);
+ memcpy(walk->iv, param->iv, AES_BLOCK_SIZE);
+ rc = skcipher_walk_done(walk, nbytes - k);
}
if (k < n) {
- if (__paes_convert_key(ctx))
- return skcipher_walk_done(&walk, -EIO);
+ if (!maysleep) {
+ rc = -EKEYEXPIRED;
+ goto out;
+ }
+ rc = paes_convert_key(ctx);
+ if (rc)
+ goto out;
spin_lock_bh(&ctx->pk_lock);
- memcpy(param.key, ctx->pk.protkey, PAES_256_PROTKEY_SIZE);
+ memcpy(param->key, ctx->pk.protkey, PAES_256_PROTKEY_SIZE);
spin_unlock_bh(&ctx->pk_lock);
}
}
+
+out:
+ pr_debug("rc=%d\n", rc);
+ return rc;
+}
+
+static int cbc_paes_crypt(struct skcipher_request *req, unsigned long modifier)
+{
+ struct s390_pcbc_req_ctx *req_ctx = skcipher_request_ctx(req);
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct skcipher_walk *walk = &req_ctx->walk;
+ int rc;
+
+ /*
+ * First try synchronous. If this fails for any reason
+ * schedule this request asynchronous via crypto engine.
+ */
+
+ rc = skcipher_walk_virt(walk, req, false);
+ if (rc)
+ goto out;
+
+ req_ctx->modifier = modifier;
+ req_ctx->param_init_done = false;
+
+ rc = cbc_paes_do_crypt(ctx, req_ctx, false);
+ if (rc != -EKEYEXPIRED) {
+ if (rc)
+ skcipher_walk_done(walk, rc);
+ goto out;
+ }
+
+ rc = crypto_transfer_skcipher_request_to_engine(paes_crypto_engine, req);
+ if (rc)
+ goto out;
+
+ rc = -EINPROGRESS;
+
+out:
+ if (rc != -EINPROGRESS)
+ memzero_explicit(&req_ctx->param, sizeof(req_ctx->param));
+ pr_debug("rc=%d\n", rc);
return rc;
}
@@ -429,496 +722,839 @@ static int cbc_paes_decrypt(struct skcipher_request *req)
return cbc_paes_crypt(req, CPACF_DECRYPT);
}
-static struct skcipher_alg cbc_paes_alg = {
- .base.cra_name = "cbc(paes)",
- .base.cra_driver_name = "cbc-paes-s390",
- .base.cra_priority = 402, /* ecb-paes-s390 + 1 */
- .base.cra_blocksize = AES_BLOCK_SIZE,
- .base.cra_ctxsize = sizeof(struct s390_paes_ctx),
- .base.cra_module = THIS_MODULE,
- .base.cra_list = LIST_HEAD_INIT(cbc_paes_alg.base.cra_list),
- .init = cbc_paes_init,
- .exit = cbc_paes_exit,
- .min_keysize = PAES_MIN_KEYSIZE,
- .max_keysize = PAES_MAX_KEYSIZE,
- .ivsize = AES_BLOCK_SIZE,
- .setkey = cbc_paes_set_key,
- .encrypt = cbc_paes_encrypt,
- .decrypt = cbc_paes_decrypt,
-};
-
-static int xts_paes_init(struct crypto_skcipher *tfm)
+static int cbc_paes_init(struct crypto_skcipher *tfm)
{
- struct s390_pxts_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
- ctx->kb.key = NULL;
+ memset(ctx, 0, sizeof(*ctx));
spin_lock_init(&ctx->pk_lock);
+ crypto_skcipher_set_reqsize(tfm, sizeof(struct s390_pcbc_req_ctx));
+
return 0;
}
-static void xts_paes_exit(struct crypto_skcipher *tfm)
+static void cbc_paes_exit(struct crypto_skcipher *tfm)
{
- struct s390_pxts_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
- _free_kb_keybuf(&ctx->kb);
+ memzero_explicit(ctx, sizeof(*ctx));
}
-static inline int __xts_paes_convert_key(struct s390_pxts_ctx *ctx)
+static int cbc_paes_do_one_request(struct crypto_engine *engine, void *areq)
{
- struct paes_protkey pk0, pk1;
- size_t split_keylen;
+ struct skcipher_request *req = skcipher_request_cast(areq);
+ struct s390_pcbc_req_ctx *req_ctx = skcipher_request_ctx(req);
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct skcipher_walk *walk = &req_ctx->walk;
int rc;
- pk0.len = sizeof(pk0.protkey);
- pk1.len = sizeof(pk1.protkey);
+ /* walk has already been prepared */
- rc = __paes_keyblob2pkey(ctx->kb.key, ctx->kb.keylen, &pk0);
- if (rc)
- return rc;
+ rc = cbc_paes_do_crypt(ctx, req_ctx, true);
+ if (rc != -EKEYEXPIRED) {
+ if (rc)
+ skcipher_walk_done(walk, rc);
+ goto complete;
+ }
- switch (pk0.type) {
- case PKEY_KEYTYPE_AES_128:
- case PKEY_KEYTYPE_AES_256:
- /* second keytoken required */
- if (ctx->kb.keylen % 2)
- return -EINVAL;
- split_keylen = ctx->kb.keylen / 2;
+ /*
+ * Protected key expired, conversion is in progress.
+ * Trigger a re-schedule of this request by returning
+ * -ENOSPC ("hardware queue is full") to the crypto engine.
+ * To avoid immediate re-invocation of this callback,
+ * tell the scheduler to voluntarily give up the CPU here.
+ */
+ yield();
+ pr_debug("rescheduling request\n");
+ return -ENOSPC;
+
+complete:
+ memzero_explicit(&req_ctx->param, sizeof(req_ctx->param));
+ pr_debug("request complete with rc=%d\n", rc);
+ local_bh_disable();
+ crypto_finalize_skcipher_request(engine, req, rc);
+ local_bh_enable();
+ return rc;
+}
- rc = __paes_keyblob2pkey(ctx->kb.key + split_keylen,
- split_keylen, &pk1);
- if (rc)
- return rc;
+static struct skcipher_engine_alg cbc_paes_alg = {
+ .base = {
+ .base.cra_name = "cbc(paes)",
+ .base.cra_driver_name = "cbc-paes-s390",
+ .base.cra_priority = 402, /* ecb-paes-s390 + 1 */
+ .base.cra_blocksize = AES_BLOCK_SIZE,
+ .base.cra_ctxsize = sizeof(struct s390_paes_ctx),
+ .base.cra_module = THIS_MODULE,
+ .base.cra_list = LIST_HEAD_INIT(cbc_paes_alg.base.base.cra_list),
+ .init = cbc_paes_init,
+ .exit = cbc_paes_exit,
+ .min_keysize = PAES_MIN_KEYSIZE,
+ .max_keysize = PAES_MAX_KEYSIZE,
+ .ivsize = AES_BLOCK_SIZE,
+ .setkey = cbc_paes_setkey,
+ .encrypt = cbc_paes_encrypt,
+ .decrypt = cbc_paes_decrypt,
+ },
+ .op = {
+ .do_one_request = cbc_paes_do_one_request,
+ },
+};
- if (pk0.type != pk1.type)
- return -EINVAL;
- break;
- case PKEY_KEYTYPE_AES_XTS_128:
- case PKEY_KEYTYPE_AES_XTS_256:
- /* single key */
- pk1.type = 0;
- break;
- default:
- /* unsupported protected keytype */
- return -EINVAL;
- }
+/*
+ * PAES CTR implementation
+ */
- spin_lock_bh(&ctx->pk_lock);
- ctx->pk[0] = pk0;
- ctx->pk[1] = pk1;
- spin_unlock_bh(&ctx->pk_lock);
+struct ctr_param {
+ u8 key[PAES_256_PROTKEY_SIZE];
+} __packed;
- return 0;
-}
+struct s390_pctr_req_ctx {
+ unsigned long modifier;
+ struct skcipher_walk walk;
+ bool param_init_done;
+ struct ctr_param param;
+};
-static inline int __xts_paes_set_key(struct s390_pxts_ctx *ctx)
+static int ctr_paes_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
+ unsigned int key_len)
{
- unsigned long fc;
+ struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
+ long fc;
int rc;
- rc = __xts_paes_convert_key(ctx);
+ /* set raw key into context */
+ rc = key_to_ctx(ctx, in_key, key_len);
if (rc)
- return rc;
+ goto out;
+
+ /* convert raw key into protected key */
+ rc = paes_convert_key(ctx);
+ if (rc)
+ goto out;
/* Pick the correct function code based on the protected key type */
- switch (ctx->pk[0].type) {
+ switch (ctx->pk.type) {
case PKEY_KEYTYPE_AES_128:
- fc = CPACF_KM_PXTS_128;
+ fc = CPACF_KMCTR_PAES_128;
break;
- case PKEY_KEYTYPE_AES_256:
- fc = CPACF_KM_PXTS_256;
+ case PKEY_KEYTYPE_AES_192:
+ fc = CPACF_KMCTR_PAES_192;
break;
- case PKEY_KEYTYPE_AES_XTS_128:
- fc = CPACF_KM_PXTS_128_FULL;
- break;
- case PKEY_KEYTYPE_AES_XTS_256:
- fc = CPACF_KM_PXTS_256_FULL;
+ case PKEY_KEYTYPE_AES_256:
+ fc = CPACF_KMCTR_PAES_256;
break;
default:
fc = 0;
break;
}
+ ctx->fc = (fc && cpacf_test_func(&kmctr_functions, fc)) ? fc : 0;
- /* Check if the function code is available */
- ctx->fc = (fc && cpacf_test_func(&km_functions, fc)) ? fc : 0;
+ rc = fc ? 0 : -EINVAL;
- return ctx->fc ? 0 : -EINVAL;
+out:
+ pr_debug("rc=%d\n", rc);
+ return rc;
}
-static int xts_paes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
- unsigned int in_keylen)
+static inline unsigned int __ctrblk_init(u8 *ctrptr, u8 *iv, unsigned int nbytes)
+{
+ unsigned int i, n;
+
+ /* only use complete blocks, max. PAGE_SIZE */
+ memcpy(ctrptr, iv, AES_BLOCK_SIZE);
+ n = (nbytes > PAGE_SIZE) ? PAGE_SIZE : nbytes & ~(AES_BLOCK_SIZE - 1);
+ for (i = (n / AES_BLOCK_SIZE) - 1; i > 0; i--) {
+ memcpy(ctrptr + AES_BLOCK_SIZE, ctrptr, AES_BLOCK_SIZE);
+ crypto_inc(ctrptr + AES_BLOCK_SIZE, AES_BLOCK_SIZE);
+ ctrptr += AES_BLOCK_SIZE;
+ }
+ return n;
+}
+
+static int ctr_paes_do_crypt(struct s390_paes_ctx *ctx,
+ struct s390_pctr_req_ctx *req_ctx,
+ bool maysleep)
+{
+ struct ctr_param *param = &req_ctx->param;
+ struct skcipher_walk *walk = &req_ctx->walk;
+ u8 buf[AES_BLOCK_SIZE], *ctrptr;
+ unsigned int nbytes, n, k;
+ int pk_state, locked, rc;
+
+ if (!req_ctx->param_init_done) {
+ /* fetch and check protected key state */
+ spin_lock_bh(&ctx->pk_lock);
+ pk_state = ctx->pk_state;
+ memcpy(param->key, ctx->pk.protkey, PAES_256_PROTKEY_SIZE);
+ spin_unlock_bh(&ctx->pk_lock);
+ switch (pk_state) {
+ case PK_STATE_NO_KEY:
+ rc = -ENOKEY;
+ goto out;
+ case PK_STATE_CONVERT_IN_PROGRESS:
+ rc = -EKEYEXPIRED;
+ goto out;
+ case PK_STATE_VALID:
+ req_ctx->param_init_done = true;
+ break;
+ default:
+ rc = pk_state < 0 ? pk_state : -EIO;
+ goto out;
+ }
+ }
+
+ rc = 0;
+ locked = mutex_trylock(&ctrblk_lock);
+
+ /* always walk on the ... */
+ while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
+ n = AES_BLOCK_SIZE;
+ if (nbytes >= 2 * AES_BLOCK_SIZE && locked)
+ n = __ctrblk_init(ctrblk, walk->iv, nbytes);
+ ctrptr = (n > AES_BLOCK_SIZE) ? ctrblk : walk->iv;
+ k = cpacf_kmctr(ctx->fc, param, walk->dst.virt.addr,
+ walk->src.virt.addr, n, ctrptr);
+ if (k) {
+ if (ctrptr == ctrblk)
+ memcpy(walk->iv, ctrptr + k - AES_BLOCK_SIZE,
+ AES_BLOCK_SIZE);
+ crypto_inc(walk->iv, AES_BLOCK_SIZE);
+ rc = skcipher_walk_done(walk, nbytes - k);
+ }
+ if (k < n) {
+ if (!maysleep) {
+ if (locked)
+ mutex_unlock(&ctrblk_lock);
+ rc = -EKEYEXPIRED;
+ goto out;
+ }
+ rc = paes_convert_key(ctx);
+ if (rc) {
+ if (locked)
+ mutex_unlock(&ctrblk_lock);
+ goto out;
+ }
+ spin_lock_bh(&ctx->pk_lock);
+ memcpy(param->key, ctx->pk.protkey, PAES_256_PROTKEY_SIZE);
+ spin_unlock_bh(&ctx->pk_lock);
+ }
+ }
+ if (locked)
+ mutex_unlock(&ctrblk_lock);
+
+ /* final block may be < AES_BLOCK_SIZE, copy only nbytes */
+ if (nbytes) {
+ memset(buf, 0, AES_BLOCK_SIZE);
+ memcpy(buf, walk->src.virt.addr, nbytes);
+ while (1) {
+ if (cpacf_kmctr(ctx->fc, param, buf,
+ buf, AES_BLOCK_SIZE,
+ walk->iv) == AES_BLOCK_SIZE)
+ break;
+ if (!maysleep) {
+ rc = -EKEYEXPIRED;
+ goto out;
+ }
+ rc = paes_convert_key(ctx);
+ if (rc)
+ goto out;
+ spin_lock_bh(&ctx->pk_lock);
+ memcpy(param->key, ctx->pk.protkey, PAES_256_PROTKEY_SIZE);
+ spin_unlock_bh(&ctx->pk_lock);
+ }
+ memcpy(walk->dst.virt.addr, buf, nbytes);
+ crypto_inc(walk->iv, AES_BLOCK_SIZE);
+ rc = skcipher_walk_done(walk, 0);
+ }
+
+out:
+ pr_debug("rc=%d\n", rc);
+ return rc;
+}
+
+static int ctr_paes_crypt(struct skcipher_request *req)
+{
+ struct s390_pctr_req_ctx *req_ctx = skcipher_request_ctx(req);
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct skcipher_walk *walk = &req_ctx->walk;
+ int rc;
+
+ /*
+ * First try synchronous. If this fails for any reason
+ * schedule this request asynchronous via crypto engine.
+ */
+
+ rc = skcipher_walk_virt(walk, req, false);
+ if (rc)
+ goto out;
+
+ req_ctx->param_init_done = false;
+
+ rc = ctr_paes_do_crypt(ctx, req_ctx, false);
+ if (rc != -EKEYEXPIRED) {
+ if (rc)
+ skcipher_walk_done(walk, rc);
+ goto out;
+ }
+
+ rc = crypto_transfer_skcipher_request_to_engine(paes_crypto_engine, req);
+ if (rc)
+ goto out;
+
+ rc = -EINPROGRESS;
+
+out:
+ if (rc != -EINPROGRESS)
+ memzero_explicit(&req_ctx->param, sizeof(req_ctx->param));
+ pr_debug("rc=%d\n", rc);
+ return rc;
+}
+
+static int ctr_paes_init(struct crypto_skcipher *tfm)
+{
+ struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+ memset(ctx, 0, sizeof(*ctx));
+ spin_lock_init(&ctx->pk_lock);
+
+ crypto_skcipher_set_reqsize(tfm, sizeof(struct s390_pctr_req_ctx));
+
+ return 0;
+}
+
+static void ctr_paes_exit(struct crypto_skcipher *tfm)
+{
+ struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+ memzero_explicit(ctx, sizeof(*ctx));
+}
+
+static int ctr_paes_do_one_request(struct crypto_engine *engine, void *areq)
+{
+ struct skcipher_request *req = skcipher_request_cast(areq);
+ struct s390_pctr_req_ctx *req_ctx = skcipher_request_ctx(req);
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct skcipher_walk *walk = &req_ctx->walk;
+ int rc;
+
+ /* walk has already been prepared */
+
+ rc = ctr_paes_do_crypt(ctx, req_ctx, true);
+ if (rc != -EKEYEXPIRED) {
+ if (rc)
+ skcipher_walk_done(walk, rc);
+ goto complete;
+ }
+
+ /*
+ * Protected key expired, conversion is in progress.
+ * Trigger a re-schedule of this request by returning
+ * -ENOSPC ("hardware queue is full") to the crypto engine.
+ * To avoid immediate re-invocation of this callback,
+ * tell the scheduler to voluntarily give up the CPU here.
+ */
+ yield();
+ pr_debug("rescheduling request\n");
+ return -ENOSPC;
+
+complete:
+ memzero_explicit(&req_ctx->param, sizeof(req_ctx->param));
+ pr_debug("request complete with rc=%d\n", rc);
+ local_bh_disable();
+ crypto_finalize_skcipher_request(engine, req, rc);
+ local_bh_enable();
+ return rc;
+}
+
+static struct skcipher_engine_alg ctr_paes_alg = {
+ .base = {
+ .base.cra_name = "ctr(paes)",
+ .base.cra_driver_name = "ctr-paes-s390",
+ .base.cra_priority = 402, /* ecb-paes-s390 + 1 */
+ .base.cra_blocksize = 1,
+ .base.cra_ctxsize = sizeof(struct s390_paes_ctx),
+ .base.cra_module = THIS_MODULE,
+ .base.cra_list = LIST_HEAD_INIT(ctr_paes_alg.base.base.cra_list),
+ .init = ctr_paes_init,
+ .exit = ctr_paes_exit,
+ .min_keysize = PAES_MIN_KEYSIZE,
+ .max_keysize = PAES_MAX_KEYSIZE,
+ .ivsize = AES_BLOCK_SIZE,
+ .setkey = ctr_paes_setkey,
+ .encrypt = ctr_paes_crypt,
+ .decrypt = ctr_paes_crypt,
+ .chunksize = AES_BLOCK_SIZE,
+ },
+ .op = {
+ .do_one_request = ctr_paes_do_one_request,
+ },
+};
+
+/*
+ * PAES XTS implementation
+ */
+
+struct xts_full_km_param {
+ u8 key[64];
+ u8 tweak[16];
+ u8 nap[16];
+ u8 wkvp[32];
+} __packed;
+
+struct xts_km_param {
+ u8 key[PAES_256_PROTKEY_SIZE];
+ u8 init[16];
+} __packed;
+
+struct xts_pcc_param {
+ u8 key[PAES_256_PROTKEY_SIZE];
+ u8 tweak[16];
+ u8 block[16];
+ u8 bit[16];
+ u8 xts[16];
+} __packed;
+
+struct s390_pxts_req_ctx {
+ unsigned long modifier;
+ struct skcipher_walk walk;
+ bool param_init_done;
+ union {
+ struct xts_full_km_param full_km_param;
+ struct xts_km_param km_param;
+ } param;
+};
+
+static int xts_paes_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
+ unsigned int in_keylen)
{
struct s390_pxts_ctx *ctx = crypto_skcipher_ctx(tfm);
u8 ckey[2 * AES_MAX_KEY_SIZE];
unsigned int ckey_len;
+ long fc;
int rc;
if ((in_keylen == 32 || in_keylen == 64) &&
xts_verify_key(tfm, in_key, in_keylen))
return -EINVAL;
- _free_kb_keybuf(&ctx->kb);
- rc = _xts_key_to_kb(&ctx->kb, in_key, in_keylen);
+ /* set raw key into context */
+ rc = xts_key_to_ctx(ctx, in_key, in_keylen);
if (rc)
- return rc;
+ goto out;
- rc = __xts_paes_set_key(ctx);
+ /* convert raw key(s) into protected key(s) */
+ rc = pxts_convert_key(ctx);
if (rc)
- return rc;
+ goto out;
- /*
- * It is not possible on a single protected key (e.g. full AES-XTS) to
- * check, if k1 and k2 are the same.
- */
- if (ctx->pk[0].type == PKEY_KEYTYPE_AES_XTS_128 ||
- ctx->pk[0].type == PKEY_KEYTYPE_AES_XTS_256)
- return 0;
/*
* xts_verify_key verifies the key length is not odd and makes
* sure that the two keys are not the same. This can be done
- * on the two protected keys as well
+ * on the two protected keys as well - but not for full xts keys.
*/
- ckey_len = (ctx->pk[0].type == PKEY_KEYTYPE_AES_128) ?
- AES_KEYSIZE_128 : AES_KEYSIZE_256;
- memcpy(ckey, ctx->pk[0].protkey, ckey_len);
- memcpy(ckey + ckey_len, ctx->pk[1].protkey, ckey_len);
- return xts_verify_key(tfm, ckey, 2*ckey_len);
+ if (ctx->pk[0].type == PKEY_KEYTYPE_AES_128 ||
+ ctx->pk[0].type == PKEY_KEYTYPE_AES_256) {
+ ckey_len = (ctx->pk[0].type == PKEY_KEYTYPE_AES_128) ?
+ AES_KEYSIZE_128 : AES_KEYSIZE_256;
+ memcpy(ckey, ctx->pk[0].protkey, ckey_len);
+ memcpy(ckey + ckey_len, ctx->pk[1].protkey, ckey_len);
+ rc = xts_verify_key(tfm, ckey, 2 * ckey_len);
+ memzero_explicit(ckey, sizeof(ckey));
+ if (rc)
+ goto out;
+ }
+
+ /* Pick the correct function code based on the protected key type */
+ switch (ctx->pk[0].type) {
+ case PKEY_KEYTYPE_AES_128:
+ fc = CPACF_KM_PXTS_128;
+ break;
+ case PKEY_KEYTYPE_AES_256:
+ fc = CPACF_KM_PXTS_256;
+ break;
+ case PKEY_KEYTYPE_AES_XTS_128:
+ fc = CPACF_KM_PXTS_128_FULL;
+ break;
+ case PKEY_KEYTYPE_AES_XTS_256:
+ fc = CPACF_KM_PXTS_256_FULL;
+ break;
+ default:
+ fc = 0;
+ break;
+ }
+ ctx->fc = (fc && cpacf_test_func(&km_functions, fc)) ? fc : 0;
+
+ rc = fc ? 0 : -EINVAL;
+
+out:
+ pr_debug("rc=%d\n", rc);
+ return rc;
}
-static int paes_xts_crypt_full(struct skcipher_request *req,
- unsigned long modifier)
+static int xts_paes_do_crypt_fullkey(struct s390_pxts_ctx *ctx,
+ struct s390_pxts_req_ctx *req_ctx,
+ bool maysleep)
{
- struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
- struct s390_pxts_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct xts_full_km_param *param = &req_ctx->param.full_km_param;
+ struct skcipher_walk *walk = &req_ctx->walk;
unsigned int keylen, offset, nbytes, n, k;
- struct {
- u8 key[64];
- u8 tweak[16];
- u8 nap[16];
- u8 wkvp[32];
- } fxts_param = {
- .nap = {0},
- };
- struct skcipher_walk walk;
- int rc;
-
- rc = skcipher_walk_virt(&walk, req, false);
- if (rc)
- return rc;
+ int rc = 0;
keylen = (ctx->pk[0].type == PKEY_KEYTYPE_AES_XTS_128) ? 32 : 64;
offset = (ctx->pk[0].type == PKEY_KEYTYPE_AES_XTS_128) ? 32 : 0;
- spin_lock_bh(&ctx->pk_lock);
- memcpy(fxts_param.key + offset, ctx->pk[0].protkey, keylen);
- memcpy(fxts_param.wkvp, ctx->pk[0].protkey + keylen,
- sizeof(fxts_param.wkvp));
- spin_unlock_bh(&ctx->pk_lock);
- memcpy(fxts_param.tweak, walk.iv, sizeof(fxts_param.tweak));
- fxts_param.nap[0] = 0x01; /* initial alpha power (1, little-endian) */
+ if (!req_ctx->param_init_done) {
+ memset(param, 0, sizeof(*param));
+ spin_lock_bh(&ctx->pk_lock);
+ memcpy(param->key + offset, ctx->pk[0].protkey, keylen);
+ memcpy(param->wkvp, ctx->pk[0].protkey + keylen, sizeof(param->wkvp));
+ spin_unlock_bh(&ctx->pk_lock);
+ memcpy(param->tweak, walk->iv, sizeof(param->tweak));
+ param->nap[0] = 0x01; /* initial alpha power (1, little-endian) */
+ req_ctx->param_init_done = true;
+ }
- while ((nbytes = walk.nbytes) != 0) {
+ /* always walk on the ... */
+ while ((nbytes = walk->nbytes) != 0) {
/* only use complete blocks */
n = nbytes & ~(AES_BLOCK_SIZE - 1);
- k = cpacf_km(ctx->fc | modifier, fxts_param.key + offset,
- walk.dst.virt.addr, walk.src.virt.addr, n);
+ k = cpacf_km(ctx->fc | req_ctx->modifier, param->key + offset,
+ walk->dst.virt.addr, walk->src.virt.addr, n);
if (k)
- rc = skcipher_walk_done(&walk, nbytes - k);
+ rc = skcipher_walk_done(walk, nbytes - k);
if (k < n) {
- if (__xts_paes_convert_key(ctx))
- return skcipher_walk_done(&walk, -EIO);
+ if (!maysleep) {
+ rc = -EKEYEXPIRED;
+ goto out;
+ }
+ rc = pxts_convert_key(ctx);
+ if (rc)
+ goto out;
spin_lock_bh(&ctx->pk_lock);
- memcpy(fxts_param.key + offset, ctx->pk[0].protkey,
- keylen);
- memcpy(fxts_param.wkvp, ctx->pk[0].protkey + keylen,
- sizeof(fxts_param.wkvp));
+ memcpy(param->key + offset, ctx->pk[0].protkey, keylen);
+ memcpy(param->wkvp, ctx->pk[0].protkey + keylen, sizeof(param->wkvp));
spin_unlock_bh(&ctx->pk_lock);
}
}
+out:
+ pr_debug("rc=%d\n", rc);
return rc;
}
-static int paes_xts_crypt(struct skcipher_request *req, unsigned long modifier)
+static inline int __xts_2keys_prep_param(struct s390_pxts_ctx *ctx,
+ struct xts_km_param *param,
+ struct skcipher_walk *walk,
+ unsigned int keylen,
+ unsigned int offset, bool maysleep)
{
- struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
- struct s390_pxts_ctx *ctx = crypto_skcipher_ctx(tfm);
- unsigned int keylen, offset, nbytes, n, k;
- struct {
- u8 key[PAES_256_PROTKEY_SIZE];
- u8 tweak[16];
- u8 block[16];
- u8 bit[16];
- u8 xts[16];
- } pcc_param;
- struct {
- u8 key[PAES_256_PROTKEY_SIZE];
- u8 init[16];
- } xts_param;
- struct skcipher_walk walk;
- int rc;
+ struct xts_pcc_param pcc_param;
+ unsigned long cc = 1;
+ int rc = 0;
+
+ while (cc) {
+ memset(&pcc_param, 0, sizeof(pcc_param));
+ memcpy(pcc_param.tweak, walk->iv, sizeof(pcc_param.tweak));
+ spin_lock_bh(&ctx->pk_lock);
+ memcpy(pcc_param.key + offset, ctx->pk[1].protkey, keylen);
+ memcpy(param->key + offset, ctx->pk[0].protkey, keylen);
+ spin_unlock_bh(&ctx->pk_lock);
+ cc = cpacf_pcc(ctx->fc, pcc_param.key + offset);
+ if (cc) {
+ if (!maysleep) {
+ rc = -EKEYEXPIRED;
+ break;
+ }
+ rc = pxts_convert_key(ctx);
+ if (rc)
+ break;
+ continue;
+ }
+ memcpy(param->init, pcc_param.xts, 16);
+ }
- rc = skcipher_walk_virt(&walk, req, false);
- if (rc)
- return rc;
+ memzero_explicit(pcc_param.key, sizeof(pcc_param.key));
+ return rc;
+}
+
+static int xts_paes_do_crypt_2keys(struct s390_pxts_ctx *ctx,
+ struct s390_pxts_req_ctx *req_ctx,
+ bool maysleep)
+{
+ struct xts_km_param *param = &req_ctx->param.km_param;
+ struct skcipher_walk *walk = &req_ctx->walk;
+ unsigned int keylen, offset, nbytes, n, k;
+ int rc = 0;
keylen = (ctx->pk[0].type == PKEY_KEYTYPE_AES_128) ? 48 : 64;
offset = (ctx->pk[0].type == PKEY_KEYTYPE_AES_128) ? 16 : 0;
- memset(&pcc_param, 0, sizeof(pcc_param));
- memcpy(pcc_param.tweak, walk.iv, sizeof(pcc_param.tweak));
- spin_lock_bh(&ctx->pk_lock);
- memcpy(pcc_param.key + offset, ctx->pk[1].protkey, keylen);
- memcpy(xts_param.key + offset, ctx->pk[0].protkey, keylen);
- spin_unlock_bh(&ctx->pk_lock);
- cpacf_pcc(ctx->fc, pcc_param.key + offset);
- memcpy(xts_param.init, pcc_param.xts, 16);
+ if (!req_ctx->param_init_done) {
+ rc = __xts_2keys_prep_param(ctx, param, walk,
+ keylen, offset, maysleep);
+ if (rc)
+ goto out;
+ req_ctx->param_init_done = true;
+ }
- while ((nbytes = walk.nbytes) != 0) {
+ /* always walk on the ... */
+ while ((nbytes = walk->nbytes) != 0) {
/* only use complete blocks */
n = nbytes & ~(AES_BLOCK_SIZE - 1);
- k = cpacf_km(ctx->fc | modifier, xts_param.key + offset,
- walk.dst.virt.addr, walk.src.virt.addr, n);
+ k = cpacf_km(ctx->fc | req_ctx->modifier, param->key + offset,
+ walk->dst.virt.addr, walk->src.virt.addr, n);
if (k)
- rc = skcipher_walk_done(&walk, nbytes - k);
+ rc = skcipher_walk_done(walk, nbytes - k);
if (k < n) {
- if (__xts_paes_convert_key(ctx))
- return skcipher_walk_done(&walk, -EIO);
+ if (!maysleep) {
+ rc = -EKEYEXPIRED;
+ goto out;
+ }
+ rc = pxts_convert_key(ctx);
+ if (rc)
+ goto out;
spin_lock_bh(&ctx->pk_lock);
- memcpy(xts_param.key + offset,
- ctx->pk[0].protkey, keylen);
+ memcpy(param->key + offset, ctx->pk[0].protkey, keylen);
spin_unlock_bh(&ctx->pk_lock);
}
}
+out:
+ pr_debug("rc=%d\n", rc);
return rc;
}
-static inline int xts_paes_crypt(struct skcipher_request *req, unsigned long modifier)
+static int xts_paes_do_crypt(struct s390_pxts_ctx *ctx,
+ struct s390_pxts_req_ctx *req_ctx,
+ bool maysleep)
{
- struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
- struct s390_pxts_ctx *ctx = crypto_skcipher_ctx(tfm);
+ int pk_state, rc;
+ /* fetch and check protected key state */
+ spin_lock_bh(&ctx->pk_lock);
+ pk_state = ctx->pk_state;
+ spin_unlock_bh(&ctx->pk_lock);
+ switch (pk_state) {
+ case PK_STATE_NO_KEY:
+ rc = -ENOKEY;
+ goto out;
+ case PK_STATE_CONVERT_IN_PROGRESS:
+ rc = -EKEYEXPIRED;
+ goto out;
+ case PK_STATE_VALID:
+ break;
+ default:
+ rc = pk_state < 0 ? pk_state : -EIO;
+ goto out;
+ }
+
+ /* Call the 'real' crypt function based on the xts prot key type. */
switch (ctx->fc) {
case CPACF_KM_PXTS_128:
case CPACF_KM_PXTS_256:
- return paes_xts_crypt(req, modifier);
+ rc = xts_paes_do_crypt_2keys(ctx, req_ctx, maysleep);
+ break;
case CPACF_KM_PXTS_128_FULL:
case CPACF_KM_PXTS_256_FULL:
- return paes_xts_crypt_full(req, modifier);
+ rc = xts_paes_do_crypt_fullkey(ctx, req_ctx, maysleep);
+ break;
default:
- return -EINVAL;
+ rc = -EINVAL;
}
-}
-static int xts_paes_encrypt(struct skcipher_request *req)
-{
- return xts_paes_crypt(req, 0);
+out:
+ pr_debug("rc=%d\n", rc);
+ return rc;
}
-static int xts_paes_decrypt(struct skcipher_request *req)
+static inline int xts_paes_crypt(struct skcipher_request *req, unsigned long modifier)
{
- return xts_paes_crypt(req, CPACF_DECRYPT);
-}
+ struct s390_pxts_req_ctx *req_ctx = skcipher_request_ctx(req);
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct s390_pxts_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct skcipher_walk *walk = &req_ctx->walk;
+ int rc;
-static struct skcipher_alg xts_paes_alg = {
- .base.cra_name = "xts(paes)",
- .base.cra_driver_name = "xts-paes-s390",
- .base.cra_priority = 402, /* ecb-paes-s390 + 1 */
- .base.cra_blocksize = AES_BLOCK_SIZE,
- .base.cra_ctxsize = sizeof(struct s390_pxts_ctx),
- .base.cra_module = THIS_MODULE,
- .base.cra_list = LIST_HEAD_INIT(xts_paes_alg.base.cra_list),
- .init = xts_paes_init,
- .exit = xts_paes_exit,
- .min_keysize = 2 * PAES_MIN_KEYSIZE,
- .max_keysize = 2 * PAES_MAX_KEYSIZE,
- .ivsize = AES_BLOCK_SIZE,
- .setkey = xts_paes_set_key,
- .encrypt = xts_paes_encrypt,
- .decrypt = xts_paes_decrypt,
-};
+ /*
+ * First try synchronous. If this fails for any reason,
+ * schedule this request asynchronously via the crypto engine.
+ */
-static int ctr_paes_init(struct crypto_skcipher *tfm)
-{
- struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
+ rc = skcipher_walk_virt(walk, req, false);
+ if (rc)
+ goto out;
- ctx->kb.key = NULL;
- spin_lock_init(&ctx->pk_lock);
+ req_ctx->modifier = modifier;
+ req_ctx->param_init_done = false;
- return 0;
-}
+ rc = xts_paes_do_crypt(ctx, req_ctx, false);
+ if (rc != -EKEYEXPIRED) {
+ if (rc)
+ skcipher_walk_done(walk, rc);
+ goto out;
+ }
-static void ctr_paes_exit(struct crypto_skcipher *tfm)
-{
- struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
+ rc = crypto_transfer_skcipher_request_to_engine(paes_crypto_engine, req);
+ if (rc)
+ goto out;
+
+ rc = -EINPROGRESS;
- _free_kb_keybuf(&ctx->kb);
+out:
+ if (rc != -EINPROGRESS)
+ memzero_explicit(&req_ctx->param, sizeof(req_ctx->param));
+ pr_debug("rc=%d\n", rc);
+ return rc;
}
-static inline int __ctr_paes_set_key(struct s390_paes_ctx *ctx)
+static int xts_paes_encrypt(struct skcipher_request *req)
{
- unsigned long fc;
- int rc;
-
- rc = __paes_convert_key(ctx);
- if (rc)
- return rc;
-
- /* Pick the correct function code based on the protected key type */
- fc = (ctx->pk.type == PKEY_KEYTYPE_AES_128) ? CPACF_KMCTR_PAES_128 :
- (ctx->pk.type == PKEY_KEYTYPE_AES_192) ? CPACF_KMCTR_PAES_192 :
- (ctx->pk.type == PKEY_KEYTYPE_AES_256) ?
- CPACF_KMCTR_PAES_256 : 0;
-
- /* Check if the function code is available */
- ctx->fc = (fc && cpacf_test_func(&kmctr_functions, fc)) ? fc : 0;
+ return xts_paes_crypt(req, 0);
+}
- return ctx->fc ? 0 : -EINVAL;
+static int xts_paes_decrypt(struct skcipher_request *req)
+{
+ return xts_paes_crypt(req, CPACF_DECRYPT);
}
-static int ctr_paes_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
- unsigned int key_len)
+static int xts_paes_init(struct crypto_skcipher *tfm)
{
- struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
- int rc;
+ struct s390_pxts_ctx *ctx = crypto_skcipher_ctx(tfm);
- _free_kb_keybuf(&ctx->kb);
- rc = _key_to_kb(&ctx->kb, in_key, key_len);
- if (rc)
- return rc;
+ memset(ctx, 0, sizeof(*ctx));
+ spin_lock_init(&ctx->pk_lock);
+
+ crypto_skcipher_set_reqsize(tfm, sizeof(struct s390_pxts_req_ctx));
- return __ctr_paes_set_key(ctx);
+ return 0;
}
-static unsigned int __ctrblk_init(u8 *ctrptr, u8 *iv, unsigned int nbytes)
+static void xts_paes_exit(struct crypto_skcipher *tfm)
{
- unsigned int i, n;
+ struct s390_pxts_ctx *ctx = crypto_skcipher_ctx(tfm);
- /* only use complete blocks, max. PAGE_SIZE */
- memcpy(ctrptr, iv, AES_BLOCK_SIZE);
- n = (nbytes > PAGE_SIZE) ? PAGE_SIZE : nbytes & ~(AES_BLOCK_SIZE - 1);
- for (i = (n / AES_BLOCK_SIZE) - 1; i > 0; i--) {
- memcpy(ctrptr + AES_BLOCK_SIZE, ctrptr, AES_BLOCK_SIZE);
- crypto_inc(ctrptr + AES_BLOCK_SIZE, AES_BLOCK_SIZE);
- ctrptr += AES_BLOCK_SIZE;
- }
- return n;
+ memzero_explicit(ctx, sizeof(*ctx));
}
-static int ctr_paes_crypt(struct skcipher_request *req)
+static int xts_paes_do_one_request(struct crypto_engine *engine, void *areq)
{
+ struct skcipher_request *req = skcipher_request_cast(areq);
+ struct s390_pxts_req_ctx *req_ctx = skcipher_request_ctx(req);
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
- struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
- u8 buf[AES_BLOCK_SIZE], *ctrptr;
- struct {
- u8 key[PAES_256_PROTKEY_SIZE];
- } param;
- struct skcipher_walk walk;
- unsigned int nbytes, n, k;
- int rc, locked;
-
- rc = skcipher_walk_virt(&walk, req, false);
- if (rc)
- return rc;
-
- spin_lock_bh(&ctx->pk_lock);
- memcpy(param.key, ctx->pk.protkey, PAES_256_PROTKEY_SIZE);
- spin_unlock_bh(&ctx->pk_lock);
+ struct s390_pxts_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct skcipher_walk *walk = &req_ctx->walk;
+ int rc;
- locked = mutex_trylock(&ctrblk_lock);
+ /* walk has already been prepared */
- while ((nbytes = walk.nbytes) >= AES_BLOCK_SIZE) {
- n = AES_BLOCK_SIZE;
- if (nbytes >= 2*AES_BLOCK_SIZE && locked)
- n = __ctrblk_init(ctrblk, walk.iv, nbytes);
- ctrptr = (n > AES_BLOCK_SIZE) ? ctrblk : walk.iv;
- k = cpacf_kmctr(ctx->fc, ¶m, walk.dst.virt.addr,
- walk.src.virt.addr, n, ctrptr);
- if (k) {
- if (ctrptr == ctrblk)
- memcpy(walk.iv, ctrptr + k - AES_BLOCK_SIZE,
- AES_BLOCK_SIZE);
- crypto_inc(walk.iv, AES_BLOCK_SIZE);
- rc = skcipher_walk_done(&walk, nbytes - k);
- }
- if (k < n) {
- if (__paes_convert_key(ctx)) {
- if (locked)
- mutex_unlock(&ctrblk_lock);
- return skcipher_walk_done(&walk, -EIO);
- }
- spin_lock_bh(&ctx->pk_lock);
- memcpy(param.key, ctx->pk.protkey, PAES_256_PROTKEY_SIZE);
- spin_unlock_bh(&ctx->pk_lock);
- }
+ rc = xts_paes_do_crypt(ctx, req_ctx, true);
+ if (rc != -EKEYEXPIRED) {
+ if (rc)
+ skcipher_walk_done(walk, rc);
+ goto complete;
}
- if (locked)
- mutex_unlock(&ctrblk_lock);
+
/*
- * final block may be < AES_BLOCK_SIZE, copy only nbytes
+ * Protected key expired, conversion is in process.
+ * Trigger a re-schedule of this request by returning
+ * -ENOSPC ("hardware queue is full") to the crypto engine.
+ * To avoid immediate re-invocation of this callback,
+ * tell the scheduler to voluntarily give up the CPU here.
*/
- if (nbytes) {
- memset(buf, 0, AES_BLOCK_SIZE);
- memcpy(buf, walk.src.virt.addr, nbytes);
- while (1) {
- if (cpacf_kmctr(ctx->fc, ¶m, buf,
- buf, AES_BLOCK_SIZE,
- walk.iv) == AES_BLOCK_SIZE)
- break;
- if (__paes_convert_key(ctx))
- return skcipher_walk_done(&walk, -EIO);
- spin_lock_bh(&ctx->pk_lock);
- memcpy(param.key, ctx->pk.protkey, PAES_256_PROTKEY_SIZE);
- spin_unlock_bh(&ctx->pk_lock);
- }
- memcpy(walk.dst.virt.addr, buf, nbytes);
- crypto_inc(walk.iv, AES_BLOCK_SIZE);
- rc = skcipher_walk_done(&walk, nbytes);
- }
-
+ yield();
+ pr_debug("rescheduling request\n");
+ return -ENOSPC;
+
+complete:
+ memzero_explicit(&req_ctx->param, sizeof(req_ctx->param));
+ pr_debug("request complete with rc=%d\n", rc);
+ local_bh_disable();
+ crypto_finalize_skcipher_request(engine, req, rc);
+ local_bh_enable();
return rc;
}
-static struct skcipher_alg ctr_paes_alg = {
- .base.cra_name = "ctr(paes)",
- .base.cra_driver_name = "ctr-paes-s390",
- .base.cra_priority = 402, /* ecb-paes-s390 + 1 */
- .base.cra_blocksize = 1,
- .base.cra_ctxsize = sizeof(struct s390_paes_ctx),
- .base.cra_module = THIS_MODULE,
- .base.cra_list = LIST_HEAD_INIT(ctr_paes_alg.base.cra_list),
- .init = ctr_paes_init,
- .exit = ctr_paes_exit,
- .min_keysize = PAES_MIN_KEYSIZE,
- .max_keysize = PAES_MAX_KEYSIZE,
- .ivsize = AES_BLOCK_SIZE,
- .setkey = ctr_paes_set_key,
- .encrypt = ctr_paes_crypt,
- .decrypt = ctr_paes_crypt,
- .chunksize = AES_BLOCK_SIZE,
+static struct skcipher_engine_alg xts_paes_alg = {
+ .base = {
+ .base.cra_name = "xts(paes)",
+ .base.cra_driver_name = "xts-paes-s390",
+ .base.cra_priority = 402, /* ecb-paes-s390 + 1 */
+ .base.cra_blocksize = AES_BLOCK_SIZE,
+ .base.cra_ctxsize = sizeof(struct s390_pxts_ctx),
+ .base.cra_module = THIS_MODULE,
+ .base.cra_list = LIST_HEAD_INIT(xts_paes_alg.base.base.cra_list),
+ .init = xts_paes_init,
+ .exit = xts_paes_exit,
+ .min_keysize = 2 * PAES_MIN_KEYSIZE,
+ .max_keysize = 2 * PAES_MAX_KEYSIZE,
+ .ivsize = AES_BLOCK_SIZE,
+ .setkey = xts_paes_setkey,
+ .encrypt = xts_paes_encrypt,
+ .decrypt = xts_paes_decrypt,
+ },
+ .op = {
+ .do_one_request = xts_paes_do_one_request,
+ },
+};
+
+/*
+ * alg register, unregister, module init, exit
+ */
+
+static struct miscdevice paes_dev = {
+ .name = "paes",
+ .minor = MISC_DYNAMIC_MINOR,
};
-static inline void __crypto_unregister_skcipher(struct skcipher_alg *alg)
+static inline void __crypto_unregister_skcipher(struct skcipher_engine_alg *alg)
{
- if (!list_empty(&alg->base.cra_list))
- crypto_unregister_skcipher(alg);
+ if (!list_empty(&alg->base.base.cra_list))
+ crypto_engine_unregister_skcipher(alg);
}
static void paes_s390_fini(void)
{
+ if (paes_crypto_engine) {
+ crypto_engine_stop(paes_crypto_engine);
+ crypto_engine_exit(paes_crypto_engine);
+ }
__crypto_unregister_skcipher(&ctr_paes_alg);
__crypto_unregister_skcipher(&xts_paes_alg);
__crypto_unregister_skcipher(&cbc_paes_alg);
__crypto_unregister_skcipher(&ecb_paes_alg);
if (ctrblk)
- free_page((unsigned long) ctrblk);
+ free_page((unsigned long)ctrblk);
+ misc_deregister(&paes_dev);
}
static int __init paes_s390_init(void)
{
int rc;
+ /* register a simple paes pseudo misc device */
+ rc = misc_register(&paes_dev);
+ if (rc)
+ return rc;
+
+ /* with this pseudo device, alloc and start a crypto engine */
+ paes_crypto_engine =
+ crypto_engine_alloc_init_and_set(paes_dev.this_device,
+ true, NULL, false, MAX_QLEN);
+ if (!paes_crypto_engine) {
+ rc = -ENOMEM;
+ goto out_err;
+ }
+ rc = crypto_engine_start(paes_crypto_engine);
+ if (rc) {
+ crypto_engine_exit(paes_crypto_engine);
+ paes_crypto_engine = NULL;
+ goto out_err;
+ }
+
/* Query available functions for KM, KMC and KMCTR */
cpacf_query(CPACF_KM, &km_functions);
cpacf_query(CPACF_KMC, &kmc_functions);
@@ -927,40 +1563,45 @@ static int __init paes_s390_init(void)
if (cpacf_test_func(&km_functions, CPACF_KM_PAES_128) ||
cpacf_test_func(&km_functions, CPACF_KM_PAES_192) ||
cpacf_test_func(&km_functions, CPACF_KM_PAES_256)) {
- rc = crypto_register_skcipher(&ecb_paes_alg);
+ rc = crypto_engine_register_skcipher(&ecb_paes_alg);
if (rc)
goto out_err;
+ pr_debug("%s registered\n", ecb_paes_alg.base.base.cra_driver_name);
}
if (cpacf_test_func(&kmc_functions, CPACF_KMC_PAES_128) ||
cpacf_test_func(&kmc_functions, CPACF_KMC_PAES_192) ||
cpacf_test_func(&kmc_functions, CPACF_KMC_PAES_256)) {
- rc = crypto_register_skcipher(&cbc_paes_alg);
+ rc = crypto_engine_register_skcipher(&cbc_paes_alg);
if (rc)
goto out_err;
+ pr_debug("%s registered\n", cbc_paes_alg.base.base.cra_driver_name);
}
if (cpacf_test_func(&km_functions, CPACF_KM_PXTS_128) ||
cpacf_test_func(&km_functions, CPACF_KM_PXTS_256)) {
- rc = crypto_register_skcipher(&xts_paes_alg);
+ rc = crypto_engine_register_skcipher(&xts_paes_alg);
if (rc)
goto out_err;
+ pr_debug("%s registered\n", xts_paes_alg.base.base.cra_driver_name);
}
if (cpacf_test_func(&kmctr_functions, CPACF_KMCTR_PAES_128) ||
cpacf_test_func(&kmctr_functions, CPACF_KMCTR_PAES_192) ||
cpacf_test_func(&kmctr_functions, CPACF_KMCTR_PAES_256)) {
- ctrblk = (u8 *) __get_free_page(GFP_KERNEL);
+ ctrblk = (u8 *)__get_free_page(GFP_KERNEL);
if (!ctrblk) {
rc = -ENOMEM;
goto out_err;
}
- rc = crypto_register_skcipher(&ctr_paes_alg);
+ rc = crypto_engine_register_skcipher(&ctr_paes_alg);
if (rc)
goto out_err;
+ pr_debug("%s registered\n", ctr_paes_alg.base.base.cra_driver_name);
}
return 0;
+
out_err:
paes_s390_fini();
return rc;
--
2.43.0
* [PATCH v3 3/3] Documentation: crypto_engine: Update and extend crypto engine doc
2025-04-01 14:50 [PATCH v3 0/3] Rework protected key AES for true asynch support Harald Freudenberger
2025-04-01 14:50 ` [PATCH v3 1/3] s390/cpacf: Rework cpacf_pcc() to return condition code Harald Freudenberger
2025-04-01 14:50 ` [PATCH v3 2/3] s390/crypto: Rework protected key AES for true asynch support Harald Freudenberger
@ 2025-04-01 14:50 ` Harald Freudenberger
2 siblings, 0 replies; 10+ messages in thread
From: Harald Freudenberger @ 2025-04-01 14:50 UTC (permalink / raw)
To: herbert, dengler, ifranzki, fcallies; +Cc: linux-crypto, linux-s390
Remove outdated wording about crypto engine context.
Rephrase and extend the description of the crypto engine usage.
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
---
Documentation/crypto/crypto_engine.rst | 78 ++++++++++++--------------
1 file changed, 36 insertions(+), 42 deletions(-)
diff --git a/Documentation/crypto/crypto_engine.rst b/Documentation/crypto/crypto_engine.rst
index d562ea17d994..3665d8062c07 100644
--- a/Documentation/crypto/crypto_engine.rst
+++ b/Documentation/crypto/crypto_engine.rst
@@ -6,25 +6,35 @@ Crypto Engine
Overview
--------
The crypto engine (CE) API is a crypto queue manager.
+It is the in-kernel crypto way to enqueue asynchronous crypto requests
+instead of instantiating your own workqueue.
Requirement
-----------
-You must put, at the start of your transform context your_tfm_ctx, the structure
-crypto_engine:
-
-::
-
- struct your_tfm_ctx {
- struct crypto_engine engine;
- ...
- };
-
-The crypto engine only manages asynchronous requests in the form of
-crypto_async_request. It cannot know the underlying request type and thus only
-has access to the transform structure. It is not possible to access the context
-using container_of. In addition, the engine knows nothing about your
-structure "``struct your_tfm_ctx``". The engine assumes (requires) the placement
-of the known member ``struct crypto_engine`` at the beginning.
+For registration with the use of a crypto engine instance, the
+transformation needs to implement the respective ``struct *_engine_alg``.
+For example a skcipher transformation implements
+``struct skcipher_engine_alg``. This struct consists of the usual
+transformation struct (for example ``struct skcipher_alg``) plus a
+``struct crypto_engine_op`` which provides the callback used by the
+crypto engine to run the asynchronous requests.
+
+The transformation implements the callback function
+``int (*do_one_request)(struct crypto_engine *engine, void *areq)``.
+This callback is invoked by the engine to process asynchronous
+requests which have been previously pushed to the engine with one of
+the ``crypto_transfer_*_request_to_engine()`` functions.
+The ``do_one_request()`` implementation needs to handle the request
+and on successful processing completes the request with a call to
+``crypto_finalize_*_request()`` and a return value of 0. A return
+value other than 0 indicates an error condition and the request is
+marked as unsuccessfully completed with this error value by the engine.
+A special treatment is done for the return value ``-ENOSPC``. At
+allocation of the engine instance via
+``crypto_engine_alloc_init_and_set(..., bool retry_support, ...)``
+with the ``retry_support`` parameter set to true, the engine instance
+handles the ``-ENOSPC`` by re-queuing the request into the backlog and
+at a later time the callback is invoked again to process this request.
Order of operations
-------------------
@@ -33,35 +43,19 @@ Start it via ``crypto_engine_start()``. When finished with your work, shut down
engine using ``crypto_engine_stop()`` and destroy the engine with
``crypto_engine_exit()``.
-Before transferring any request, you have to fill the context enginectx by
-providing functions for the following:
-
-* ``prepare_crypt_hardware``: Called once before any prepare functions are
- called.
-
-* ``unprepare_crypt_hardware``: Called once after all unprepare functions have
- been called.
-
-* ``prepare_cipher_request``/``prepare_hash_request``: Called before each
- corresponding request is performed. If some processing or other preparatory
- work is required, do it here.
-
-* ``unprepare_cipher_request``/``unprepare_hash_request``: Called after each
- request is handled. Clean up / undo what was done in the prepare function.
-
-* ``cipher_one_request``/``hash_one_request``: Handle the current request by
- performing the operation.
-
-Note that these functions access the crypto_async_request structure
-associated with the received request. You are able to retrieve the original
-request by using:
+Before transferring any request, you may provide additional callback
+functions within the ``struct crypto_engine`` instance you got from the alloc
+call:
-::
+* ``prepare_crypt_hardware``: Called once before any
+ ``do_one_request()`` invocations are done.
- container_of(areq, struct yourrequesttype_request, base);
+* ``unprepare_crypt_hardware``: Called once after the
+ ``do_one_request()`` invocations are done.
-When your driver receives a crypto_request, you must to transfer it to
-the crypto engine via one of:
+When your driver receives a crypto_request, and you want this request
+to be processed asynchronously, you must transfer it to the crypto
+engine via one of:
* crypto_transfer_aead_request_to_engine()
--
2.43.0
* Re: [PATCH v3 2/3] s390/crypto: Rework protected key AES for true asynch support
2025-04-01 14:50 ` [PATCH v3 2/3] s390/crypto: Rework protected key AES for true asynch support Harald Freudenberger
@ 2025-04-02 4:04 ` Herbert Xu
2025-05-06 14:02 ` Harald Freudenberger
2025-04-25 14:56 ` Holger Dengler
1 sibling, 1 reply; 10+ messages in thread
From: Herbert Xu @ 2025-04-02 4:04 UTC (permalink / raw)
To: Harald Freudenberger
Cc: dengler, ifranzki, fcallies, linux-crypto, linux-s390
On Tue, Apr 01, 2025 at 04:50:47PM +0200, Harald Freudenberger wrote:
>
> +static int ecb_paes_do_crypt(struct s390_paes_ctx *ctx,
> + struct s390_pecb_req_ctx *req_ctx,
> + bool maysleep)
...
> + /* always walk on the ... */
> + while ((nbytes = walk->nbytes) != 0) {
> + /* only use complete blocks */
> + n = nbytes & ~(AES_BLOCK_SIZE - 1);
> + k = cpacf_km(ctx->fc | req_ctx->modifier, param,
> + walk->dst.virt.addr, walk->src.virt.addr, n);
> + if (k)
> + rc = skcipher_walk_done(walk, nbytes - k);
> + if (k < n) {
> + if (!maysleep) {
> + rc = -EKEYEXPIRED;
> + goto out;
> + }
So this leaves the skcipher walk in a mapped state, to be resumed in
a work queue later. Now I don't believe you guys have the horror of
HIGHMEM so it's not fatal, but it's still a bit of a hack and worthy
of a comment to at least stop people from other architectures copying
this.
> + rc = paes_convert_key(ctx);
At first I thought this was racy, but then I realised that it is not
because only the crypto_engine thread gets called with maysleep ==
true. Since there is only one crypto_engine thread this is safe.
I think this is not really obvious though and worthy of a comment to
explain the reliance on the single crypto engine thread.
There is one more subtle issue to do with request ordering. Because
networking requires packets to not be reordered, we enforce this in
the Crypto API. An algorithm must not reorder the requests sent to
the same tfm.
To do that here, once a ctx goes into the crypto_engine, all future
requests to the same ctx must also go through the crypto_engine, as
long as at the time of the request being submitted prior work is still
outstanding.
The easiest way would be to have a counter in the ctx that keeps
track of how many requests are currently outstanding in the engine.
Then in paes_do_crypt you'd simply check the counter, and if it is
non-zero you always put the request into the engine.
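Roughly something like this - an untested sketch only, the counter name
is made up and the error handling / walk cleanup of the real code is
omitted:

	/* in struct s390_paes_ctx: */
	atomic_t via_engine_ctr;	/* requests outstanding in the engine */

	/* in ecb_paes_crypt(), before trying the synchronous path: */
	if (atomic_read(&ctx->via_engine_ctr))
		goto enqueue;	/* keep ordering: follow the prior requests */

	rc = ecb_paes_do_crypt(ctx, req_ctx, false);
	if (rc != -EKEYEXPIRED)
		return rc;

enqueue:
	atomic_inc(&ctx->via_engine_ctr);
	rc = crypto_transfer_skcipher_request_to_engine(paes_crypto_engine, req);
	if (rc != -EINPROGRESS)
		atomic_dec(&ctx->via_engine_ctr);
	return rc;

with the matching atomic_dec() done in the do_one_request() callback
once the request has been finalized there.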
Cheers,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* Re: [PATCH v3 1/3] s390/cpacf: Rework cpacf_pcc() to return condition code
2025-04-01 14:50 ` [PATCH v3 1/3] s390/cpacf: Rework cpacf_pcc() to return condition code Harald Freudenberger
@ 2025-04-14 10:14 ` Holger Dengler
0 siblings, 0 replies; 10+ messages in thread
From: Holger Dengler @ 2025-04-14 10:14 UTC (permalink / raw)
To: Harald Freudenberger, herbert, ifranzki, fcallies
Cc: linux-crypto, linux-s390
On 01/04/2025 16:50, Harald Freudenberger wrote:
> Some of the pcc sub-functions have a protected key as
> input and thus may run into the situation that this
> key may be invalid for example due to live guest migration
> to another physical hardware.
>
> Rework the inline assembler function cpacf_pcc() to
> return the condition code (cc) as return value:
> 0 - cc code 0 (normal completion)
> 1 - cc code 1 (prot key wkvp mismatch or src op out of range)
> 2 - cc code 2 (something invalid, scalar multiply infinity, ...)
> Note that cc 3 (partial completion) is handled within the asm code
> and never returned.
>
> Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Reviewed-by: Holger Dengler <dengler@linux.ibm.com>
--
Mit freundlichen Grüßen / Kind regards
Holger Dengler
--
IBM Systems, Linux on IBM Z Development
dengler@linux.ibm.com
* Re: [PATCH v3 2/3] s390/crypto: Rework protected key AES for true asynch support
2025-04-01 14:50 ` [PATCH v3 2/3] s390/crypto: Rework protected key AES for true asynch support Harald Freudenberger
2025-04-02 4:04 ` Herbert Xu
@ 2025-04-25 14:56 ` Holger Dengler
2025-05-06 14:13 ` Harald Freudenberger
1 sibling, 1 reply; 10+ messages in thread
From: Holger Dengler @ 2025-04-25 14:56 UTC (permalink / raw)
To: Harald Freudenberger
Cc: linux-crypto, linux-s390, herbert, fcallies, ifranzki
On 01/04/2025 16:50, Harald Freudenberger wrote:
> This is a complete rework of the protected key AES (PAES) implementation.
> The goal of this rework is to implement the 4 modes (ecb, cbc, ctr, xts)
> in a real asynchronous fashion:
> - init(), exit() and setkey() are synchronous and don't allocate any memory.
> - the encrypt/decrypt functions first try to do the job in a synchronous
> manner. If this fails, for example the protected key got invalid caused
> by for example a guest suspend/resume or guest migration action, the
reword: please drop one of the "for example".
> encrypt/decrypt is transfered to an instance of the crypto engine (see
typo: transferred
> below) for asynchronous processing.
> These via crypto engine postponed requests are then handled via the
> do_one_request() callback but may of course again run into a still
reword: please drop at least one "via". Proposal (if I got it correctly): "These postponed requests are then handled by the crypto engine by calling the do_one_request() callback ..."
> not converted key or the key is getting invalid. If the key is
> still not converted, the first thread does the conversion and updates
> the key status in the transformation context. The conversion is
> invoked via pkey API with a new flag PKEY_XFLAG_NOMEMALLOC.
>
> The pkey API used here - the function pkey_key2protkey() - uses
> a new version of this in-kernel-API. A new flag PKEY_XFLAG_NOMEMALLOC
> tells the PKEY layer (and subsidiary layers) that it must not allocate
> any memory causing IO operations. Note that the patches for this
> pkey/zcrypt/AP extensions are currently under review and yet not
> upstream available. SO THIS PATCH DOES NOT COMPILE YET.
As the ap-rework series is now on its way, you can remove parts of this paragraph.
> This patch together with the pkey/zcrypt/AP extensions should
> toughen the paes crypto algorithms to truly meet the requirements
> for in-kernel skcipher implementations and the usage patterns for
> the dm-crypt and dm-integrity layers.
>
> Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
It is very hard to review this patch. If there is any chance to split this up into smaller pieces, please do it.
This is the first part of the review, covering mainly common parts and ecb. The other modes will follow later.
See my comments below.
> ---
> arch/s390/crypto/paes_s390.c | 1725 +++++++++++++++++++++++-----------
> 1 file changed, 1183 insertions(+), 542 deletions(-)
>
> diff --git a/arch/s390/crypto/paes_s390.c b/arch/s390/crypto/paes_s390.c
> index 646cbbf0678d..1d1f1a98ec4d 100644
> --- a/arch/s390/crypto/paes_s390.c
> +++ b/arch/s390/crypto/paes_s390.c
> @@ -5,7 +5,7 @@
> * s390 implementation of the AES Cipher Algorithm with protected keys.
> *
> * s390 Version:
> - * Copyright IBM Corp. 2017, 2023
> + * Copyright IBM Corp. 2017, 2025
> * Author(s): Martin Schwidefsky <schwidefsky@de.ibm.com>
> * Harald Freudenberger <freude@de.ibm.com>
> */
> @@ -13,16 +13,17 @@
> #define KMSG_COMPONENT "paes_s390"
> #define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
>
> -#include <crypto/aes.h>
> -#include <crypto/algapi.h>
> -#include <linux/bug.h>
> -#include <linux/err.h>
> -#include <linux/module.h>
> #include <linux/cpufeature.h>
> +#include <linux/delay.h>
> +#include <linux/err.h>
> #include <linux/init.h>
> +#include <linux/miscdevice.h>
> +#include <linux/module.h>
> #include <linux/mutex.h>
> #include <linux/spinlock.h>
> -#include <linux/delay.h>
> +#include <crypto/aes.h>
> +#include <crypto/algapi.h>
> +#include <crypto/engine.h>
> #include <crypto/internal/skcipher.h>
> #include <crypto/xts.h>
> #include <asm/cpacf.h>
> @@ -44,23 +45,55 @@ static DEFINE_MUTEX(ctrblk_lock);
>
> static cpacf_mask_t km_functions, kmc_functions, kmctr_functions;
>
> +static struct crypto_engine *paes_crypto_engine;
> +#define MAX_QLEN 10
> +
> +/*
> + * protected key specific stuff
> + */
> +
> struct paes_protkey {
> u32 type;
> u32 len;
> u8 protkey[PXTS_256_PROTKEY_SIZE];
> };
>
> -struct key_blob {
> - /*
> - * Small keys will be stored in the keybuf. Larger keys are
> - * stored in extra allocated memory. In both cases does
> - * key point to the memory where the key is stored.
> - * The code distinguishes by checking keylen against
> - * sizeof(keybuf). See the two following helper functions.
> - */
> - u8 *key;
> - u8 keybuf[128];
> +#define PK_STATE_NO_KEY 0
> +#define PK_STATE_CONVERT_IN_PROGRESS 1
> +#define PK_STATE_VALID 2
Please use an enum here.
> +
> +struct s390_paes_ctx {
> + /* source key material used to derive a protected key from */
> + u8 keybuf[PAES_MAX_KEYSIZE];
> + unsigned int keylen;
> +
> + /* cpacf function code to use with this protected key type */
> + long fc;
> +
> + /* spinlock to atomic read/update all the following fields */
> + spinlock_t pk_lock;
> +
> + /* see PK_STATE* defines above, < 0 holds convert failure rc */
> + int pk_state;
I see no advantage in splitting the value range. On the contrary, it makes the status handling more complex.
I would prefer to use an enum for pk_state and use another element for the conversion rc.
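Just to illustrate what I have in mind - only a rough sketch, the enum and
field names are merely suggestions:

	enum paes_pk_state {
		PK_STATE_NO_KEY = 0,
		PK_STATE_CONVERT_IN_PROGRESS,
		PK_STATE_VALID,
	};

	struct s390_paes_ctx {
		...
		/* protected key state, protected by pk_lock */
		enum paes_pk_state pk_state;
		/* rc of the last failed protected key conversion, 0 otherwise */
		int pk_convert_rc;
		/* if state is valid, pk holds the protected key */
		struct paes_protkey pk;
	};

Then pk_state only ever holds one of the named states and the error rc
does not have to be squeezed into the same variable.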
> + /* if state is valid, pk holds the protected key */
> + struct paes_protkey pk;
> +};
> +
> +struct s390_pxts_ctx {
> + /* source key material used to derive a protected key from */
> + u8 keybuf[2 * PAES_MAX_KEYSIZE];
> unsigned int keylen;
> +
> + /* cpacf function code to use with this protected key type */
> + long fc;
> +
> + /* spinlock to atomic read/update all the following fields */
> + spinlock_t pk_lock;
> +
> + /* see PK_STATE* defines above, < 0 holds convert failure rc */
> + int pk_state;
Same here.
> + /* if state is valid, pk[] hold(s) the protected key(s) */
> + struct paes_protkey pk[2];
> };
>
> /*
> @@ -89,214 +122,344 @@ static inline u32 make_clrkey_token(const u8 *ck, size_t cklen, u8 *dest)
> return sizeof(*token) + cklen;
> }
>
> -static inline int _key_to_kb(struct key_blob *kb,
> - const u8 *key,
> - unsigned int keylen)
> +/*
> + * key_to_ctx() - Set key value into context, maybe construct
> + * a clear key token digestable by pkey from a clear key value.
> + */
> +static inline int key_to_ctx(struct s390_paes_ctx *ctx,
> + const u8 *key, unsigned int keylen)
The function name implies a transformation of a key into a context, not just a set of a context element. What about paes_ctx_setkey()?
> {
> + if (keylen > sizeof(ctx->keybuf))
> + return -EINVAL;
> +
> switch (keylen) {
> case 16:
> case 24:
> case 32:
> /* clear key value, prepare pkey clear key token in keybuf */
> - memset(kb->keybuf, 0, sizeof(kb->keybuf));
> - kb->keylen = make_clrkey_token(key, keylen, kb->keybuf);
> - kb->key = kb->keybuf;
> + memset(ctx->keybuf, 0, sizeof(ctx->keybuf));
> + ctx->keylen = make_clrkey_token(key, keylen, ctx->keybuf);
> break;
> default:
> /* other key material, let pkey handle this */
> - if (keylen <= sizeof(kb->keybuf))
> - kb->key = kb->keybuf;
> - else {
> - kb->key = kmalloc(keylen, GFP_KERNEL);
> - if (!kb->key)
> - return -ENOMEM;
> - }
> - memcpy(kb->key, key, keylen);
> - kb->keylen = keylen;
> + memcpy(ctx->keybuf, key, keylen);
> + ctx->keylen = keylen;
> break;
> }
>
> return 0;
> }
>
> -static inline int _xts_key_to_kb(struct key_blob *kb,
> - const u8 *key,
> - unsigned int keylen)
> +/*
> + * xts_key_to_ctx() - Set key value into context, maybe construct
> + * a clear key token digestable by pkey from a clear key value.
> + */
> +static inline int xts_key_to_ctx(struct s390_pxts_ctx *ctx,
> + const u8 *key, unsigned int keylen)
Same here, the function name implies a transformation of a key into a context, not just a set of a context element. What about pxts_ctx_setkey()?
> {
> size_t cklen = keylen / 2;
>
[...]
> +static int ecb_paes_do_crypt(struct s390_paes_ctx *ctx,
> + struct s390_pecb_req_ctx *req_ctx,
> + bool maysleep)
> {
> - struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
> - int rc;
> + struct ecb_param *param = &req_ctx->param;
> + struct skcipher_walk *walk = &req_ctx->walk;
> + unsigned int nbytes, n, k;
> + int pk_state, rc;
> +
> + if (!req_ctx->param_init_done) {
> + /* fetch and check protected key state */
> + spin_lock_bh(&ctx->pk_lock);
> + pk_state = ctx->pk_state;
> + memcpy(param->key, ctx->pk.protkey, PAES_256_PROTKEY_SIZE);
I would prefer to use the size of param->key instead of a constant value as length.
> + spin_unlock_bh(&ctx->pk_lock);
> + switch (pk_state) {
> + case PK_STATE_NO_KEY:
> + rc = -ENOKEY;
> + goto out;
> + case PK_STATE_CONVERT_IN_PROGRESS:
> + rc = -EKEYEXPIRED;
> + goto out;
> + case PK_STATE_VALID:
> + req_ctx->param_init_done = true;
> + break;
> + default:
> + rc = pk_state < 0 ? pk_state : -EIO;
> + goto out;
> + }
> + }
>
> - _free_kb_keybuf(&ctx->kb);
> - rc = _key_to_kb(&ctx->kb, in_key, key_len);
> - if (rc)
> - return rc;
> + rc = 0;
Modify the param block in req_ctx only if the protected key is valid.
int rc = 0;
if (!req_ctx->param_init_done) {
/* fetch and check protected key state */
spin_lock_bh(&ctx->pk_lock);
switch (ctx->pk_state) {
case PK_STATE_NO_KEY:
rc = -ENOKEY;
break;
case PK_STATE_CONVERT_IN_PROGRESS:
rc = -EKEYEXPIRED;
break;
case PK_STATE_VALID:
memcpy(param->key, ctx->pk.protkey, sizeof(param->key));
req_ctx->param_init_done = true;
break;
default:
rc = pk_state < 0 ? pk_state : -EIO;
break;
}
spin_unlock_bh(&ctx->pk_lock);
if (rc)
goto out;
}
> +
> + /* always walk on the ... */
What does this comment mean? I'm afraid I don't get it.
> + while ((nbytes = walk->nbytes) != 0) {
> + /* only use complete blocks */
> + n = nbytes & ~(AES_BLOCK_SIZE - 1);
> + k = cpacf_km(ctx->fc | req_ctx->modifier, param,
> + walk->dst.virt.addr, walk->src.virt.addr, n);
> + if (k)
> + rc = skcipher_walk_done(walk, nbytes - k);
> + if (k < n) {
> + if (!maysleep) {
> + rc = -EKEYEXPIRED;
> + goto out;
> + }
> + rc = paes_convert_key(ctx);
> + if (rc)
> + goto out;
> + spin_lock_bh(&ctx->pk_lock);
> + memcpy(param->key, ctx->pk.protkey, PAES_256_PROTKEY_SIZE);
> + spin_unlock_bh(&ctx->pk_lock);
> + }
> + }
>
> - return __ecb_paes_set_key(ctx);
> +out:
> + pr_debug("rc=%d\n", rc);
> + return rc;
> }
>
> static int ecb_paes_crypt(struct skcipher_request *req, unsigned long modifier)
> {
> + struct s390_pecb_req_ctx *req_ctx = skcipher_request_ctx(req);
> struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
> - struct {
> - u8 key[PAES_256_PROTKEY_SIZE];
> - } param;
> - struct skcipher_walk walk;
> - unsigned int nbytes, n, k;
> + struct skcipher_walk *walk = &req_ctx->walk;
> int rc;
>
> - rc = skcipher_walk_virt(&walk, req, false);
> + /*
> + * First try synchronous. If this fails for any reason
> + * schedule this request asynchronous via crypto engine.
> + */
> +
> + rc = skcipher_walk_virt(walk, req, false);
> if (rc)
> - return rc;
> + goto out;
>
> - spin_lock_bh(&ctx->pk_lock);
> - memcpy(param.key, ctx->pk.protkey, PAES_256_PROTKEY_SIZE);
> - spin_unlock_bh(&ctx->pk_lock);
> + req_ctx->modifier = modifier;
> + req_ctx->param_init_done = false;
>
> - while ((nbytes = walk.nbytes) != 0) {
> - /* only use complete blocks */
> - n = nbytes & ~(AES_BLOCK_SIZE - 1);
> - k = cpacf_km(ctx->fc | modifier, ¶m,
> - walk.dst.virt.addr, walk.src.virt.addr, n);
> - if (k)
> - rc = skcipher_walk_done(&walk, nbytes - k);
> - if (k < n) {
> - if (__paes_convert_key(ctx))
> - return skcipher_walk_done(&walk, -EIO);
> - spin_lock_bh(&ctx->pk_lock);
> - memcpy(param.key, ctx->pk.protkey, PAES_256_PROTKEY_SIZE);
> - spin_unlock_bh(&ctx->pk_lock);
> - }
> + rc = ecb_paes_do_crypt(ctx, req_ctx, false);
> + if (rc != -EKEYEXPIRED) {
> + if (rc)
> + skcipher_walk_done(walk, rc);
> + goto out;
> }
> +
> + rc = crypto_transfer_skcipher_request_to_engine(paes_crypto_engine, req);
> + if (rc)
> + goto out;
> +
> + rc = -EINPROGRESS;
> +
> +out:
> + if (rc != -EINPROGRESS)
> + memzero_explicit(&req_ctx->param, sizeof(req_ctx->param));
> + pr_debug("rc=%d\n", rc);
> return rc;
It took me a while to find the synchronous good-case code path. I would prefer to handle the various cases separately, either with a switch/case or by explicit checks in the main path.
rc = ecb_paes_do_crypt(ctx, req_ctx, false);
if (rc == -EKEYEXPIRED) {
rc = crypto_transfer_skcipher_request_to_engine(paes_crypto_engine, req);
rc = rc ?: -EINPROGRESS;
} else if (rc) {
skcipher_walk_done(walk, rc);
}
if (rc != -EINPROGRESS)
memzero_explicit(&req_ctx->param, sizeof(req_ctx->param));
pr_debug("rc=%d\n", rc);
return rc;
> }
>
> @@ -310,112 +473,242 @@ static int ecb_paes_decrypt(struct skcipher_request *req)
> return ecb_paes_crypt(req, CPACF_DECRYPT);
> }
>
> -static struct skcipher_alg ecb_paes_alg = {
> - .base.cra_name = "ecb(paes)",
> - .base.cra_driver_name = "ecb-paes-s390",
> - .base.cra_priority = 401, /* combo: aes + ecb + 1 */
> - .base.cra_blocksize = AES_BLOCK_SIZE,
> - .base.cra_ctxsize = sizeof(struct s390_paes_ctx),
> - .base.cra_module = THIS_MODULE,
> - .base.cra_list = LIST_HEAD_INIT(ecb_paes_alg.base.cra_list),
> - .init = ecb_paes_init,
> - .exit = ecb_paes_exit,
> - .min_keysize = PAES_MIN_KEYSIZE,
> - .max_keysize = PAES_MAX_KEYSIZE,
> - .setkey = ecb_paes_set_key,
> - .encrypt = ecb_paes_encrypt,
> - .decrypt = ecb_paes_decrypt,
> -};
> -
> -static int cbc_paes_init(struct crypto_skcipher *tfm)
> +static int ecb_paes_init(struct crypto_skcipher *tfm)
> {
> struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
>
> - ctx->kb.key = NULL;
> + memset(ctx, 0, sizeof(*ctx));
> spin_lock_init(&ctx->pk_lock);
>
> + crypto_skcipher_set_reqsize(tfm, sizeof(struct s390_pecb_req_ctx));
> +
> return 0;
> }
>
> -static void cbc_paes_exit(struct crypto_skcipher *tfm)
> +static void ecb_paes_exit(struct crypto_skcipher *tfm)
> {
> struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
>
> - _free_kb_keybuf(&ctx->kb);
> + memzero_explicit(ctx, sizeof(*ctx));
> }
>
> -static inline int __cbc_paes_set_key(struct s390_paes_ctx *ctx)
> +static int ecb_paes_do_one_request(struct crypto_engine *engine, void *areq)
> {
> - unsigned long fc;
> + struct skcipher_request *req = skcipher_request_cast(areq);
> + struct s390_pecb_req_ctx *req_ctx = skcipher_request_ctx(req);
> + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
> + struct skcipher_walk *walk = &req_ctx->walk;
> int rc;
>
> - rc = __paes_convert_key(ctx);
> - if (rc)
> - return rc;
> -
> - /* Pick the correct function code based on the protected key type */
> - fc = (ctx->pk.type == PKEY_KEYTYPE_AES_128) ? CPACF_KMC_PAES_128 :
> - (ctx->pk.type == PKEY_KEYTYPE_AES_192) ? CPACF_KMC_PAES_192 :
> - (ctx->pk.type == PKEY_KEYTYPE_AES_256) ? CPACF_KMC_PAES_256 : 0;
> + /* walk has already been prepared */
>
> - /* Check if the function code is available */
> - ctx->fc = (fc && cpacf_test_func(&kmc_functions, fc)) ? fc : 0;
> + rc = ecb_paes_do_crypt(ctx, req_ctx, true);
> + if (rc != -EKEYEXPIRED) {
> + if (rc)
> + skcipher_walk_done(walk, rc);
> + goto complete;
> + }
Same here, I would prefer to reverse the logic of the error handling.
>
> - return ctx->fc ? 0 : -EINVAL;
> + /*
> + * Protected key expired, conversion is in process.
> + * Trigger a re-schedule of this request by returning
> + * -ENOSPC ("hardware queue is full") to the crypto engine.
> + * To avoid immediately re-invocation of this callback,
> + * tell the scheduler to voluntarily give up the CPU here.
> + */
> + yield();
As mentioned by checkpatch.pl, the use of yield() should be avoided. Please use alternatives (e.g. cond_resched()).
> + pr_debug("rescheduling request\n");
> + return -ENOSPC;
> +
> +complete:
> + memzero_explicit(&req_ctx->param, sizeof(req_ctx->param));
> + pr_debug("request complete with rc=%d\n", rc);
> + local_bh_disable();
> + crypto_finalize_skcipher_request(engine, req, rc);
> + local_bh_enable();
> + return rc;
> }
[...]
--
Mit freundlichen Grüßen / Kind regards
Holger Dengler
--
IBM Systems, Linux on IBM Z Development
dengler@linux.ibm.com
* Re: [PATCH v3 2/3] s390/crypto: Rework protected key AES for true asynch support
2025-04-02 4:04 ` Herbert Xu
@ 2025-05-06 14:02 ` Harald Freudenberger
2025-05-07 3:35 ` Herbert Xu
0 siblings, 1 reply; 10+ messages in thread
From: Harald Freudenberger @ 2025-05-06 14:02 UTC (permalink / raw)
To: Herbert Xu; +Cc: dengler, ifranzki, fcallies, linux-crypto, linux-s390
On 2025-04-02 06:04, Herbert Xu wrote:
> On Tue, Apr 01, 2025 at 04:50:47PM +0200, Harald Freudenberger wrote:
>>
>> +static int ecb_paes_do_crypt(struct s390_paes_ctx *ctx,
>> + struct s390_pecb_req_ctx *req_ctx,
>> + bool maysleep)
>
> ...
>
>> + /* always walk on the ... */
>> + while ((nbytes = walk->nbytes) != 0) {
>> + /* only use complete blocks */
>> + n = nbytes & ~(AES_BLOCK_SIZE - 1);
>> + k = cpacf_km(ctx->fc | req_ctx->modifier, param,
>> + walk->dst.virt.addr, walk->src.virt.addr, n);
>> + if (k)
>> + rc = skcipher_walk_done(walk, nbytes - k);
>> + if (k < n) {
>> + if (!maysleep) {
>> + rc = -EKEYEXPIRED;
>> + goto out;
>> + }
>
> So this leaves the skcipher walk in a mapped state, to be resumed in
> a work queue later. Now I don't believe you guys have the horror of
> HIGHMEM so it's not fatal, but it's still a bit of a hack and worthy
> of a comment to at least stop people from other architectures copying
> this.
>
v4 will have this clearly documented.
>> + rc = paes_convert_key(ctx);
>
> At first I thought this was racy, but then I realised that it is not
> because only the crypto_engine thread gets called with maysleep ==
> true. Since there is only one crypto_engine thread this is safe.
>
> I think this is not really obvious though and worthy of a comment to
> explain the reliance on the single crypto engine thread.
>
This is racy but the code can handle that. The cpacf instruction
refuses to do any operation if the converted key material (the
"protected" key) is invalid. So it is in fact thinkable and possible
to replace a fresh protected key with an older (in the meantime
invalid) protected key. As the cpacf instruction detects this,
refuses to operate with an invalid key, and the calling code triggers
a (re-)conversion, this does no harm. So it is racy, but it may only
lead to additional conversions and never to invalid en- or decrypted
data.
> There is one more subtle issue to do with request ordering. Because
> networking requires packets to not be reordered, we enforce this in
> the Crypto API. An algorithm must not reorder the requests sent to
> the same tfm.
>
> To do that here, once a ctx goes into the crypto_engine, all future
> requests to the same ctx must also go through the crypto_engine, as
> long as at the time of the request being submitted prior work is still
> outstanding.
>
> The easiest way would be to have a counter in the ctx that keeps
> track of how many requests are currently outstanding in the engine.
> Then in paes_do_crypt you'd simply check the counter, and if it is
> non-zero you always put the request into the engine.
>
I am struggling with that. The thing is how to keep this information.
I extended the request context with a bool field telling me that there
is/was a request pushed to the engine and thus all following crypto
operations on this request need to go via the engine.
BUT ... the request context is not initially zeroized and there is no
init() for a request, and thus on invocation of the skcipher encrypt
or decrypt function one does not know whether the value of the bool
field can be trusted or still needs initialization. The same would
happen with a counter instead - how to initially set the counter value
to 0? Any hints on this are welcome.
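For illustration, this is roughly what I tried (the bodies are shortened
and the field name is just for illustration):

	struct s390_pecb_req_ctx {
		...
		bool via_engine;	/* was this request pushed to the engine? */
	};

	static int ecb_paes_crypt(struct skcipher_request *req, unsigned long modifier)
	{
		struct s390_pecb_req_ctx *req_ctx = skcipher_request_ctx(req);

		/*
		 * Problem: req_ctx is plain uninitialized memory here -
		 * there is no init() hook for requests - so this read
		 * may see garbage on the very first use of the request.
		 */
		if (req_ctx->via_engine) {
			/* ... must stay on the crypto engine path ... */
		}
		...
	}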
> Cheers,
* Re: [PATCH v3 2/3] s390/crypto: Rework protected key AES for true asynch support
2025-04-25 14:56 ` Holger Dengler
@ 2025-05-06 14:13 ` Harald Freudenberger
0 siblings, 0 replies; 10+ messages in thread
From: Harald Freudenberger @ 2025-05-06 14:13 UTC (permalink / raw)
To: Holger Dengler; +Cc: linux-crypto, linux-s390, herbert, fcallies, ifranzki
On 2025-04-25 16:56, Holger Dengler wrote:
> On 01/04/2025 16:50, Harald Freudenberger wrote:
>> This is a complete rework of the protected key AES (PAES)
>> implementation.
>> The goal of this rework is to implement the 4 modes (ecb, cbc, ctr,
>> xts)
>> in a real asynchronous fashion:
>> - init(), exit() and setkey() are synchronous and don't allocate any
>> memory.
>> - the encrypt/decrypt functions first try to do the job in a
>> synchronous
>> manner. If this fails, for example the protected key got invalid
>> caused
>> by for example a guest suspend/resume or guest migration action, the
>
> reword: please drop one of the "for example".
>
done
>> encrypt/decrypt is transfered to an instance of the crypto engine
>> (see
>
> typo: transferred
>
done
>> below) for asynchronous processing.
>> These via crypto engine postponed requests are then handled via the
>> do_one_request() callback but may of course again run into a still
>
> reword: please drop at least one "via". Proposal (if I got it
> correctly): "These postponed requests are then handled by the crypto
> engine by calling the do_one_request() callback ..."
>
done
>> not converted key or the key is getting invalid. If the key is
>> still not converted, the first thread does the conversion and
>> updates
>> the key status in the transformation context. The conversion is
>> invoked via pkey API with a new flag PKEY_XFLAG_NOMEMALLOC.
>>
>> The pkey API used here - the function pkey_key2protkey() - uses
>> a new version of this in-kernel-API. A new flag PKEY_XFLAG_NOMEMALLOC
>> tells the PKEY layer (and subsidiary layers) that it must not allocate
>> any memory causing IO operations. Note that the patches for this
>> pkey/zcrypt/AP extensions are currently under review and yet not
>> upstream available. SO THIS PATCH DOES NOT COMPILE YET.
>
> As the ap-rework series is now on its way, you can remove parts of
> this paragraph.
>
done
>> This patch together with the pkey/zcrypt/AP extensions should
>> toughen the paes crypto algorithms to truly meet the requirements
>> for in-kernel skcipher implementations and the usage patterns for
>> the dm-crypt and dm-integrity layers.
>>
>> Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
>
> It is very hard to review this patch. If there is any chance to split
> this up into smaller pieces, please do it.
> This is the first part of the review, covering mainly common parts and
> ecb. The other modes will follow later.
> See my comments below.
>
Sorry but I don't see how I could split this rework into digestible
units.
>> ---
>> arch/s390/crypto/paes_s390.c | 1725
>> +++++++++++++++++++++++-----------
>> 1 file changed, 1183 insertions(+), 542 deletions(-)
>>
>> diff --git a/arch/s390/crypto/paes_s390.c
>> b/arch/s390/crypto/paes_s390.c
>> index 646cbbf0678d..1d1f1a98ec4d 100644
>> --- a/arch/s390/crypto/paes_s390.c
>> +++ b/arch/s390/crypto/paes_s390.c
...
>> +#define PK_STATE_NO_KEY 0
>> +#define PK_STATE_CONVERT_IN_PROGRESS 1
>> +#define PK_STATE_VALID 2
>
> Please use an enum here.
>
I'd like to keep the one variable pk_state with either a negative errno
value if the key conversion failed, or a value >= 0 with one of the listed
states.
>> +
>> +struct s390_paes_ctx {
>> + /* source key material used to derive a protected key from */
>> + u8 keybuf[PAES_MAX_KEYSIZE];
>> + unsigned int keylen;
>> +
>> + /* cpacf function code to use with this protected key type */
>> + long fc;
>> +
>> + /* spinlock to atomic read/update all the following fields */
>> + spinlock_t pk_lock;
>> +
>> + /* see PK_STATE* defines above, < 0 holds convert failure rc */
>> + int pk_state;
>
> I see no advantage to split the value range. On the contrary, it makes
> the status handling more complex.
> I would prefer to use an enum for pk_state and use another element for
> the conversion rc.
>
>> + /* if state is valid, pk holds the protected key */
>> + struct paes_protkey pk;
>> +};
>> +
>> +struct s390_pxts_ctx {
>> + /* source key material used to derive a protected key from */
>> + u8 keybuf[2 * PAES_MAX_KEYSIZE];
>> unsigned int keylen;
>> +
>> + /* cpacf function code to use with this protected key type */
>> + long fc;
>> +
>> + /* spinlock to atomic read/update all the following fields */
>> + spinlock_t pk_lock;
>> +
>> + /* see PK_STATE* defines above, < 0 holds convert failure rc */
>> + int pk_state;
>
> Same here.
>
>> + /* if state is valid, pk[] hold(s) the protected key(s) */
>> + struct paes_protkey pk[2];
>> };
>>
>> /*
>> @@ -89,214 +122,344 @@ static inline u32 make_clrkey_token(const u8
>> *ck, size_t cklen, u8 *dest)
>> return sizeof(*token) + cklen;
>> }
>>
>> -static inline int _key_to_kb(struct key_blob *kb,
>> - const u8 *key,
>> - unsigned int keylen)
>> +/*
>> + * key_to_ctx() - Set key value into context, maybe construct
>> + * a clear key token digestable by pkey from a clear key value.
>> + */
>> +static inline int key_to_ctx(struct s390_paes_ctx *ctx,
>> + const u8 *key, unsigned int keylen)
>
> The function name implies a transformation of a key into a context,
> not just a set of a context element. What about paes_ctx_setkey()?
>
done, this is now paes_ctx_setkey()
>> {
>> + if (keylen > sizeof(ctx->keybuf))
>> + return -EINVAL;
>> +
>> switch (keylen) {
>> case 16:
>> case 24:
>> case 32:
>> /* clear key value, prepare pkey clear key token in keybuf */
>> - memset(kb->keybuf, 0, sizeof(kb->keybuf));
>> - kb->keylen = make_clrkey_token(key, keylen, kb->keybuf);
>> - kb->key = kb->keybuf;
>> + memset(ctx->keybuf, 0, sizeof(ctx->keybuf));
>> + ctx->keylen = make_clrkey_token(key, keylen, ctx->keybuf);
>> break;
>> default:
>> /* other key material, let pkey handle this */
>> - if (keylen <= sizeof(kb->keybuf))
>> - kb->key = kb->keybuf;
>> - else {
>> - kb->key = kmalloc(keylen, GFP_KERNEL);
>> - if (!kb->key)
>> - return -ENOMEM;
>> - }
>> - memcpy(kb->key, key, keylen);
>> - kb->keylen = keylen;
>> + memcpy(ctx->keybuf, key, keylen);
>> + ctx->keylen = keylen;
>> break;
>> }
>>
>> return 0;
>> }
>>
>> -static inline int _xts_key_to_kb(struct key_blob *kb,
>> - const u8 *key,
>> - unsigned int keylen)
>> +/*
>> + * xts_key_to_ctx() - Set key value into context, maybe construct
>> + * a clear key token digestable by pkey from a clear key value.
>> + */
>> +static inline int xts_key_to_ctx(struct s390_pxts_ctx *ctx,
>> + const u8 *key, unsigned int keylen)
>
> Same here, the function name implies a transformation of a key into a
> context, not just a set of a context element. What about
> pxts_ctx_setkey()?
done
>
>> {
>> size_t cklen = keylen / 2;
>>
> [...]
>> +static int ecb_paes_do_crypt(struct s390_paes_ctx *ctx,
>> + struct s390_pecb_req_ctx *req_ctx,
>> + bool maysleep)
>> {
>> - struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
>> - int rc;
>> + struct ecb_param *param = &req_ctx->param;
>> + struct skcipher_walk *walk = &req_ctx->walk;
>> + unsigned int nbytes, n, k;
>> + int pk_state, rc;
>> +
>> + if (!req_ctx->param_init_done) {
>> + /* fetch and check protected key state */
>> + spin_lock_bh(&ctx->pk_lock);
>> + pk_state = ctx->pk_state;
>> + memcpy(param->key, ctx->pk.protkey, PAES_256_PROTKEY_SIZE);
>
> I would prefer to use the size of param->key instead of a constant
> value as length.
>
I checked all these occurrences and rewrote where possible to use
sizeof().
>> + spin_unlock_bh(&ctx->pk_lock);
>> + switch (pk_state) {
>> + case PK_STATE_NO_KEY:
>> + rc = -ENOKEY;
>> + goto out;
>> + case PK_STATE_CONVERT_IN_PROGRESS:
>> + rc = -EKEYEXPIRED;
>> + goto out;
>> + case PK_STATE_VALID:
>> + req_ctx->param_init_done = true;
>> + break;
>> + default:
>> + rc = pk_state < 0 ? pk_state : -EIO;
>> + goto out;
>> + }
>> + }
>>
>> - _free_kb_keybuf(&ctx->kb);
>> - rc = _key_to_kb(&ctx->kb, in_key, key_len);
>> - if (rc)
>> - return rc;
>> + rc = 0;
>
> Modify the param block in req_ctx only if the protected key is valid.
>
> int rc = 0;
>
> if (!req_ctx->param_init_done) {
> /* fetch and check protected key state */
> spin_lock_bh(&ctx->pk_lock);
> switch (ctx->pk_state) {
> case PK_STATE_NO_KEY:
> rc = -ENOKEY;
> break;
> case PK_STATE_CONVERT_IN_PROGRESS:
> rc = -EKEYEXPIRED;
> break;
> case PK_STATE_VALID:
> memcpy(param->key, ctx->pk.protkey, sizeof(param->key));
> req_ctx->param_init_done = true;
> break;
> default:
> rc = pk_state < 0 ? pk_state : -EIO;
> break;
> }
> spin_unlock_bh(&ctx->pk_lock);
> if (rc)
> goto out;
> }
>
done (for all 4 algs)
>> +
>> + /* always walk on the ... */
>
> What does this comment mean? I'm afraid, I don't get it.
>
removed, but added a comment about the walk instead as suggested by
Herbert Xu
>> + while ((nbytes = walk->nbytes) != 0) {
>> + /* only use complete blocks */
>> + n = nbytes & ~(AES_BLOCK_SIZE - 1);
>> + k = cpacf_km(ctx->fc | req_ctx->modifier, param,
>> + walk->dst.virt.addr, walk->src.virt.addr, n);
>> + if (k)
>> + rc = skcipher_walk_done(walk, nbytes - k);
>> + if (k < n) {
>> + if (!maysleep) {
>> + rc = -EKEYEXPIRED;
>> + goto out;
>> + }
>> + rc = paes_convert_key(ctx);
>> + if (rc)
>> + goto out;
>> + spin_lock_bh(&ctx->pk_lock);
>> + memcpy(param->key, ctx->pk.protkey, PAES_256_PROTKEY_SIZE);
>> + spin_unlock_bh(&ctx->pk_lock);
>> + }
>> + }
>>
>> - return __ecb_paes_set_key(ctx);
>> +out:
>> + pr_debug("rc=%d\n", rc);
>> + return rc;
>> }
>>
>> static int ecb_paes_crypt(struct skcipher_request *req, unsigned long
>> modifier)
>> {
>> + struct s390_pecb_req_ctx *req_ctx = skcipher_request_ctx(req);
>> struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
>> struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
>> - struct {
>> - u8 key[PAES_256_PROTKEY_SIZE];
>> - } param;
>> - struct skcipher_walk walk;
>> - unsigned int nbytes, n, k;
>> + struct skcipher_walk *walk = &req_ctx->walk;
>> int rc;
>>
>> - rc = skcipher_walk_virt(&walk, req, false);
>> + /*
>> + * First try synchronous. If this fails for any reason
>> + * schedule this request asynchronous via crypto engine.
>> + */
>> +
>> + rc = skcipher_walk_virt(walk, req, false);
>> if (rc)
>> - return rc;
>> + goto out;
>>
>> - spin_lock_bh(&ctx->pk_lock);
>> - memcpy(param.key, ctx->pk.protkey, PAES_256_PROTKEY_SIZE);
>> - spin_unlock_bh(&ctx->pk_lock);
>> + req_ctx->modifier = modifier;
>> + req_ctx->param_init_done = false;
>>
>> - while ((nbytes = walk.nbytes) != 0) {
>> - /* only use complete blocks */
>> - n = nbytes & ~(AES_BLOCK_SIZE - 1);
>> - k = cpacf_km(ctx->fc | modifier, ¶m,
>> - walk.dst.virt.addr, walk.src.virt.addr, n);
>> - if (k)
>> - rc = skcipher_walk_done(&walk, nbytes - k);
>> - if (k < n) {
>> - if (__paes_convert_key(ctx))
>> - return skcipher_walk_done(&walk, -EIO);
>> - spin_lock_bh(&ctx->pk_lock);
>> - memcpy(param.key, ctx->pk.protkey, PAES_256_PROTKEY_SIZE);
>> - spin_unlock_bh(&ctx->pk_lock);
>> - }
>> + rc = ecb_paes_do_crypt(ctx, req_ctx, false);
>> + if (rc != -EKEYEXPIRED) {
>> + if (rc)
>> + skcipher_walk_done(walk, rc);
>> + goto out;
>> }
>> +
>> + rc = crypto_transfer_skcipher_request_to_engine(paes_crypto_engine,
>> req);
>> + if (rc)
>> + goto out;
>> +
>> + rc = -EINPROGRESS;
>> +
>> +out:
>> + if (rc != -EINPROGRESS)
>> + memzero_explicit(&req_ctx->param, sizeof(req_ctx->param));
>> + pr_debug("rc=%d\n", rc);
>> return rc;
>
> If took me a while to find the synchronous good case code path. I
> would prefer to handle the various cases separately, either with a
> switch/case or by explicit checks in the main path.
>
> rc = ecb_paes_do_crypt(ctx, req_ctx, false);
> if (rc == -EKEYEXPIRED) {
> rc = crypto_transfer_skcipher_request_to_engine(paes_crypto_engine,
> req);
> rc = rc ?: -EINPROGRESS;
> } else if (rc) {
> skcipher_walk_done(walk, rc);
> }
>
> if (rc != -EINPROGRESS)
> memzero_explicit(&req_ctx->param, sizeof(req_ctx->param));
> pr_debug("rc=%d\n", rc);
> return rc;
>
reworked this and 7 other places.
>> }
>>
>> @@ -310,112 +473,242 @@ static int ecb_paes_decrypt(struct
>> skcipher_request *req)
>> return ecb_paes_crypt(req, CPACF_DECRYPT);
>> }
>>
>> -static struct skcipher_alg ecb_paes_alg = {
>> - .base.cra_name = "ecb(paes)",
>> - .base.cra_driver_name = "ecb-paes-s390",
>> - .base.cra_priority = 401, /* combo: aes + ecb + 1 */
>> - .base.cra_blocksize = AES_BLOCK_SIZE,
>> - .base.cra_ctxsize = sizeof(struct s390_paes_ctx),
>> - .base.cra_module = THIS_MODULE,
>> - .base.cra_list = LIST_HEAD_INIT(ecb_paes_alg.base.cra_list),
>> - .init = ecb_paes_init,
>> - .exit = ecb_paes_exit,
>> - .min_keysize = PAES_MIN_KEYSIZE,
>> - .max_keysize = PAES_MAX_KEYSIZE,
>> - .setkey = ecb_paes_set_key,
>> - .encrypt = ecb_paes_encrypt,
>> - .decrypt = ecb_paes_decrypt,
>> -};
>> -
>> -static int cbc_paes_init(struct crypto_skcipher *tfm)
>> +static int ecb_paes_init(struct crypto_skcipher *tfm)
>> {
>> struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
>>
>> - ctx->kb.key = NULL;
>> + memset(ctx, 0, sizeof(*ctx));
>> spin_lock_init(&ctx->pk_lock);
>>
>> + crypto_skcipher_set_reqsize(tfm, sizeof(struct s390_pecb_req_ctx));
>> +
>> return 0;
>> }
>>
>> -static void cbc_paes_exit(struct crypto_skcipher *tfm)
>> +static void ecb_paes_exit(struct crypto_skcipher *tfm)
>> {
>> struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
>>
>> - _free_kb_keybuf(&ctx->kb);
>> + memzero_explicit(ctx, sizeof(*ctx));
>> }
>>
>> -static inline int __cbc_paes_set_key(struct s390_paes_ctx *ctx)
>> +static int ecb_paes_do_one_request(struct crypto_engine *engine, void
>> *areq)
>> {
>> - unsigned long fc;
>> + struct skcipher_request *req = skcipher_request_cast(areq);
>> + struct s390_pecb_req_ctx *req_ctx = skcipher_request_ctx(req);
>> + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
>> + struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);
>> + struct skcipher_walk *walk = &req_ctx->walk;
>> int rc;
>>
>> - rc = __paes_convert_key(ctx);
>> - if (rc)
>> - return rc;
>> -
>> - /* Pick the correct function code based on the protected key type */
>> - fc = (ctx->pk.type == PKEY_KEYTYPE_AES_128) ? CPACF_KMC_PAES_128 :
>> - (ctx->pk.type == PKEY_KEYTYPE_AES_192) ? CPACF_KMC_PAES_192 :
>> - (ctx->pk.type == PKEY_KEYTYPE_AES_256) ? CPACF_KMC_PAES_256 : 0;
>> + /* walk has already been prepared */
>>
>> - /* Check if the function code is available */
>> - ctx->fc = (fc && cpacf_test_func(&kmc_functions, fc)) ? fc : 0;
>> + rc = ecb_paes_do_crypt(ctx, req_ctx, true);
>> + if (rc != -EKEYEXPIRED) {
>> + if (rc)
>> + skcipher_walk_done(walk, rc);
>> + goto complete;
>> + }
>
> Same here, I would prefer to reverse the logic of the error handling.
>
>>
>> - return ctx->fc ? 0 : -EINVAL;
>> + /*
>> + * Protected key expired, conversion is in process.
>> + * Trigger a re-schedule of this request by returning
>> + * -ENOSPC ("hardware queue is full") to the crypto engine.
>> + * To avoid immediately re-invocation of this callback,
>> + * tell the scheduler to voluntarily give up the CPU here.
>> + */
>> + yield();
>
> As mentioned by checkpatch.pl, the use of yield() should be avoided.
> Please use alternatives (e.g. cond_reschedule()).
>
done - replaced with cond_resched()
>> + pr_debug("rescheduling request\n");
>> + return -ENOSPC;
>> +
>> +complete:
>> + memzero_explicit(&req_ctx->param, sizeof(req_ctx->param));
>> + pr_debug("request complete with rc=%d\n", rc);
>> + local_bh_disable();
>> + crypto_finalize_skcipher_request(engine, req, rc);
>> + local_bh_enable();
>> + return rc;
>> }
>
> [...]
Thanks, see v4 for the changes.
* Re: [PATCH v3 2/3] s390/crypto: Rework protected key AES for true asynch support
2025-05-06 14:02 ` Harald Freudenberger
@ 2025-05-07 3:35 ` Herbert Xu
0 siblings, 0 replies; 10+ messages in thread
From: Herbert Xu @ 2025-05-07 3:35 UTC (permalink / raw)
To: Harald Freudenberger
Cc: dengler, ifranzki, fcallies, linux-crypto, linux-s390
On Tue, May 06, 2025 at 04:02:41PM +0200, Harald Freudenberger wrote:
>
> > > + rc = paes_convert_key(ctx);
> >
> > At first I thought this was racy, but then I realised that it is not
> > because only the crypto_engine thread gets called with maysleep ==
> > true. Since there is only one crypto_engine thread this is safe.
> >
> > I think this is not really obvious though and worthy of a comment to
> > explain the reliance on the single crypto engine thread.
> >
>
> This is racy, but the code can handle that. The cpacf instruction
> refuses to do any operation if the converted key material (the
> "protected" key) is invalid. So it is indeed possible to replace a
> fresh protected key with an older (in the meantime invalid) protected
> key. But as the cpacf instruction detects this, refuses to operate
> with the invalid key, and the calling code then triggers a
> (re-)conversion, this does no harm. So it is racy, but the race can
> only lead to additional conversions, never to invalid en- or
> decrypted data.
Perhaps add this as a comment in the code?
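Something along these lines above the conversion call, maybe (the wording is
only a suggestion):

	/*
	 * Note: paes_convert_key() may race with other threads doing a
	 * conversion on the same tfm context. This is harmless: in the
	 * worst case a fresh protected key is overwritten by an older,
	 * meanwhile invalid one. The cpacf instruction refuses to work
	 * with an invalid key and the caller then triggers yet another
	 * conversion, so the race only costs extra conversions and can
	 * never produce wrong en-/decrypted data.
	 */
	rc = paes_convert_key(ctx);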
> I am struggling with that. The question is how to keep this information.
> I extended the request context with a bool field telling me that a
> request has been pushed to the engine and thus all following crypto
> operations on this request need to go via the engine.
> BUT ... the request context is not initially zeroized and there is no
> init() for a request, so on invocation of the skcipher encrypt or
> decrypt function one does not know whether the value of the bool field
> can be taken seriously or still needs initialization. The same would
> happen with a counter instead - how to initially set the counter
> value to 0? Any hints on this are welcome.
I think the easiest would be to copy how simd + cryptd does it.
In simd we have the same problem of a fallback path through async
cryptd, and we need to keep using that fallback once we start until
it fully drains.
So right after entering encrypt/decrypt, you check whether the
fallback is in use, and if it is, just do that and that's it.
Otherwise continue as usual:
static int simd_skcipher_encrypt(struct skcipher_request *req)
{
	...

	if (!crypto_simd_usable() ||
	    (in_atomic() && cryptd_skcipher_queued(ctx->cryptd_tfm)))
		Take fallback path
	else
		Continue on normal path
So for paes this would look like:
static int paes_skcipher_encrypt(struct skcipher_request *req)
{
	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);

	if (paes_skcipher_queued(tfm))
		return paes_skcipher_fallback(req);

	Continue on normal path
Where paes_skcipher_queued(tfm) is just a simple ref count of the
number of entries queued by that tfm onto the fallback path. IOW
every time you enqueue something you increase the refcount and
every time you dequeue something you decrease it too.
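A minimal sketch of such a counter, assuming an atomic_t in the paes tfm
context (the field and helper names here are made up for illustration):

	/* in struct s390_paes_ctx: atomic_t via_engine_ctr; (needs <linux/atomic.h>) */

	static inline bool paes_skcipher_queued(struct crypto_skcipher *tfm)
	{
		struct s390_paes_ctx *ctx = crypto_skcipher_ctx(tfm);

		return atomic_read(&ctx->via_engine_ctr) > 0;
	}

	/* bump the counter whenever a request is handed to the engine ... */
	atomic_inc(&ctx->via_engine_ctr);

	/* ... and drop it again when the request is finalized in do_one_request() */
	atomic_dec(&ctx->via_engine_ctr);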
Cheers,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
Thread overview: 10+ messages
2025-04-01 14:50 [PATCH v3 0/3] Rework protected key AES for true asynch support Harald Freudenberger
2025-04-01 14:50 ` [PATCH v3 1/3] s390/cpacf: Rework cpacf_pcc() to return condition code Harald Freudenberger
2025-04-14 10:14 ` Holger Dengler
2025-04-01 14:50 ` [PATCH v3 2/3] s390/crypto: Rework protected key AES for true asynch support Harald Freudenberger
2025-04-02 4:04 ` Herbert Xu
2025-05-06 14:02 ` Harald Freudenberger
2025-05-07 3:35 ` Herbert Xu
2025-04-25 14:56 ` Holger Dengler
2025-05-06 14:13 ` Harald Freudenberger
2025-04-01 14:50 ` [PATCH v3 3/3] Documentation: crypto_engine: Update and extend crypto engine doc Harald Freudenberger