linux-crypto.vger.kernel.org archive mirror
* [PATCH 0/8] crypto: Add lskcipher API type
@ 2023-09-14  8:28 Herbert Xu
  2023-09-14  8:28 ` [PATCH 1/8] crypto: aead - Add crypto_has_aead Herbert Xu
                   ` (8 more replies)
  0 siblings, 9 replies; 50+ messages in thread
From: Herbert Xu @ 2023-09-14  8:28 UTC (permalink / raw)
  To: Linux Crypto Mailing List; +Cc: Ard Biesheuvel

This series introduces the lskcipher API type.  Its relationship
to skcipher is the same as that between shash and ahash.
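
For illustration, here is a minimal sketch of how a user might drive
the new type with plain kernel pointers, based on the helpers added
in patch 4.  The algorithm name, IV handling and buffer arguments are
just examples, not something this series adds a user for:

#include <crypto/skcipher.h>
#include <linux/err.h>

/* Hypothetical helper: encrypt a linear buffer with cbc(aes). */
static int example_lskcipher_encrypt(const u8 *key, unsigned int keylen,
                                     const u8 *src, u8 *dst, unsigned int len)
{
        u8 iv[16] = {};         /* example IV, normally unique per message */
        struct crypto_lskcipher *tfm;
        int err;

        tfm = crypto_alloc_lskcipher("cbc(aes)", 0, 0);
        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        err = crypto_lskcipher_setkey(tfm, key, keylen);
        if (!err)
                /* src and dst are plain pointers, no SG lists or requests */
                err = crypto_lskcipher_encrypt(tfm, src, dst, len, iv);

        crypto_free_lskcipher(tfm);
        return err;
}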

This series only converts ecb and cbc to the new algorithm type.
Once all templates have been moved over, we can then convert the
cipher implementations such as aes-generic.

Ard, if you have some spare cycles, you could help with either the
template conversions or the cipher algorithm conversions.  The latter
will be applied once the templates have been completely moved over.

Just let me know which ones you'd like to do so that I won't touch
them.

Cheers,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 50+ messages in thread

* [PATCH 1/8] crypto: aead - Add crypto_has_aead
  2023-09-14  8:28 [PATCH 0/8] crypto: Add lskcipher API type Herbert Xu
@ 2023-09-14  8:28 ` Herbert Xu
  2023-09-14  8:28 ` [PATCH 2/8] ipsec: Stop using crypto_has_alg Herbert Xu
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 50+ messages in thread
From: Herbert Xu @ 2023-09-14  8:28 UTC (permalink / raw)
  To: Linux Crypto Mailing List; +Cc: Ard Biesheuvel

Add the helper crypto_has_aead.  It is meant to replace the existing
uses of crypto_has_alg for checking the availability of AEAD
algorithms.
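
As an illustration (the algorithm name is just an example; this patch
adds no users of the helper), a caller that previously probed with

	status = crypto_has_alg("gcm(aes)", CRYPTO_ALG_TYPE_AEAD,
				CRYPTO_ALG_TYPE_MASK);

can now write

	status = crypto_has_aead("gcm(aes)", 0, 0);

and no longer needs to know the AEAD type/mask values.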

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
 crypto/aead.c         |  6 ++++++
 include/crypto/aead.h | 12 ++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/crypto/aead.c b/crypto/aead.c
index d5ba204ebdbf..54906633566a 100644
--- a/crypto/aead.c
+++ b/crypto/aead.c
@@ -269,6 +269,12 @@ struct crypto_aead *crypto_alloc_aead(const char *alg_name, u32 type, u32 mask)
 }
 EXPORT_SYMBOL_GPL(crypto_alloc_aead);
 
+int crypto_has_aead(const char *alg_name, u32 type, u32 mask)
+{
+	return crypto_type_has_alg(alg_name, &crypto_aead_type, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_has_aead);
+
 static int aead_prepare_alg(struct aead_alg *alg)
 {
 	struct crypto_istat_aead *istat = aead_get_stat(alg);
diff --git a/include/crypto/aead.h b/include/crypto/aead.h
index 35e45b854a6f..51382befbe37 100644
--- a/include/crypto/aead.h
+++ b/include/crypto/aead.h
@@ -217,6 +217,18 @@ static inline void crypto_free_aead(struct crypto_aead *tfm)
 	crypto_destroy_tfm(tfm, crypto_aead_tfm(tfm));
 }
 
+/**
+ * crypto_has_aead() - Search for the availability of an aead.
+ * @alg_name: is the cra_name / name or cra_driver_name / driver name of the
+ *	      aead
+ * @type: specifies the type of the aead
+ * @mask: specifies the mask for the aead
+ *
+ * Return: true when the aead is known to the kernel crypto API; false
+ *	   otherwise
+ */
+int crypto_has_aead(const char *alg_name, u32 type, u32 mask);
+
 static inline const char *crypto_aead_driver_name(struct crypto_aead *tfm)
 {
 	return crypto_tfm_alg_driver_name(crypto_aead_tfm(tfm));
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH 2/8] ipsec: Stop using crypto_has_alg
  2023-09-14  8:28 [PATCH 0/8] crypto: Add lskcipher API type Herbert Xu
  2023-09-14  8:28 ` [PATCH 1/8] crypto: aead - Add crypto_has_aead Herbert Xu
@ 2023-09-14  8:28 ` Herbert Xu
  2023-09-14  8:28 ` [PATCH 3/8] crypto: hash - Hide CRYPTO_ALG_TYPE_AHASH_MASK Herbert Xu
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 50+ messages in thread
From: Herbert Xu @ 2023-09-14  8:28 UTC (permalink / raw)
  To: Linux Crypto Mailing List; +Cc: Ard Biesheuvel

Stop using the obsolete, type-agnostic crypto_has_alg helper.  Use
the type-specific helpers instead, such as the newly added
crypto_has_aead.

This means that changes to the underlying type/mask values won't
affect IPsec.
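
Concretely, with this patch the lookup loop below boils down to

	status = algo_list->find(list[i].name, 0, 0);

where ->find is crypto_has_aead, crypto_has_ahash, crypto_has_skcipher
or crypto_has_comp depending on the list, so no crypto type/mask
constants remain in xfrm_algo.c.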

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
 net/xfrm/xfrm_algo.c | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/net/xfrm/xfrm_algo.c b/net/xfrm/xfrm_algo.c
index 094734fbec96..41533c631431 100644
--- a/net/xfrm/xfrm_algo.c
+++ b/net/xfrm/xfrm_algo.c
@@ -5,6 +5,7 @@
  * Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
  */
 
+#include <crypto/aead.h>
 #include <crypto/hash.h>
 #include <crypto/skcipher.h>
 #include <linux/module.h>
@@ -644,38 +645,33 @@ static inline int calg_entries(void)
 }
 
 struct xfrm_algo_list {
+	int (*find)(const char *name, u32 type, u32 mask);
 	struct xfrm_algo_desc *algs;
 	int entries;
-	u32 type;
-	u32 mask;
 };
 
 static const struct xfrm_algo_list xfrm_aead_list = {
+	.find = crypto_has_aead,
 	.algs = aead_list,
 	.entries = ARRAY_SIZE(aead_list),
-	.type = CRYPTO_ALG_TYPE_AEAD,
-	.mask = CRYPTO_ALG_TYPE_MASK,
 };
 
 static const struct xfrm_algo_list xfrm_aalg_list = {
+	.find = crypto_has_ahash,
 	.algs = aalg_list,
 	.entries = ARRAY_SIZE(aalg_list),
-	.type = CRYPTO_ALG_TYPE_HASH,
-	.mask = CRYPTO_ALG_TYPE_HASH_MASK,
 };
 
 static const struct xfrm_algo_list xfrm_ealg_list = {
+	.find = crypto_has_skcipher,
 	.algs = ealg_list,
 	.entries = ARRAY_SIZE(ealg_list),
-	.type = CRYPTO_ALG_TYPE_SKCIPHER,
-	.mask = CRYPTO_ALG_TYPE_MASK,
 };
 
 static const struct xfrm_algo_list xfrm_calg_list = {
+	.find = crypto_has_comp,
 	.algs = calg_list,
 	.entries = ARRAY_SIZE(calg_list),
-	.type = CRYPTO_ALG_TYPE_COMPRESS,
-	.mask = CRYPTO_ALG_TYPE_MASK,
 };
 
 static struct xfrm_algo_desc *xfrm_find_algo(
@@ -696,8 +692,7 @@ static struct xfrm_algo_desc *xfrm_find_algo(
 		if (!probe)
 			break;
 
-		status = crypto_has_alg(list[i].name, algo_list->type,
-					algo_list->mask);
+		status = algo_list->find(list[i].name, 0, 0);
 		if (!status)
 			break;
 
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH 3/8] crypto: hash - Hide CRYPTO_ALG_TYPE_AHASH_MASK
  2023-09-14  8:28 [PATCH 0/8] crypto: Add lskcipher API type Herbert Xu
  2023-09-14  8:28 ` [PATCH 1/8] crypto: aead - Add crypto_has_aead Herbert Xu
  2023-09-14  8:28 ` [PATCH 2/8] ipsec: Stop using crypto_has_alg Herbert Xu
@ 2023-09-14  8:28 ` Herbert Xu
  2023-09-14  8:28 ` [PATCH 4/8] crypto: skcipher - Add lskcipher Herbert Xu
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 50+ messages in thread
From: Herbert Xu @ 2023-09-14  8:28 UTC (permalink / raw)
  To: Linux Crypto Mailing List; +Cc: Ard Biesheuvel

Move the macro CRYPTO_ALG_TYPE_AHASH_MASK out of linux/crypto.h
and into crypto/ahash.c so that it's not visible to users of the
Crypto API.

Also remove the unused CRYPTO_ALG_TYPE_HASH_MASK macro.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
 crypto/ahash.c         | 2 ++
 include/linux/crypto.h | 2 --
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/crypto/ahash.c b/crypto/ahash.c
index 709ef0940799..213bb3e9f245 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -21,6 +21,8 @@
 
 #include "hash.h"
 
+#define CRYPTO_ALG_TYPE_AHASH_MASK	0x0000000e
+
 static const struct crypto_type crypto_ahash_type;
 
 struct ahash_request_priv {
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index 31f6fee0c36c..a0780deb017a 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -35,8 +35,6 @@
 #define CRYPTO_ALG_TYPE_SHASH		0x0000000e
 #define CRYPTO_ALG_TYPE_AHASH		0x0000000f
 
-#define CRYPTO_ALG_TYPE_HASH_MASK	0x0000000e
-#define CRYPTO_ALG_TYPE_AHASH_MASK	0x0000000e
 #define CRYPTO_ALG_TYPE_ACOMPRESS_MASK	0x0000000e
 
 #define CRYPTO_ALG_LARVAL		0x00000010
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH 4/8] crypto: skcipher - Add lskcipher
  2023-09-14  8:28 [PATCH 0/8] crypto: Add lskcipher API type Herbert Xu
                   ` (2 preceding siblings ...)
  2023-09-14  8:28 ` [PATCH 3/8] crypto: hash - Hide CRYPTO_ALG_TYPE_AHASH_MASK Herbert Xu
@ 2023-09-14  8:28 ` Herbert Xu
  2023-09-20  6:25   ` Eric Biggers
  2023-09-14  8:28 ` [PATCH 5/8] crypto: lskcipher - Add compatibility wrapper around ECB Herbert Xu
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 50+ messages in thread
From: Herbert Xu @ 2023-09-14  8:28 UTC (permalink / raw)
  To: Linux Crypto Mailing List; +Cc: Ard Biesheuvel

Add a new API type, lskcipher, designed to take straight kernel
pointers (linear buffers) instead of SG lists.  Its relationship to
skcipher will be analogous to that between shash and ahash.
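
To give a feel for the new type, here is a sketch of what a purely
hypothetical lskcipher implementation could look like; the callback
bodies are placeholders and no such driver is part of this series:

#include <crypto/internal/skcipher.h>
#include <linux/module.h>
#include <linux/string.h>

static int example_setkey(struct crypto_lskcipher *tfm, const u8 *key,
                          unsigned int keylen)
{
        return 0;       /* a real cipher would check and store the key here */
}

static int example_crypt(struct crypto_lskcipher *tfm, const u8 *src,
                         u8 *dst, unsigned len, u8 *iv, bool final)
{
        memmove(dst, src, len); /* placeholder, no actual cipher */
        return 0;               /* >= 0: number of bytes left unprocessed */
}

static struct lskcipher_alg example_alg = {
        .setkey                 = example_setkey,
        .encrypt                = example_crypt,
        .decrypt                = example_crypt,
        .co.min_keysize         = 16,
        .co.max_keysize         = 32,
        .co.ivsize              = 16,
        .co.base.cra_name       = "example",
        .co.base.cra_driver_name = "example-generic",
        .co.base.cra_blocksize  = 16,
        .co.base.cra_priority   = 100,
        .co.base.cra_module     = THIS_MODULE,
};

Such an algorithm would then be registered with
crypto_register_lskcipher(&example_alg) and removed again with
crypto_unregister_lskcipher().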

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
 crypto/Makefile                    |   6 +-
 crypto/cryptd.c                    |   2 +-
 crypto/lskcipher.c                 | 594 +++++++++++++++++++++++++++++
 crypto/skcipher.c                  |  75 +++-
 crypto/skcipher.h                  |  30 ++
 include/crypto/internal/skcipher.h | 114 +++++-
 include/crypto/skcipher.h          | 309 ++++++++++++++-
 include/linux/crypto.h             |   1 +
 8 files changed, 1086 insertions(+), 45 deletions(-)
 create mode 100644 crypto/lskcipher.c
 create mode 100644 crypto/skcipher.h

diff --git a/crypto/Makefile b/crypto/Makefile
index 953a7e105e58..5ac6876f935a 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -16,7 +16,11 @@ obj-$(CONFIG_CRYPTO_ALGAPI2) += crypto_algapi.o
 obj-$(CONFIG_CRYPTO_AEAD2) += aead.o
 obj-$(CONFIG_CRYPTO_GENIV) += geniv.o
 
-obj-$(CONFIG_CRYPTO_SKCIPHER2) += skcipher.o
+crypto_skcipher-y += lskcipher.o
+crypto_skcipher-y += skcipher.o
+
+obj-$(CONFIG_CRYPTO_SKCIPHER2) += crypto_skcipher.o
+
 obj-$(CONFIG_CRYPTO_SEQIV) += seqiv.o
 obj-$(CONFIG_CRYPTO_ECHAINIV) += echainiv.o
 
diff --git a/crypto/cryptd.c b/crypto/cryptd.c
index bbcc368b6a55..194a92d677b9 100644
--- a/crypto/cryptd.c
+++ b/crypto/cryptd.c
@@ -929,7 +929,7 @@ static int cryptd_create(struct crypto_template *tmpl, struct rtattr **tb)
 		return PTR_ERR(algt);
 
 	switch (algt->type & algt->mask & CRYPTO_ALG_TYPE_MASK) {
-	case CRYPTO_ALG_TYPE_SKCIPHER:
+	case CRYPTO_ALG_TYPE_LSKCIPHER:
 		return cryptd_create_skcipher(tmpl, tb, algt, &queue);
 	case CRYPTO_ALG_TYPE_HASH:
 		return cryptd_create_hash(tmpl, tb, algt, &queue);
diff --git a/crypto/lskcipher.c b/crypto/lskcipher.c
new file mode 100644
index 000000000000..3343c6d955da
--- /dev/null
+++ b/crypto/lskcipher.c
@@ -0,0 +1,594 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Linear symmetric key cipher operations.
+ *
+ * Generic encrypt/decrypt wrapper for ciphers.
+ *
+ * Copyright (c) 2023 Herbert Xu <herbert@gondor.apana.org.au>
+ */
+
+#include <linux/cryptouser.h>
+#include <linux/err.h>
+#include <linux/export.h>
+#include <linux/kernel.h>
+#include <linux/seq_file.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <net/netlink.h>
+#include "skcipher.h"
+
+static inline struct crypto_lskcipher *__crypto_lskcipher_cast(
+	struct crypto_tfm *tfm)
+{
+	return container_of(tfm, struct crypto_lskcipher, base);
+}
+
+static inline struct lskcipher_alg *__crypto_lskcipher_alg(
+	struct crypto_alg *alg)
+{
+	return container_of(alg, struct lskcipher_alg, co.base);
+}
+
+static inline struct crypto_istat_cipher *lskcipher_get_stat(
+	struct lskcipher_alg *alg)
+{
+	return skcipher_get_stat_common(&alg->co);
+}
+
+static inline int crypto_lskcipher_errstat(struct lskcipher_alg *alg, int err)
+{
+	struct crypto_istat_cipher *istat = lskcipher_get_stat(alg);
+
+	if (!IS_ENABLED(CONFIG_CRYPTO_STATS))
+		return err;
+
+	if (err)
+		atomic64_inc(&istat->err_cnt);
+
+	return err;
+}
+
+static int lskcipher_setkey_unaligned(struct crypto_lskcipher *tfm,
+				      const u8 *key, unsigned int keylen)
+{
+	unsigned long alignmask = crypto_lskcipher_alignmask(tfm);
+	struct lskcipher_alg *cipher = crypto_lskcipher_alg(tfm);
+	u8 *buffer, *alignbuffer;
+	unsigned long absize;
+	int ret;
+
+	absize = keylen + alignmask;
+	buffer = kmalloc(absize, GFP_ATOMIC);
+	if (!buffer)
+		return -ENOMEM;
+
+	alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
+	memcpy(alignbuffer, key, keylen);
+	ret = cipher->setkey(tfm, alignbuffer, keylen);
+	kfree_sensitive(buffer);
+	return ret;
+}
+
+int crypto_lskcipher_setkey(struct crypto_lskcipher *tfm, const u8 *key,
+			    unsigned int keylen)
+{
+	unsigned long alignmask = crypto_lskcipher_alignmask(tfm);
+	struct lskcipher_alg *cipher = crypto_lskcipher_alg(tfm);
+
+	if (keylen < cipher->co.min_keysize || keylen > cipher->co.max_keysize)
+		return -EINVAL;
+
+	if ((unsigned long)key & alignmask)
+		return lskcipher_setkey_unaligned(tfm, key, keylen);
+	else
+		return cipher->setkey(tfm, key, keylen);
+}
+EXPORT_SYMBOL_GPL(crypto_lskcipher_setkey);
+
+static int crypto_lskcipher_crypt_unaligned(
+	struct crypto_lskcipher *tfm, const u8 *src, u8 *dst, unsigned len,
+	u8 *iv, int (*crypt)(struct crypto_lskcipher *tfm, const u8 *src,
+			     u8 *dst, unsigned len, u8 *iv, bool final))
+{
+	unsigned ivsize = crypto_lskcipher_ivsize(tfm);
+	unsigned bs = crypto_lskcipher_blocksize(tfm);
+	unsigned cs = crypto_lskcipher_chunksize(tfm);
+	int err;
+	u8 *tiv;
+	u8 *p;
+
+	BUILD_BUG_ON(MAX_CIPHER_BLOCKSIZE > PAGE_SIZE ||
+		     MAX_CIPHER_ALIGNMASK >= PAGE_SIZE);
+
+	tiv = kmalloc(PAGE_SIZE, GFP_ATOMIC);
+	if (!tiv)
+		return -ENOMEM;
+
+	memcpy(tiv, iv, ivsize);
+
+	p = kmalloc(PAGE_SIZE, GFP_ATOMIC);
+	err = -ENOMEM;
+	if (!p)
+		goto out;
+
+	while (len >= bs) {
+		unsigned chunk = min((unsigned)PAGE_SIZE, len);
+		int err;
+
+		if (chunk > cs)
+			chunk &= ~(cs - 1);
+
+		memcpy(p, src, chunk);
+		err = crypt(tfm, p, p, chunk, tiv, true);
+		if (err)
+			goto out;
+
+		memcpy(dst, p, chunk);
+		src += chunk;
+		dst += chunk;
+		len -= chunk;
+	}
+
+	err = len ? -EINVAL : 0;
+
+out:
+	memcpy(iv, tiv, ivsize);
+	kfree_sensitive(p);
+	kfree_sensitive(tiv);
+	return err;
+}
+
+static int crypto_lskcipher_crypt(struct crypto_lskcipher *tfm, const u8 *src,
+				  u8 *dst, unsigned len, u8 *iv,
+				  int (*crypt)(struct crypto_lskcipher *tfm,
+					       const u8 *src, u8 *dst,
+					       unsigned len, u8 *iv,
+					       bool final))
+{
+	unsigned long alignmask = crypto_lskcipher_alignmask(tfm);
+	struct lskcipher_alg *alg = crypto_lskcipher_alg(tfm);
+	int ret;
+
+	if (((unsigned long)src | (unsigned long)dst | (unsigned long)iv) &
+	    alignmask) {
+		ret = crypto_lskcipher_crypt_unaligned(tfm, src, dst, len, iv,
+						       crypt);
+		goto out;
+	}
+
+	ret = crypt(tfm, src, dst, len, iv, true);
+
+out:
+	return crypto_lskcipher_errstat(alg, ret);
+}
+
+int crypto_lskcipher_encrypt(struct crypto_lskcipher *tfm, const u8 *src,
+			     u8 *dst, unsigned len, u8 *iv)
+{
+	struct lskcipher_alg *alg = crypto_lskcipher_alg(tfm);
+
+	if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
+		struct crypto_istat_cipher *istat = lskcipher_get_stat(alg);
+
+		atomic64_inc(&istat->encrypt_cnt);
+		atomic64_add(len, &istat->encrypt_tlen);
+	}
+
+	return crypto_lskcipher_crypt(tfm, src, dst, len, iv, alg->encrypt);
+}
+EXPORT_SYMBOL_GPL(crypto_lskcipher_encrypt);
+
+int crypto_lskcipher_decrypt(struct crypto_lskcipher *tfm, const u8 *src,
+			     u8 *dst, unsigned len, u8 *iv)
+{
+	struct lskcipher_alg *alg = crypto_lskcipher_alg(tfm);
+
+	if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
+		struct crypto_istat_cipher *istat = lskcipher_get_stat(alg);
+
+		atomic64_inc(&istat->decrypt_cnt);
+		atomic64_add(len, &istat->decrypt_tlen);
+	}
+
+	return crypto_lskcipher_crypt(tfm, src, dst, len, iv, alg->decrypt);
+}
+EXPORT_SYMBOL_GPL(crypto_lskcipher_decrypt);
+
+int crypto_lskcipher_setkey_sg(struct crypto_skcipher *tfm, const u8 *key,
+			       unsigned int keylen)
+{
+	struct crypto_lskcipher **ctx = crypto_skcipher_ctx(tfm);
+
+	return crypto_lskcipher_setkey(*ctx, key, keylen);
+}
+
+static int crypto_lskcipher_crypt_sg(struct skcipher_request *req,
+				     int (*crypt)(struct crypto_lskcipher *tfm,
+						  const u8 *src, u8 *dst,
+						  unsigned len, u8 *iv,
+						  bool final))
+{
+	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
+	struct crypto_lskcipher **ctx = crypto_skcipher_ctx(skcipher);
+	struct crypto_lskcipher *tfm = *ctx;
+	struct skcipher_walk walk;
+	int err;
+
+	err = skcipher_walk_virt(&walk, req, false);
+
+	while (walk.nbytes) {
+		err = crypt(tfm, walk.src.virt.addr, walk.dst.virt.addr,
+			    walk.nbytes, walk.iv, walk.nbytes == walk.total);
+		err = skcipher_walk_done(&walk, err);
+	}
+
+	return err;
+}
+
+int crypto_lskcipher_encrypt_sg(struct skcipher_request *req)
+{
+	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
+	struct crypto_lskcipher **ctx = crypto_skcipher_ctx(skcipher);
+	struct lskcipher_alg *alg = crypto_lskcipher_alg(*ctx);
+
+	return crypto_lskcipher_crypt_sg(req, alg->encrypt);
+}
+
+int crypto_lskcipher_decrypt_sg(struct skcipher_request *req)
+{
+	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
+	struct crypto_lskcipher **ctx = crypto_skcipher_ctx(skcipher);
+	struct lskcipher_alg *alg = crypto_lskcipher_alg(*ctx);
+
+	return crypto_lskcipher_crypt_sg(req, alg->decrypt);
+}
+
+static void crypto_lskcipher_exit_tfm(struct crypto_tfm *tfm)
+{
+	struct crypto_lskcipher *skcipher = __crypto_lskcipher_cast(tfm);
+	struct lskcipher_alg *alg = crypto_lskcipher_alg(skcipher);
+
+	alg->exit(skcipher);
+}
+
+static int crypto_lskcipher_init_tfm(struct crypto_tfm *tfm)
+{
+	struct crypto_lskcipher *skcipher = __crypto_lskcipher_cast(tfm);
+	struct lskcipher_alg *alg = crypto_lskcipher_alg(skcipher);
+
+	if (alg->exit)
+		skcipher->base.exit = crypto_lskcipher_exit_tfm;
+
+	if (alg->init)
+		return alg->init(skcipher);
+
+	return 0;
+}
+
+static void crypto_lskcipher_free_instance(struct crypto_instance *inst)
+{
+	struct lskcipher_instance *skcipher =
+		container_of(inst, struct lskcipher_instance, s.base);
+
+	skcipher->free(skcipher);
+}
+
+static void __maybe_unused crypto_lskcipher_show(
+	struct seq_file *m, struct crypto_alg *alg)
+{
+	struct lskcipher_alg *skcipher = __crypto_lskcipher_alg(alg);
+
+	seq_printf(m, "type         : lskcipher\n");
+	seq_printf(m, "blocksize    : %u\n", alg->cra_blocksize);
+	seq_printf(m, "min keysize  : %u\n", skcipher->co.min_keysize);
+	seq_printf(m, "max keysize  : %u\n", skcipher->co.max_keysize);
+	seq_printf(m, "ivsize       : %u\n", skcipher->co.ivsize);
+	seq_printf(m, "chunksize    : %u\n", skcipher->co.chunksize);
+}
+
+static int __maybe_unused crypto_lskcipher_report(
+	struct sk_buff *skb, struct crypto_alg *alg)
+{
+	struct lskcipher_alg *skcipher = __crypto_lskcipher_alg(alg);
+	struct crypto_report_blkcipher rblkcipher;
+
+	memset(&rblkcipher, 0, sizeof(rblkcipher));
+
+	strscpy(rblkcipher.type, "lskcipher", sizeof(rblkcipher.type));
+	strscpy(rblkcipher.geniv, "<none>", sizeof(rblkcipher.geniv));
+
+	rblkcipher.blocksize = alg->cra_blocksize;
+	rblkcipher.min_keysize = skcipher->co.min_keysize;
+	rblkcipher.max_keysize = skcipher->co.max_keysize;
+	rblkcipher.ivsize = skcipher->co.ivsize;
+
+	return nla_put(skb, CRYPTOCFGA_REPORT_BLKCIPHER,
+		       sizeof(rblkcipher), &rblkcipher);
+}
+
+static int __maybe_unused crypto_lskcipher_report_stat(
+	struct sk_buff *skb, struct crypto_alg *alg)
+{
+	struct lskcipher_alg *skcipher = __crypto_lskcipher_alg(alg);
+	struct crypto_istat_cipher *istat;
+	struct crypto_stat_cipher rcipher;
+
+	istat = lskcipher_get_stat(skcipher);
+
+	memset(&rcipher, 0, sizeof(rcipher));
+
+	strscpy(rcipher.type, "cipher", sizeof(rcipher.type));
+
+	rcipher.stat_encrypt_cnt = atomic64_read(&istat->encrypt_cnt);
+	rcipher.stat_encrypt_tlen = atomic64_read(&istat->encrypt_tlen);
+	rcipher.stat_decrypt_cnt =  atomic64_read(&istat->decrypt_cnt);
+	rcipher.stat_decrypt_tlen = atomic64_read(&istat->decrypt_tlen);
+	rcipher.stat_err_cnt =  atomic64_read(&istat->err_cnt);
+
+	return nla_put(skb, CRYPTOCFGA_STAT_CIPHER, sizeof(rcipher), &rcipher);
+}
+
+static const struct crypto_type crypto_lskcipher_type = {
+	.extsize = crypto_alg_extsize,
+	.init_tfm = crypto_lskcipher_init_tfm,
+	.free = crypto_lskcipher_free_instance,
+#ifdef CONFIG_PROC_FS
+	.show = crypto_lskcipher_show,
+#endif
+#if IS_ENABLED(CONFIG_CRYPTO_USER)
+	.report = crypto_lskcipher_report,
+#endif
+#ifdef CONFIG_CRYPTO_STATS
+	.report_stat = crypto_lskcipher_report_stat,
+#endif
+	.maskclear = ~CRYPTO_ALG_TYPE_MASK,
+	.maskset = CRYPTO_ALG_TYPE_MASK,
+	.type = CRYPTO_ALG_TYPE_LSKCIPHER,
+	.tfmsize = offsetof(struct crypto_lskcipher, base),
+};
+
+static void crypto_lskcipher_exit_tfm_sg(struct crypto_tfm *tfm)
+{
+	struct crypto_lskcipher **ctx = crypto_tfm_ctx(tfm);
+
+	crypto_free_lskcipher(*ctx);
+}
+
+int crypto_init_lskcipher_ops_sg(struct crypto_tfm *tfm)
+{
+	struct crypto_lskcipher **ctx = crypto_tfm_ctx(tfm);
+	struct crypto_alg *calg = tfm->__crt_alg;
+	struct crypto_lskcipher *skcipher;
+
+	if (!crypto_mod_get(calg))
+		return -EAGAIN;
+
+	skcipher = crypto_create_tfm(calg, &crypto_lskcipher_type);
+	if (IS_ERR(skcipher)) {
+		crypto_mod_put(calg);
+		return PTR_ERR(skcipher);
+	}
+
+	*ctx = skcipher;
+	tfm->exit = crypto_lskcipher_exit_tfm_sg;
+
+	return 0;
+}
+
+int crypto_grab_lskcipher(struct crypto_lskcipher_spawn *spawn,
+			  struct crypto_instance *inst,
+			  const char *name, u32 type, u32 mask)
+{
+	spawn->base.frontend = &crypto_lskcipher_type;
+	return crypto_grab_spawn(&spawn->base, inst, name, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_grab_lskcipher);
+
+struct crypto_lskcipher *crypto_alloc_lskcipher(const char *alg_name,
+						u32 type, u32 mask)
+{
+	return crypto_alloc_tfm(alg_name, &crypto_lskcipher_type, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_alloc_lskcipher);
+
+static int lskcipher_prepare_alg(struct lskcipher_alg *alg)
+{
+	struct crypto_alg *base = &alg->co.base;
+	int err;
+
+	err = skcipher_prepare_alg_common(&alg->co);
+	if (err)
+		return err;
+
+	if (alg->co.chunksize & (alg->co.chunksize - 1))
+		return -EINVAL;
+
+	base->cra_type = &crypto_lskcipher_type;
+	base->cra_flags |= CRYPTO_ALG_TYPE_LSKCIPHER;
+
+	return 0;
+}
+
+int crypto_register_lskcipher(struct lskcipher_alg *alg)
+{
+	struct crypto_alg *base = &alg->co.base;
+	int err;
+
+	err = lskcipher_prepare_alg(alg);
+	if (err)
+		return err;
+
+	return crypto_register_alg(base);
+}
+EXPORT_SYMBOL_GPL(crypto_register_lskcipher);
+
+void crypto_unregister_lskcipher(struct lskcipher_alg *alg)
+{
+	crypto_unregister_alg(&alg->co.base);
+}
+EXPORT_SYMBOL_GPL(crypto_unregister_lskcipher);
+
+int crypto_register_lskciphers(struct lskcipher_alg *algs, int count)
+{
+	int i, ret;
+
+	for (i = 0; i < count; i++) {
+		ret = crypto_register_lskcipher(&algs[i]);
+		if (ret)
+			goto err;
+	}
+
+	return 0;
+
+err:
+	for (--i; i >= 0; --i)
+		crypto_unregister_lskcipher(&algs[i]);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(crypto_register_lskciphers);
+
+void crypto_unregister_lskciphers(struct lskcipher_alg *algs, int count)
+{
+	int i;
+
+	for (i = count - 1; i >= 0; --i)
+		crypto_unregister_lskcipher(&algs[i]);
+}
+EXPORT_SYMBOL_GPL(crypto_unregister_lskciphers);
+
+int lskcipher_register_instance(struct crypto_template *tmpl,
+				struct lskcipher_instance *inst)
+{
+	int err;
+
+	if (WARN_ON(!inst->free))
+		return -EINVAL;
+
+	err = lskcipher_prepare_alg(&inst->alg);
+	if (err)
+		return err;
+
+	return crypto_register_instance(tmpl, lskcipher_crypto_instance(inst));
+}
+EXPORT_SYMBOL_GPL(lskcipher_register_instance);
+
+static int lskcipher_setkey_simple(struct crypto_lskcipher *tfm, const u8 *key,
+				   unsigned int keylen)
+{
+	struct crypto_lskcipher *cipher = lskcipher_cipher_simple(tfm);
+
+	crypto_lskcipher_clear_flags(cipher, CRYPTO_TFM_REQ_MASK);
+	crypto_lskcipher_set_flags(cipher, crypto_lskcipher_get_flags(tfm) &
+				   CRYPTO_TFM_REQ_MASK);
+	return crypto_lskcipher_setkey(cipher, key, keylen);
+}
+
+static int lskcipher_init_tfm_simple(struct crypto_lskcipher *tfm)
+{
+	struct lskcipher_instance *inst = lskcipher_alg_instance(tfm);
+	struct crypto_lskcipher **ctx = crypto_lskcipher_ctx(tfm);
+	struct crypto_lskcipher_spawn *spawn;
+	struct crypto_lskcipher *cipher;
+
+	spawn = lskcipher_instance_ctx(inst);
+	cipher = crypto_spawn_lskcipher(spawn);
+	if (IS_ERR(cipher))
+		return PTR_ERR(cipher);
+
+	*ctx = cipher;
+	return 0;
+}
+
+static void lskcipher_exit_tfm_simple(struct crypto_lskcipher *tfm)
+{
+	struct crypto_lskcipher **ctx = crypto_lskcipher_ctx(tfm);
+
+	crypto_free_lskcipher(*ctx);
+}
+
+static void lskcipher_free_instance_simple(struct lskcipher_instance *inst)
+{
+	crypto_drop_lskcipher(lskcipher_instance_ctx(inst));
+	kfree(inst);
+}
+
+/**
+ * lskcipher_alloc_instance_simple - allocate instance of simple block cipher
+ *
+ * Allocate an lskcipher_instance for a simple block cipher mode of operation,
+ * e.g. cbc or ecb.  The instance context will have just a single crypto_spawn,
+ * that for the underlying cipher.  The {min,max}_keysize, ivsize, blocksize,
+ * alignmask, and priority are set from the underlying cipher but can be
+ * overridden if needed.  The tfm context defaults to
+ * struct crypto_lskcipher *, and default ->setkey(), ->init(), and
+ * ->exit() methods are installed.
+ *
+ * @tmpl: the template being instantiated
+ * @tb: the template parameters
+ *
+ * Return: a pointer to the new instance, or an ERR_PTR().  The caller still
+ *	   needs to register the instance.
+ */
+struct lskcipher_instance *lskcipher_alloc_instance_simple(
+	struct crypto_template *tmpl, struct rtattr **tb)
+{
+	u32 mask;
+	struct lskcipher_instance *inst;
+	struct crypto_lskcipher_spawn *spawn;
+	struct lskcipher_alg *cipher_alg;
+	int err;
+
+	err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_LSKCIPHER, &mask);
+	if (err)
+		return ERR_PTR(err);
+
+	inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
+	if (!inst)
+		return ERR_PTR(-ENOMEM);
+
+	spawn = lskcipher_instance_ctx(inst);
+	err = crypto_grab_lskcipher(spawn,
+				    lskcipher_crypto_instance(inst),
+				    crypto_attr_alg_name(tb[1]), 0, mask);
+	if (err)
+		goto err_free_inst;
+	cipher_alg = crypto_lskcipher_spawn_alg(spawn);
+
+	err = crypto_inst_setname(lskcipher_crypto_instance(inst), tmpl->name,
+				  &cipher_alg->co.base);
+	if (err)
+		goto err_free_inst;
+
+	/* Don't allow nesting. */
+	err = -ELOOP;
+	if ((cipher_alg->co.base.cra_flags & CRYPTO_ALG_INSTANCE))
+		goto err_free_inst;
+
+	err = -EINVAL;
+	if (cipher_alg->co.ivsize)
+		goto err_free_inst;
+
+	inst->free = lskcipher_free_instance_simple;
+
+	/* Default algorithm properties, can be overridden */
+	inst->alg.co.base.cra_blocksize = cipher_alg->co.base.cra_blocksize;
+	inst->alg.co.base.cra_alignmask = cipher_alg->co.base.cra_alignmask;
+	inst->alg.co.base.cra_priority = cipher_alg->co.base.cra_priority;
+	inst->alg.co.min_keysize = cipher_alg->co.min_keysize;
+	inst->alg.co.max_keysize = cipher_alg->co.max_keysize;
+	inst->alg.co.ivsize = cipher_alg->co.base.cra_blocksize;
+
+	/* Use struct crypto_lskcipher * by default, can be overridden */
+	inst->alg.co.base.cra_ctxsize = sizeof(struct crypto_lskcipher *);
+	inst->alg.setkey = lskcipher_setkey_simple;
+	inst->alg.init = lskcipher_init_tfm_simple;
+	inst->alg.exit = lskcipher_exit_tfm_simple;
+
+	return inst;
+
+err_free_inst:
+	lskcipher_free_instance_simple(inst);
+	return ERR_PTR(err);
+}
+EXPORT_SYMBOL_GPL(lskcipher_alloc_instance_simple);
diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index 7b275716cf4e..b9496dc8a609 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -24,8 +24,9 @@
 #include <linux/slab.h>
 #include <linux/string.h>
 #include <net/netlink.h>
+#include "skcipher.h"
 
-#include "internal.h"
+#define CRYPTO_ALG_TYPE_SKCIPHER_MASK	0x0000000e
 
 enum {
 	SKCIPHER_WALK_PHYS = 1 << 0,
@@ -43,6 +44,8 @@ struct skcipher_walk_buffer {
 	u8 buffer[];
 };
 
+static const struct crypto_type crypto_skcipher_type;
+
 static int skcipher_walk_next(struct skcipher_walk *walk);
 
 static inline void skcipher_map_src(struct skcipher_walk *walk)
@@ -89,11 +92,7 @@ static inline struct skcipher_alg *__crypto_skcipher_alg(
 static inline struct crypto_istat_cipher *skcipher_get_stat(
 	struct skcipher_alg *alg)
 {
-#ifdef CONFIG_CRYPTO_STATS
-	return &alg->stat;
-#else
-	return NULL;
-#endif
+	return skcipher_get_stat_common(&alg->co);
 }
 
 static inline int crypto_skcipher_errstat(struct skcipher_alg *alg, int err)
@@ -468,6 +467,7 @@ static int skcipher_walk_skcipher(struct skcipher_walk *walk,
 				  struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
 
 	walk->total = req->cryptlen;
 	walk->nbytes = 0;
@@ -485,10 +485,14 @@ static int skcipher_walk_skcipher(struct skcipher_walk *walk,
 		       SKCIPHER_WALK_SLEEP : 0;
 
 	walk->blocksize = crypto_skcipher_blocksize(tfm);
-	walk->stride = crypto_skcipher_walksize(tfm);
 	walk->ivsize = crypto_skcipher_ivsize(tfm);
 	walk->alignmask = crypto_skcipher_alignmask(tfm);
 
+	if (alg->co.base.cra_type != &crypto_skcipher_type)
+		walk->stride = alg->co.chunksize;
+	else
+		walk->stride = alg->walksize;
+
 	return skcipher_walk_first(walk);
 }
 
@@ -616,6 +620,11 @@ int crypto_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
 	unsigned long alignmask = crypto_skcipher_alignmask(tfm);
 	int err;
 
+	if (cipher->co.base.cra_type != &crypto_skcipher_type) {
+		err = crypto_lskcipher_setkey_sg(tfm, key, keylen);
+		goto out;
+	}
+
 	if (keylen < cipher->min_keysize || keylen > cipher->max_keysize)
 		return -EINVAL;
 
@@ -624,6 +633,7 @@ int crypto_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
 	else
 		err = cipher->setkey(tfm, key, keylen);
 
+out:
 	if (unlikely(err)) {
 		skcipher_set_needkey(tfm);
 		return err;
@@ -649,6 +659,8 @@ int crypto_skcipher_encrypt(struct skcipher_request *req)
 
 	if (crypto_skcipher_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
 		ret = -ENOKEY;
+	else if (alg->co.base.cra_type != &crypto_skcipher_type)
+		ret = crypto_lskcipher_encrypt_sg(req);
 	else
 		ret = alg->encrypt(req);
 
@@ -671,6 +683,8 @@ int crypto_skcipher_decrypt(struct skcipher_request *req)
 
 	if (crypto_skcipher_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
 		ret = -ENOKEY;
+	else if (alg->co.base.cra_type != &crypto_skcipher_type)
+		ret = crypto_lskcipher_decrypt_sg(req);
 	else
 		ret = alg->decrypt(req);
 
@@ -693,6 +707,9 @@ static int crypto_skcipher_init_tfm(struct crypto_tfm *tfm)
 
 	skcipher_set_needkey(skcipher);
 
+	if (tfm->__crt_alg->cra_type != &crypto_skcipher_type)
+		return crypto_init_lskcipher_ops_sg(tfm);
+
 	if (alg->exit)
 		skcipher->base.exit = crypto_skcipher_exit_tfm;
 
@@ -702,6 +719,14 @@ static int crypto_skcipher_init_tfm(struct crypto_tfm *tfm)
 	return 0;
 }
 
+static unsigned int crypto_skcipher_extsize(struct crypto_alg *alg)
+{
+	if (alg->cra_type != &crypto_skcipher_type)
+		return sizeof(struct crypto_lskcipher *);
+
+	return crypto_alg_extsize(alg);
+}
+
 static void crypto_skcipher_free_instance(struct crypto_instance *inst)
 {
 	struct skcipher_instance *skcipher =
@@ -770,7 +795,7 @@ static int __maybe_unused crypto_skcipher_report_stat(
 }
 
 static const struct crypto_type crypto_skcipher_type = {
-	.extsize = crypto_alg_extsize,
+	.extsize = crypto_skcipher_extsize,
 	.init_tfm = crypto_skcipher_init_tfm,
 	.free = crypto_skcipher_free_instance,
 #ifdef CONFIG_PROC_FS
@@ -783,7 +808,7 @@ static const struct crypto_type crypto_skcipher_type = {
 	.report_stat = crypto_skcipher_report_stat,
 #endif
 	.maskclear = ~CRYPTO_ALG_TYPE_MASK,
-	.maskset = CRYPTO_ALG_TYPE_MASK,
+	.maskset = CRYPTO_ALG_TYPE_SKCIPHER_MASK,
 	.type = CRYPTO_ALG_TYPE_SKCIPHER,
 	.tfmsize = offsetof(struct crypto_skcipher, base),
 };
@@ -834,27 +859,43 @@ int crypto_has_skcipher(const char *alg_name, u32 type, u32 mask)
 }
 EXPORT_SYMBOL_GPL(crypto_has_skcipher);
 
-static int skcipher_prepare_alg(struct skcipher_alg *alg)
+int skcipher_prepare_alg_common(struct skcipher_alg_common *alg)
 {
-	struct crypto_istat_cipher *istat = skcipher_get_stat(alg);
+	struct crypto_istat_cipher *istat = skcipher_get_stat_common(alg);
 	struct crypto_alg *base = &alg->base;
 
-	if (alg->ivsize > PAGE_SIZE / 8 || alg->chunksize > PAGE_SIZE / 8 ||
-	    alg->walksize > PAGE_SIZE / 8)
+	if (alg->ivsize > PAGE_SIZE / 8 || alg->chunksize > PAGE_SIZE / 8)
 		return -EINVAL;
 
 	if (!alg->chunksize)
 		alg->chunksize = base->cra_blocksize;
+
+	base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
+
+	if (IS_ENABLED(CONFIG_CRYPTO_STATS))
+		memset(istat, 0, sizeof(*istat));
+
+	return 0;
+}
+
+static int skcipher_prepare_alg(struct skcipher_alg *alg)
+{
+	struct crypto_alg *base = &alg->base;
+	int err;
+
+	err = skcipher_prepare_alg_common(&alg->co);
+	if (err)
+		return err;
+
+	if (alg->walksize > PAGE_SIZE / 8)
+		return -EINVAL;
+
 	if (!alg->walksize)
 		alg->walksize = alg->chunksize;
 
 	base->cra_type = &crypto_skcipher_type;
-	base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
 	base->cra_flags |= CRYPTO_ALG_TYPE_SKCIPHER;
 
-	if (IS_ENABLED(CONFIG_CRYPTO_STATS))
-		memset(istat, 0, sizeof(*istat));
-
 	return 0;
 }
 
diff --git a/crypto/skcipher.h b/crypto/skcipher.h
new file mode 100644
index 000000000000..6f1295f0fef2
--- /dev/null
+++ b/crypto/skcipher.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Cryptographic API.
+ *
+ * Copyright (c) 2023 Herbert Xu <herbert@gondor.apana.org.au>
+ */
+#ifndef _LOCAL_CRYPTO_SKCIPHER_H
+#define _LOCAL_CRYPTO_SKCIPHER_H
+
+#include <crypto/internal/skcipher.h>
+#include "internal.h"
+
+static inline struct crypto_istat_cipher *skcipher_get_stat_common(
+	struct skcipher_alg_common *alg)
+{
+#ifdef CONFIG_CRYPTO_STATS
+	return &alg->stat;
+#else
+	return NULL;
+#endif
+}
+
+int crypto_lskcipher_setkey_sg(struct crypto_skcipher *tfm, const u8 *key,
+			       unsigned int keylen);
+int crypto_lskcipher_encrypt_sg(struct skcipher_request *req);
+int crypto_lskcipher_decrypt_sg(struct skcipher_request *req);
+int crypto_init_lskcipher_ops_sg(struct crypto_tfm *tfm);
+int skcipher_prepare_alg_common(struct skcipher_alg_common *alg);
+
+#endif	/* _LOCAL_CRYPTO_SKCIPHER_H */
diff --git a/include/crypto/internal/skcipher.h b/include/crypto/internal/skcipher.h
index fb3d9e899f52..4382fd707b8a 100644
--- a/include/crypto/internal/skcipher.h
+++ b/include/crypto/internal/skcipher.h
@@ -36,10 +36,25 @@ struct skcipher_instance {
 	};
 };
 
+struct lskcipher_instance {
+	void (*free)(struct lskcipher_instance *inst);
+	union {
+		struct {
+			char head[offsetof(struct lskcipher_alg, co.base)];
+			struct crypto_instance base;
+		} s;
+		struct lskcipher_alg alg;
+	};
+};
+
 struct crypto_skcipher_spawn {
 	struct crypto_spawn base;
 };
 
+struct crypto_lskcipher_spawn {
+	struct crypto_spawn base;
+};
+
 struct skcipher_walk {
 	union {
 		struct {
@@ -80,6 +95,12 @@ static inline struct crypto_instance *skcipher_crypto_instance(
 	return &inst->s.base;
 }
 
+static inline struct crypto_instance *lskcipher_crypto_instance(
+	struct lskcipher_instance *inst)
+{
+	return &inst->s.base;
+}
+
 static inline struct skcipher_instance *skcipher_alg_instance(
 	struct crypto_skcipher *skcipher)
 {
@@ -87,11 +108,23 @@ static inline struct skcipher_instance *skcipher_alg_instance(
 			    struct skcipher_instance, alg);
 }
 
+static inline struct lskcipher_instance *lskcipher_alg_instance(
+	struct crypto_lskcipher *lskcipher)
+{
+	return container_of(crypto_lskcipher_alg(lskcipher),
+			    struct lskcipher_instance, alg);
+}
+
 static inline void *skcipher_instance_ctx(struct skcipher_instance *inst)
 {
 	return crypto_instance_ctx(skcipher_crypto_instance(inst));
 }
 
+static inline void *lskcipher_instance_ctx(struct lskcipher_instance *inst)
+{
+	return crypto_instance_ctx(lskcipher_crypto_instance(inst));
+}
+
 static inline void skcipher_request_complete(struct skcipher_request *req, int err)
 {
 	crypto_request_complete(&req->base, err);
@@ -101,29 +134,56 @@ int crypto_grab_skcipher(struct crypto_skcipher_spawn *spawn,
 			 struct crypto_instance *inst,
 			 const char *name, u32 type, u32 mask);
 
+int crypto_grab_lskcipher(struct crypto_lskcipher_spawn *spawn,
+			  struct crypto_instance *inst,
+			  const char *name, u32 type, u32 mask);
+
 static inline void crypto_drop_skcipher(struct crypto_skcipher_spawn *spawn)
 {
 	crypto_drop_spawn(&spawn->base);
 }
 
+static inline void crypto_drop_lskcipher(struct crypto_lskcipher_spawn *spawn)
+{
+	crypto_drop_spawn(&spawn->base);
+}
+
 static inline struct skcipher_alg *crypto_skcipher_spawn_alg(
 	struct crypto_skcipher_spawn *spawn)
 {
 	return container_of(spawn->base.alg, struct skcipher_alg, base);
 }
 
+static inline struct lskcipher_alg *crypto_lskcipher_spawn_alg(
+	struct crypto_lskcipher_spawn *spawn)
+{
+	return container_of(spawn->base.alg, struct lskcipher_alg, co.base);
+}
+
 static inline struct skcipher_alg *crypto_spawn_skcipher_alg(
 	struct crypto_skcipher_spawn *spawn)
 {
 	return crypto_skcipher_spawn_alg(spawn);
 }
 
+static inline struct lskcipher_alg *crypto_spawn_lskcipher_alg(
+	struct crypto_lskcipher_spawn *spawn)
+{
+	return crypto_lskcipher_spawn_alg(spawn);
+}
+
 static inline struct crypto_skcipher *crypto_spawn_skcipher(
 	struct crypto_skcipher_spawn *spawn)
 {
 	return crypto_spawn_tfm2(&spawn->base);
 }
 
+static inline struct crypto_lskcipher *crypto_spawn_lskcipher(
+	struct crypto_lskcipher_spawn *spawn)
+{
+	return crypto_spawn_tfm2(&spawn->base);
+}
+
 static inline void crypto_skcipher_set_reqsize(
 	struct crypto_skcipher *skcipher, unsigned int reqsize)
 {
@@ -144,6 +204,13 @@ void crypto_unregister_skciphers(struct skcipher_alg *algs, int count);
 int skcipher_register_instance(struct crypto_template *tmpl,
 			       struct skcipher_instance *inst);
 
+int crypto_register_lskcipher(struct lskcipher_alg *alg);
+void crypto_unregister_lskcipher(struct lskcipher_alg *alg);
+int crypto_register_lskciphers(struct lskcipher_alg *algs, int count);
+void crypto_unregister_lskciphers(struct lskcipher_alg *algs, int count);
+int lskcipher_register_instance(struct crypto_template *tmpl,
+				struct lskcipher_instance *inst);
+
 int skcipher_walk_done(struct skcipher_walk *walk, int err);
 int skcipher_walk_virt(struct skcipher_walk *walk,
 		       struct skcipher_request *req,
@@ -166,6 +233,11 @@ static inline void *crypto_skcipher_ctx(struct crypto_skcipher *tfm)
 	return crypto_tfm_ctx(&tfm->base);
 }
 
+static inline void *crypto_lskcipher_ctx(struct crypto_lskcipher *tfm)
+{
+	return crypto_tfm_ctx(&tfm->base);
+}
+
 static inline void *crypto_skcipher_ctx_dma(struct crypto_skcipher *tfm)
 {
 	return crypto_tfm_ctx_dma(&tfm->base);
@@ -209,21 +281,16 @@ static inline unsigned int crypto_skcipher_alg_walksize(
 	return alg->walksize;
 }
 
-/**
- * crypto_skcipher_walksize() - obtain walk size
- * @tfm: cipher handle
- *
- * In some cases, algorithms can only perform optimally when operating on
- * multiple blocks in parallel. This is reflected by the walksize, which
- * must be a multiple of the chunksize (or equal if the concern does not
- * apply)
- *
- * Return: walk size in bytes
- */
-static inline unsigned int crypto_skcipher_walksize(
-	struct crypto_skcipher *tfm)
+static inline unsigned int crypto_lskcipher_alg_min_keysize(
+	struct lskcipher_alg *alg)
 {
-	return crypto_skcipher_alg_walksize(crypto_skcipher_alg(tfm));
+	return alg->co.min_keysize;
+}
+
+static inline unsigned int crypto_lskcipher_alg_max_keysize(
+	struct lskcipher_alg *alg)
+{
+	return alg->co.max_keysize;
 }
 
 /* Helpers for simple block cipher modes of operation */
@@ -249,5 +316,24 @@ static inline struct crypto_alg *skcipher_ialg_simple(
 	return crypto_spawn_cipher_alg(spawn);
 }
 
+static inline struct crypto_lskcipher *lskcipher_cipher_simple(
+	struct crypto_lskcipher *tfm)
+{
+	struct crypto_lskcipher **ctx = crypto_lskcipher_ctx(tfm);
+
+	return *ctx;
+}
+
+struct lskcipher_instance *lskcipher_alloc_instance_simple(
+	struct crypto_template *tmpl, struct rtattr **tb);
+
+static inline struct lskcipher_alg *lskcipher_ialg_simple(
+	struct lskcipher_instance *inst)
+{
+	struct crypto_lskcipher_spawn *spawn = lskcipher_instance_ctx(inst);
+
+	return crypto_lskcipher_spawn_alg(spawn);
+}
+
 #endif	/* _CRYPTO_INTERNAL_SKCIPHER_H */
 
diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
index 080d1ba3611d..a648ef5ce897 100644
--- a/include/crypto/skcipher.h
+++ b/include/crypto/skcipher.h
@@ -49,6 +49,10 @@ struct crypto_sync_skcipher {
 	struct crypto_skcipher base;
 };
 
+struct crypto_lskcipher {
+	struct crypto_tfm base;
+};
+
 /*
  * struct crypto_istat_cipher - statistics for cipher algorithm
  * @encrypt_cnt:	number of encrypt requests
@@ -65,6 +69,43 @@ struct crypto_istat_cipher {
 	atomic64_t err_cnt;
 };
 
+#ifdef CONFIG_CRYPTO_STATS
+#define SKCIPHER_ALG_COMMON_STAT struct crypto_istat_cipher stat;
+#else
+#define SKCIPHER_ALG_COMMON_STAT
+#endif
+
+/*
+ * struct skcipher_alg_common - common properties of skcipher_alg
+ * @min_keysize: Minimum key size supported by the transformation. This is the
+ *		 smallest key length supported by this transformation algorithm.
+ *		 This must be set to one of the pre-defined values as this is
+ *		 not hardware specific. Possible values for this field can be
+ *		 found via git grep "_MIN_KEY_SIZE" include/crypto/
+ * @max_keysize: Maximum key size supported by the transformation. This is the
+ *		 largest key length supported by this transformation algorithm.
+ *		 This must be set to one of the pre-defined values as this is
+ *		 not hardware specific. Possible values for this field can be
+ *		 found via git grep "_MAX_KEY_SIZE" include/crypto/
+ * @ivsize: IV size applicable for transformation. The consumer must provide an
+ *	    IV of exactly that size to perform the encrypt or decrypt operation.
+ * @chunksize: Equal to the block size except for stream ciphers such as
+ *	       CTR where it is set to the underlying block size.
+ * @stat: Statistics for cipher algorithm
+ * @base: Definition of a generic crypto algorithm.
+ */
+#define SKCIPHER_ALG_COMMON {		\
+	unsigned int min_keysize;	\
+	unsigned int max_keysize;	\
+	unsigned int ivsize;		\
+	unsigned int chunksize;		\
+					\
+	SKCIPHER_ALG_COMMON_STAT	\
+					\
+	struct crypto_alg base;		\
+}
+struct skcipher_alg_common SKCIPHER_ALG_COMMON;
+
 /**
  * struct skcipher_alg - symmetric key cipher definition
  * @min_keysize: Minimum key size supported by the transformation. This is the
@@ -120,6 +161,7 @@ struct crypto_istat_cipher {
  * 	      in parallel. Should be a multiple of chunksize.
  * @stat: Statistics for cipher algorithm
  * @base: Definition of a generic crypto algorithm.
+ * @co: see struct skcipher_alg_common
  *
  * All fields except @ivsize are mandatory and must be filled.
  */
@@ -131,17 +173,55 @@ struct skcipher_alg {
 	int (*init)(struct crypto_skcipher *tfm);
 	void (*exit)(struct crypto_skcipher *tfm);
 
-	unsigned int min_keysize;
-	unsigned int max_keysize;
-	unsigned int ivsize;
-	unsigned int chunksize;
 	unsigned int walksize;
 
-#ifdef CONFIG_CRYPTO_STATS
-	struct crypto_istat_cipher stat;
-#endif
+	union {
+		struct SKCIPHER_ALG_COMMON;
+		struct skcipher_alg_common co;
+	};
+};
 
-	struct crypto_alg base;
+/**
+ * struct lskcipher_alg - linear symmetric key cipher definition
+ * @setkey: Set key for the transformation. This function is used to either
+ *	    program a supplied key into the hardware or store the key in the
+ *	    transformation context for programming it later. Note that this
+ *	    function does modify the transformation context. This function can
+ *	    be called multiple times during the existence of the transformation
+ *	    object, so one must make sure the key is properly reprogrammed into
+ *	    the hardware. This function is also responsible for checking the key
+ *	    length for validity. In case a software fallback was put in place in
+ *	    the @cra_init call, this function might need to use the fallback if
+ *	    the algorithm doesn't support all of the key sizes.
+ * @encrypt: Encrypt a number of bytes. This function is used to encrypt
+ *	     the supplied data.  This function shall not modify
+ *	     the transformation context, as this function may be called
+ *	     in parallel with the same transformation object.  Data
+ *	     may be left over if length is not a multiple of blocks
+ *	     and there is more to come (final == false).  The number of
+ *	     left-over bytes should be returned in case of success.
+ * @decrypt: Decrypt a number of bytes. This is a reverse counterpart to
+ *	     @encrypt and the conditions are exactly the same.
+ * @init: Initialize the cryptographic transformation object. This function
+ *	  is used to initialize the cryptographic transformation object.
+ *	  This function is called only once at the instantiation time, right
+ *	  after the transformation context was allocated.
+ * @exit: Deinitialize the cryptographic transformation object. This is a
+ *	  counterpart to @init, used to remove various changes set in
+ *	  @init.
+ * @co: see struct skcipher_alg_common
+ */
+struct lskcipher_alg {
+	int (*setkey)(struct crypto_lskcipher *tfm, const u8 *key,
+	              unsigned int keylen);
+	int (*encrypt)(struct crypto_lskcipher *tfm, const u8 *src,
+		       u8 *dst, unsigned len, u8 *iv, bool final);
+	int (*decrypt)(struct crypto_lskcipher *tfm, const u8 *src,
+		       u8 *dst, unsigned len, u8 *iv, bool final);
+	int (*init)(struct crypto_lskcipher *tfm);
+	void (*exit)(struct crypto_lskcipher *tfm);
+
+	struct skcipher_alg_common co;
 };
 
 #define MAX_SYNC_SKCIPHER_REQSIZE      384
@@ -213,12 +293,36 @@ struct crypto_skcipher *crypto_alloc_skcipher(const char *alg_name,
 struct crypto_sync_skcipher *crypto_alloc_sync_skcipher(const char *alg_name,
 					      u32 type, u32 mask);
 
+
+/**
+ * crypto_alloc_lskcipher() - allocate linear symmetric key cipher handle
+ * @alg_name: is the cra_name / name or cra_driver_name / driver name of the
+ *	      lskcipher
+ * @type: specifies the type of the cipher
+ * @mask: specifies the mask for the cipher
+ *
+ * Allocate a cipher handle for an lskcipher. The returned struct
+ * crypto_lskcipher is the cipher handle that is required for any subsequent
+ * API invocation for that lskcipher.
+ *
+ * Return: allocated cipher handle in case of success; IS_ERR() is true in case
+ *	   of an error, PTR_ERR() returns the error code.
+ */
+struct crypto_lskcipher *crypto_alloc_lskcipher(const char *alg_name,
+						u32 type, u32 mask);
+
 static inline struct crypto_tfm *crypto_skcipher_tfm(
 	struct crypto_skcipher *tfm)
 {
 	return &tfm->base;
 }
 
+static inline struct crypto_tfm *crypto_lskcipher_tfm(
+	struct crypto_lskcipher *tfm)
+{
+	return &tfm->base;
+}
+
 /**
  * crypto_free_skcipher() - zeroize and free cipher handle
  * @tfm: cipher handle to be freed
@@ -235,6 +339,17 @@ static inline void crypto_free_sync_skcipher(struct crypto_sync_skcipher *tfm)
 	crypto_free_skcipher(&tfm->base);
 }
 
+/**
+ * crypto_free_lskcipher() - zeroize and free cipher handle
+ * @tfm: cipher handle to be freed
+ *
+ * If @tfm is a NULL or error pointer, this function does nothing.
+ */
+static inline void crypto_free_lskcipher(struct crypto_lskcipher *tfm)
+{
+	crypto_destroy_tfm(tfm, crypto_lskcipher_tfm(tfm));
+}
+
 /**
  * crypto_has_skcipher() - Search for the availability of an skcipher.
  * @alg_name: is the cra_name / name or cra_driver_name / driver name of the
@@ -253,6 +368,19 @@ static inline const char *crypto_skcipher_driver_name(
 	return crypto_tfm_alg_driver_name(crypto_skcipher_tfm(tfm));
 }
 
+static inline const char *crypto_lskcipher_driver_name(
+	struct crypto_lskcipher *tfm)
+{
+	return crypto_tfm_alg_driver_name(crypto_lskcipher_tfm(tfm));
+}
+
+static inline struct skcipher_alg_common *crypto_skcipher_alg_common(
+	struct crypto_skcipher *tfm)
+{
+	return container_of(crypto_skcipher_tfm(tfm)->__crt_alg,
+			    struct skcipher_alg_common, base);
+}
+
 static inline struct skcipher_alg *crypto_skcipher_alg(
 	struct crypto_skcipher *tfm)
 {
@@ -260,11 +388,24 @@ static inline struct skcipher_alg *crypto_skcipher_alg(
 			    struct skcipher_alg, base);
 }
 
+static inline struct lskcipher_alg *crypto_lskcipher_alg(
+	struct crypto_lskcipher *tfm)
+{
+	return container_of(crypto_lskcipher_tfm(tfm)->__crt_alg,
+			    struct lskcipher_alg, co.base);
+}
+
 static inline unsigned int crypto_skcipher_alg_ivsize(struct skcipher_alg *alg)
 {
 	return alg->ivsize;
 }
 
+static inline unsigned int crypto_lskcipher_alg_ivsize(
+	struct lskcipher_alg *alg)
+{
+	return alg->co.ivsize;
+}
+
 /**
  * crypto_skcipher_ivsize() - obtain IV size
  * @tfm: cipher handle
@@ -276,7 +417,7 @@ static inline unsigned int crypto_skcipher_alg_ivsize(struct skcipher_alg *alg)
  */
 static inline unsigned int crypto_skcipher_ivsize(struct crypto_skcipher *tfm)
 {
-	return crypto_skcipher_alg(tfm)->ivsize;
+	return crypto_skcipher_alg_common(tfm)->ivsize;
 }
 
 static inline unsigned int crypto_sync_skcipher_ivsize(
@@ -285,6 +426,21 @@ static inline unsigned int crypto_sync_skcipher_ivsize(
 	return crypto_skcipher_ivsize(&tfm->base);
 }
 
+/**
+ * crypto_lskcipher_ivsize() - obtain IV size
+ * @tfm: cipher handle
+ *
+ * The size of the IV for the lskcipher referenced by the cipher handle is
+ * returned. This IV size may be zero if the cipher does not need an IV.
+ *
+ * Return: IV size in bytes
+ */
+static inline unsigned int crypto_lskcipher_ivsize(
+	struct crypto_lskcipher *tfm)
+{
+	return crypto_lskcipher_alg(tfm)->co.ivsize;
+}
+
 /**
  * crypto_skcipher_blocksize() - obtain block size of cipher
  * @tfm: cipher handle
@@ -301,12 +457,34 @@ static inline unsigned int crypto_skcipher_blocksize(
 	return crypto_tfm_alg_blocksize(crypto_skcipher_tfm(tfm));
 }
 
+/**
+ * crypto_lskcipher_blocksize() - obtain block size of cipher
+ * @tfm: cipher handle
+ *
+ * The block size for the lskcipher referenced with the cipher handle is
+ * returned. The caller may use that information to allocate appropriate
+ * memory for the data returned by the encryption or decryption operation
+ *
+ * Return: block size of cipher
+ */
+static inline unsigned int crypto_lskcipher_blocksize(
+	struct crypto_lskcipher *tfm)
+{
+	return crypto_tfm_alg_blocksize(crypto_lskcipher_tfm(tfm));
+}
+
 static inline unsigned int crypto_skcipher_alg_chunksize(
 	struct skcipher_alg *alg)
 {
 	return alg->chunksize;
 }
 
+static inline unsigned int crypto_lskcipher_alg_chunksize(
+	struct lskcipher_alg *alg)
+{
+	return alg->co.chunksize;
+}
+
 /**
  * crypto_skcipher_chunksize() - obtain chunk size
  * @tfm: cipher handle
@@ -321,7 +499,24 @@ static inline unsigned int crypto_skcipher_alg_chunksize(
 static inline unsigned int crypto_skcipher_chunksize(
 	struct crypto_skcipher *tfm)
 {
-	return crypto_skcipher_alg_chunksize(crypto_skcipher_alg(tfm));
+	return crypto_skcipher_alg_common(tfm)->chunksize;
+}
+
+/**
+ * crypto_lskcipher_chunksize() - obtain chunk size
+ * @tfm: cipher handle
+ *
+ * The block size is set to one for ciphers such as CTR.  However,
+ * you still need to provide incremental updates in multiples of
+ * the underlying block size as the IV does not have sub-block
+ * granularity.  This is known in this API as the chunk size.
+ *
+ * Return: chunk size in bytes
+ */
+static inline unsigned int crypto_lskcipher_chunksize(
+	struct crypto_lskcipher *tfm)
+{
+	return crypto_lskcipher_alg_chunksize(crypto_lskcipher_alg(tfm));
 }
 
 static inline unsigned int crypto_sync_skcipher_blocksize(
@@ -336,6 +531,12 @@ static inline unsigned int crypto_skcipher_alignmask(
 	return crypto_tfm_alg_alignmask(crypto_skcipher_tfm(tfm));
 }
 
+static inline unsigned int crypto_lskcipher_alignmask(
+	struct crypto_lskcipher *tfm)
+{
+	return crypto_tfm_alg_alignmask(crypto_lskcipher_tfm(tfm));
+}
+
 static inline u32 crypto_skcipher_get_flags(struct crypto_skcipher *tfm)
 {
 	return crypto_tfm_get_flags(crypto_skcipher_tfm(tfm));
@@ -371,6 +572,23 @@ static inline void crypto_sync_skcipher_clear_flags(
 	crypto_skcipher_clear_flags(&tfm->base, flags);
 }
 
+static inline u32 crypto_lskcipher_get_flags(struct crypto_lskcipher *tfm)
+{
+	return crypto_tfm_get_flags(crypto_lskcipher_tfm(tfm));
+}
+
+static inline void crypto_lskcipher_set_flags(struct crypto_lskcipher *tfm,
+					       u32 flags)
+{
+	crypto_tfm_set_flags(crypto_lskcipher_tfm(tfm), flags);
+}
+
+static inline void crypto_lskcipher_clear_flags(struct crypto_lskcipher *tfm,
+						 u32 flags)
+{
+	crypto_tfm_clear_flags(crypto_lskcipher_tfm(tfm), flags);
+}
+
 /**
  * crypto_skcipher_setkey() - set key for cipher
  * @tfm: cipher handle
@@ -396,16 +614,47 @@ static inline int crypto_sync_skcipher_setkey(struct crypto_sync_skcipher *tfm,
 	return crypto_skcipher_setkey(&tfm->base, key, keylen);
 }
 
+/**
+ * crypto_lskcipher_setkey() - set key for cipher
+ * @tfm: cipher handle
+ * @key: buffer holding the key
+ * @keylen: length of the key in bytes
+ *
+ * The caller provided key is set for the lskcipher referenced by the cipher
+ * handle.
+ *
+ * Note, the key length determines the cipher type. Many block ciphers implement
+ * different cipher modes depending on the key size, such as AES-128 vs AES-192
+ * vs. AES-256. When providing a 16 byte key for an AES cipher handle, AES-128
+ * is performed.
+ *
+ * Return: 0 if the setting of the key was successful; < 0 if an error occurred
+ */
+int crypto_lskcipher_setkey(struct crypto_lskcipher *tfm,
+			    const u8 *key, unsigned int keylen);
+
 static inline unsigned int crypto_skcipher_min_keysize(
 	struct crypto_skcipher *tfm)
 {
-	return crypto_skcipher_alg(tfm)->min_keysize;
+	return crypto_skcipher_alg_common(tfm)->min_keysize;
 }
 
 static inline unsigned int crypto_skcipher_max_keysize(
 	struct crypto_skcipher *tfm)
 {
-	return crypto_skcipher_alg(tfm)->max_keysize;
+	return crypto_skcipher_alg_common(tfm)->max_keysize;
+}
+
+static inline unsigned int crypto_lskcipher_min_keysize(
+	struct crypto_lskcipher *tfm)
+{
+	return crypto_lskcipher_alg(tfm)->co.min_keysize;
+}
+
+static inline unsigned int crypto_lskcipher_max_keysize(
+	struct crypto_lskcipher *tfm)
+{
+	return crypto_lskcipher_alg(tfm)->co.max_keysize;
 }
 
 /**
@@ -457,6 +706,42 @@ int crypto_skcipher_encrypt(struct skcipher_request *req);
  */
 int crypto_skcipher_decrypt(struct skcipher_request *req);
 
+/**
+ * crypto_lskcipher_encrypt() - encrypt plaintext
+ * @tfm: lskcipher handle
+ * @src: source buffer
+ * @dst: destination buffer
+ * @len: number of bytes to process
+ * @iv: IV for the cipher operation which must comply with the IV size defined
+ *      by crypto_lskcipher_ivsize
+ *
+ * Encrypt plaintext data using the lskcipher handle.
+ *
+ * Return: >=0 if the cipher operation was successful, if positive
+ *	   then this many bytes have been left unprocessed;
+ *	   < 0 if an error occurred
+ */
+int crypto_lskcipher_encrypt(struct crypto_lskcipher *tfm, const u8 *src,
+			     u8 *dst, unsigned len, u8 *iv);
+
+/**
+ * crypto_lskcipher_decrypt() - decrypt ciphertext
+ * @tfm: lskcipher handle
+ * @src: source buffer
+ * @dst: destination buffer
+ * @len: number of bytes to process
+ * @iv: IV for the cipher operation which must comply with the IV size defined
+ *      by crypto_lskcipher_ivsize
+ *
+ * Decrypt ciphertext data using the lskcipher handle.
+ *
+ * Return: >=0 if the cipher operation was successful, if positive
+ *	   then this many bytes have been left unprocessed;
+ *	   < 0 if an error occurred
+ */
+int crypto_lskcipher_decrypt(struct crypto_lskcipher *tfm, const u8 *src,
+			     u8 *dst, unsigned len, u8 *iv);
+
 /**
  * DOC: Symmetric Key Cipher Request Handle
  *
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index a0780deb017a..f3c3a3b27fac 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -24,6 +24,7 @@
 #define CRYPTO_ALG_TYPE_CIPHER		0x00000001
 #define CRYPTO_ALG_TYPE_COMPRESS	0x00000002
 #define CRYPTO_ALG_TYPE_AEAD		0x00000003
+#define CRYPTO_ALG_TYPE_LSKCIPHER	0x00000004
 #define CRYPTO_ALG_TYPE_SKCIPHER	0x00000005
 #define CRYPTO_ALG_TYPE_AKCIPHER	0x00000006
 #define CRYPTO_ALG_TYPE_SIG		0x00000007
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


^ permalink raw reply related	[flat|nested] 50+ messages in thread
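
A minimal usage sketch of the linear API documented in this patch, operating
directly on kernel virtual addresses.  The crypto_alloc_lskcipher() and
crypto_free_lskcipher() helpers are assumed from the parts of the patch not
quoted above, and "cbc(aes)" is only an illustrative algorithm name:

#include <crypto/skcipher.h>	/* lskcipher declarations added by this patch */
#include <linux/err.h>

/* Encrypt a buffer in place; no SG lists, no request object. */
static int example_lskcipher_encrypt(const u8 *key, unsigned int keylen,
				     u8 *buf, unsigned int len, u8 *iv)
{
	struct crypto_lskcipher *tfm;
	int err;

	tfm = crypto_alloc_lskcipher("cbc(aes)", 0, 0);	/* assumed helper */
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_lskcipher_setkey(tfm, key, keylen);
	if (!err)
		/* A positive return means that many tail bytes were left
		 * unprocessed; < 0 is an error. */
		err = crypto_lskcipher_encrypt(tfm, buf, buf, len, iv);

	crypto_free_lskcipher(tfm);
	return err;
}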

* [PATCH 5/8] crypto: lskcipher - Add compatibility wrapper around ECB
  2023-09-14  8:28 [PATCH 0/8] crypto: Add lskcipher API type Herbert Xu
                   ` (3 preceding siblings ...)
  2023-09-14  8:28 ` [PATCH 4/8] crypto: skcipher - Add lskcipher Herbert Xu
@ 2023-09-14  8:28 ` Herbert Xu
  2023-09-14  8:28 ` [PATCH 6/8] crypto: testmgr - Add support for lskcipher algorithms Herbert Xu
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 50+ messages in thread
From: Herbert Xu @ 2023-09-14  8:28 UTC (permalink / raw)
  To: Linux Crypto Mailing List; +Cc: Ard Biesheuvel

As an aid to the transition from cipher algorithm implementations
to lskcipher, add a temporary wrapper when creating simple lskcipher
templates by using ecb(X) instead of X if an lskcipher implementation
of X cannot be found.

This can be reverted once all cipher implementations have switched
over to lskcipher.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
 crypto/lskcipher.c | 57 ++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 52 insertions(+), 5 deletions(-)

diff --git a/crypto/lskcipher.c b/crypto/lskcipher.c
index 3343c6d955da..9be3c04bc62a 100644
--- a/crypto/lskcipher.c
+++ b/crypto/lskcipher.c
@@ -536,13 +536,19 @@ struct lskcipher_instance *lskcipher_alloc_instance_simple(
 	u32 mask;
 	struct lskcipher_instance *inst;
 	struct crypto_lskcipher_spawn *spawn;
+	char ecb_name[CRYPTO_MAX_ALG_NAME];
 	struct lskcipher_alg *cipher_alg;
+	const char *cipher_name;
 	int err;
 
 	err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_LSKCIPHER, &mask);
 	if (err)
 		return ERR_PTR(err);
 
+	cipher_name = crypto_attr_alg_name(tb[1]);
+	if (IS_ERR(cipher_name))
+		return ERR_CAST(cipher_name);
+
 	inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
 	if (!inst)
 		return ERR_PTR(-ENOMEM);
@@ -550,9 +556,23 @@ struct lskcipher_instance *lskcipher_alloc_instance_simple(
 	spawn = lskcipher_instance_ctx(inst);
 	err = crypto_grab_lskcipher(spawn,
 				    lskcipher_crypto_instance(inst),
-				    crypto_attr_alg_name(tb[1]), 0, mask);
+				    cipher_name, 0, mask);
+
+	ecb_name[0] = 0;
+	if (err == -ENOENT && !!memcmp(tmpl->name, "ecb", 4)) {
+		err = -ENAMETOOLONG;
+		if (snprintf(ecb_name, CRYPTO_MAX_ALG_NAME, "ecb(%s)",
+			     cipher_name) >= CRYPTO_MAX_ALG_NAME)
+			goto err_free_inst;
+
+		err = crypto_grab_lskcipher(spawn,
+					    lskcipher_crypto_instance(inst),
+					    ecb_name, 0, mask);
+	}
+
 	if (err)
 		goto err_free_inst;
+
 	cipher_alg = crypto_lskcipher_spawn_alg(spawn);
 
 	err = crypto_inst_setname(lskcipher_crypto_instance(inst), tmpl->name,
@@ -560,10 +580,37 @@ struct lskcipher_instance *lskcipher_alloc_instance_simple(
 	if (err)
 		goto err_free_inst;
 
-	/* Don't allow nesting. */
-	err = -ELOOP;
-	if ((cipher_alg->co.base.cra_flags & CRYPTO_ALG_INSTANCE))
-		goto err_free_inst;
+	if (ecb_name[0]) {
+		int len;
+
+		len = strscpy(ecb_name, &cipher_alg->co.base.cra_name[4],
+			      sizeof(ecb_name));
+		if (len < 2)
+			goto err_free_inst;
+
+		if (ecb_name[len - 1] != ')')
+			goto err_free_inst;
+
+		ecb_name[len - 1] = 0;
+
+		err = -ENAMETOOLONG;
+		if (snprintf(inst->alg.co.base.cra_name, CRYPTO_MAX_ALG_NAME,
+			     "%s(%s)", tmpl->name, ecb_name) >=
+		    CRYPTO_MAX_ALG_NAME)
+			goto err_free_inst;
+
+		if (strcmp(ecb_name, cipher_name) &&
+		    snprintf(inst->alg.co.base.cra_driver_name,
+			     CRYPTO_MAX_ALG_NAME,
+			     "%s(%s)", tmpl->name, cipher_name) >=
+		    CRYPTO_MAX_ALG_NAME)
+			goto err_free_inst;
+	} else {
+		/* Don't allow nesting. */
+		err = -ELOOP;
+		if ((cipher_alg->co.base.cra_flags & CRYPTO_ALG_INSTANCE))
+			goto err_free_inst;
+	}
 
 	err = -EINVAL;
 	if (cipher_alg->co.ivsize)
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


^ permalink raw reply related	[flat|nested] 50+ messages in thread
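
A worked example of the fallback, assuming "cbc(aes)" is instantiated while
"aes" exists only as a legacy cipher algorithm (the sequence is inferred from
the hunks above, not taken from a real log):

	crypto_grab_lskcipher(spawn, inst, "aes", 0, mask)        -> -ENOENT
	snprintf(ecb_name, CRYPTO_MAX_ALG_NAME, "ecb(%s)", "aes") -> "ecb(aes)"
	crypto_grab_lskcipher(spawn, inst, "ecb(aes)", 0, mask)   -> succeeds via the
	                                                             ecb compatibility
	                                                             wrapper (patch 7/8)

The instance's cra_name is then set back to "cbc(aes)" by stripping the
"ecb(...)" wrapper from the underlying algorithm's cra_name.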

* [PATCH 6/8] crypto: testmgr - Add support for lskcipher algorithms
  2023-09-14  8:28 [PATCH 0/8] crypto: Add lskcipher API type Herbert Xu
                   ` (4 preceding siblings ...)
  2023-09-14  8:28 ` [PATCH 5/8] crypto: lskcipher - Add compatibility wrapper around ECB Herbert Xu
@ 2023-09-14  8:28 ` Herbert Xu
  2023-09-14  8:28 ` [PATCH 7/8] crypto: ecb - Convert from skcipher to lskcipher Herbert Xu
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 50+ messages in thread
From: Herbert Xu @ 2023-09-14  8:28 UTC (permalink / raw)
  To: Linux Crypto Mailing List; +Cc: Ard Biesheuvel

Test lskcipher algorithms using the same logic as cipher algorithms.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
 crypto/testmgr.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 216878c8bc3d..aed4a6bf47ad 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -5945,6 +5945,25 @@ int alg_test(const char *driver, const char *alg, u32 type, u32 mask)
 	return rc;
 
 notest:
+	if ((type & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_LSKCIPHER) {
+		char nalg[CRYPTO_MAX_ALG_NAME];
+
+		if (snprintf(nalg, sizeof(nalg), "ecb(%s)", alg) >=
+		    sizeof(nalg))
+			goto notest2;
+
+		i = alg_find_test(nalg);
+		if (i < 0)
+			goto notest2;
+
+		if (fips_enabled && !alg_test_descs[i].fips_allowed)
+			goto non_fips_alg;
+
+		rc = alg_test_skcipher(alg_test_descs + i, driver, type, mask);
+		goto test_done;
+	}
+
+notest2:
 	printk(KERN_INFO "alg: No test for %s (%s)\n", alg, driver);
 
 	if (type & CRYPTO_ALG_FIPS_INTERNAL)
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH 7/8] crypto: ecb - Convert from skcipher to lskcipher
  2023-09-14  8:28 [PATCH 0/8] crypto: Add lskcipher API type Herbert Xu
                   ` (5 preceding siblings ...)
  2023-09-14  8:28 ` [PATCH 6/8] crypto: testmgr - Add support for lskcipher algorithms Herbert Xu
@ 2023-09-14  8:28 ` Herbert Xu
  2023-09-14  8:28 ` [PATCH 8/8] crypto: cbc " Herbert Xu
  2023-09-14  8:51 ` [PATCH 0/8] crypto: Add lskcipher API type Ard Biesheuvel
  8 siblings, 0 replies; 50+ messages in thread
From: Herbert Xu @ 2023-09-14  8:28 UTC (permalink / raw)
  To: Linux Crypto Mailing List; +Cc: Ard Biesheuvel

This patch adds two different implementations of ECB.  First of
all an lskcipher wrapper around existing ciphers is introduced as
a temporary transition aid.

Secondly a permanent lskcipher template is also added.  It's simply
a wrapper around the underlying lskcipher algorithm.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
 crypto/ecb.c | 206 ++++++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 164 insertions(+), 42 deletions(-)

diff --git a/crypto/ecb.c b/crypto/ecb.c
index 71fbb0543d64..cc7625d1a475 100644
--- a/crypto/ecb.c
+++ b/crypto/ecb.c
@@ -5,75 +5,196 @@
  * Copyright (c) 2006 Herbert Xu <herbert@gondor.apana.org.au>
  */
 
-#include <crypto/algapi.h>
 #include <crypto/internal/cipher.h>
 #include <crypto/internal/skcipher.h>
 #include <linux/err.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
+#include <linux/slab.h>
 
-static int crypto_ecb_crypt(struct skcipher_request *req,
-			    struct crypto_cipher *cipher,
+static int crypto_ecb_crypt(struct crypto_cipher *cipher, const u8 *src,
+			    u8 *dst, unsigned nbytes, bool final,
 			    void (*fn)(struct crypto_tfm *, u8 *, const u8 *))
 {
 	const unsigned int bsize = crypto_cipher_blocksize(cipher);
-	struct skcipher_walk walk;
-	unsigned int nbytes;
+
+	while (nbytes >= bsize) {
+		fn(crypto_cipher_tfm(cipher), dst, src);
+
+		src += bsize;
+		dst += bsize;
+
+		nbytes -= bsize;
+	}
+
+	return nbytes && final ? -EINVAL : nbytes;
+}
+
+static int crypto_ecb_encrypt2(struct crypto_lskcipher *tfm, const u8 *src,
+			       u8 *dst, unsigned len, u8 *iv, bool final)
+{
+	struct crypto_cipher **ctx = crypto_lskcipher_ctx(tfm);
+	struct crypto_cipher *cipher = *ctx;
+
+	return crypto_ecb_crypt(cipher, src, dst, len, final,
+				crypto_cipher_alg(cipher)->cia_encrypt);
+}
+
+static int crypto_ecb_decrypt2(struct crypto_lskcipher *tfm, const u8 *src,
+			       u8 *dst, unsigned len, u8 *iv, bool final)
+{
+	struct crypto_cipher **ctx = crypto_lskcipher_ctx(tfm);
+	struct crypto_cipher *cipher = *ctx;
+
+	return crypto_ecb_crypt(cipher, src, dst, len, final,
+				crypto_cipher_alg(cipher)->cia_decrypt);
+}
+
+static int lskcipher_setkey_simple2(struct crypto_lskcipher *tfm,
+				    const u8 *key, unsigned int keylen)
+{
+	struct crypto_cipher **ctx = crypto_lskcipher_ctx(tfm);
+	struct crypto_cipher *cipher = *ctx;
+
+	crypto_cipher_clear_flags(cipher, CRYPTO_TFM_REQ_MASK);
+	crypto_cipher_set_flags(cipher, crypto_lskcipher_get_flags(tfm) &
+				CRYPTO_TFM_REQ_MASK);
+	return crypto_cipher_setkey(cipher, key, keylen);
+}
+
+static int lskcipher_init_tfm_simple2(struct crypto_lskcipher *tfm)
+{
+	struct lskcipher_instance *inst = lskcipher_alg_instance(tfm);
+	struct crypto_cipher **ctx = crypto_lskcipher_ctx(tfm);
+	struct crypto_cipher_spawn *spawn;
+	struct crypto_cipher *cipher;
+
+	spawn = lskcipher_instance_ctx(inst);
+	cipher = crypto_spawn_cipher(spawn);
+	if (IS_ERR(cipher))
+		return PTR_ERR(cipher);
+
+	*ctx = cipher;
+	return 0;
+}
+
+static void lskcipher_exit_tfm_simple2(struct crypto_lskcipher *tfm)
+{
+	struct crypto_cipher **ctx = crypto_lskcipher_ctx(tfm);
+
+	crypto_free_cipher(*ctx);
+}
+
+static void lskcipher_free_instance_simple2(struct lskcipher_instance *inst)
+{
+	crypto_drop_cipher(lskcipher_instance_ctx(inst));
+	kfree(inst);
+}
+
+static struct lskcipher_instance *lskcipher_alloc_instance_simple2(
+	struct crypto_template *tmpl, struct rtattr **tb)
+{
+	struct crypto_cipher_spawn *spawn;
+	struct lskcipher_instance *inst;
+	struct crypto_alg *cipher_alg;
+	u32 mask;
 	int err;
 
-	err = skcipher_walk_virt(&walk, req, false);
+	err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_LSKCIPHER, &mask);
+	if (err)
+		return ERR_PTR(err);
 
-	while ((nbytes = walk.nbytes) != 0) {
-		const u8 *src = walk.src.virt.addr;
-		u8 *dst = walk.dst.virt.addr;
+	inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
+	if (!inst)
+		return ERR_PTR(-ENOMEM);
+	spawn = lskcipher_instance_ctx(inst);
 
-		do {
-			fn(crypto_cipher_tfm(cipher), dst, src);
+	err = crypto_grab_cipher(spawn, lskcipher_crypto_instance(inst),
+				 crypto_attr_alg_name(tb[1]), 0, mask);
+	if (err)
+		goto err_free_inst;
+	cipher_alg = crypto_spawn_cipher_alg(spawn);
 
-			src += bsize;
-			dst += bsize;
-		} while ((nbytes -= bsize) >= bsize);
+	err = crypto_inst_setname(lskcipher_crypto_instance(inst), tmpl->name,
+				  cipher_alg);
+	if (err)
+		goto err_free_inst;
 
-		err = skcipher_walk_done(&walk, nbytes);
-	}
+	inst->free = lskcipher_free_instance_simple2;
+
+	/* Default algorithm properties, can be overridden */
+	inst->alg.co.base.cra_blocksize = cipher_alg->cra_blocksize;
+	inst->alg.co.base.cra_alignmask = cipher_alg->cra_alignmask;
+	inst->alg.co.base.cra_priority = cipher_alg->cra_priority;
+	inst->alg.co.min_keysize = cipher_alg->cra_cipher.cia_min_keysize;
+	inst->alg.co.max_keysize = cipher_alg->cra_cipher.cia_max_keysize;
+	inst->alg.co.ivsize = cipher_alg->cra_blocksize;
+
+	/* Use struct crypto_cipher * by default, can be overridden */
+	inst->alg.co.base.cra_ctxsize = sizeof(struct crypto_cipher *);
+	inst->alg.setkey = lskcipher_setkey_simple2;
+	inst->alg.init = lskcipher_init_tfm_simple2;
+	inst->alg.exit = lskcipher_exit_tfm_simple2;
+
+	return inst;
+
+err_free_inst:
+	lskcipher_free_instance_simple2(inst);
+	return ERR_PTR(err);
+}
+
+static int crypto_ecb_create2(struct crypto_template *tmpl, struct rtattr **tb)
+{
+	struct lskcipher_instance *inst;
+	int err;
+
+	inst = lskcipher_alloc_instance_simple2(tmpl, tb);
+	if (IS_ERR(inst))
+		return PTR_ERR(inst);
+
+	/* ECB mode doesn't take an IV */
+	inst->alg.co.ivsize = 0;
+
+	inst->alg.encrypt = crypto_ecb_encrypt2;
+	inst->alg.decrypt = crypto_ecb_decrypt2;
+
+	err = lskcipher_register_instance(tmpl, inst);
+	if (err)
+		inst->free(inst);
 
 	return err;
 }
 
-static int crypto_ecb_encrypt(struct skcipher_request *req)
-{
-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-	struct crypto_cipher *cipher = skcipher_cipher_simple(tfm);
-
-	return crypto_ecb_crypt(req, cipher,
-				crypto_cipher_alg(cipher)->cia_encrypt);
-}
-
-static int crypto_ecb_decrypt(struct skcipher_request *req)
-{
-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-	struct crypto_cipher *cipher = skcipher_cipher_simple(tfm);
-
-	return crypto_ecb_crypt(req, cipher,
-				crypto_cipher_alg(cipher)->cia_decrypt);
-}
-
 static int crypto_ecb_create(struct crypto_template *tmpl, struct rtattr **tb)
 {
-	struct skcipher_instance *inst;
+	struct crypto_lskcipher_spawn *spawn;
+	struct lskcipher_alg *cipher_alg;
+	struct lskcipher_instance *inst;
 	int err;
 
-	inst = skcipher_alloc_instance_simple(tmpl, tb);
-	if (IS_ERR(inst))
-		return PTR_ERR(inst);
+	inst = lskcipher_alloc_instance_simple(tmpl, tb);
+	if (IS_ERR(inst)) {
+		err = crypto_ecb_create2(tmpl, tb);
+		return err;
+	}
 
-	inst->alg.ivsize = 0; /* ECB mode doesn't take an IV */
+	spawn = lskcipher_instance_ctx(inst);
+	cipher_alg = crypto_lskcipher_spawn_alg(spawn);
 
-	inst->alg.encrypt = crypto_ecb_encrypt;
-	inst->alg.decrypt = crypto_ecb_decrypt;
+	/* ECB mode doesn't take an IV */
+	inst->alg.co.ivsize = 0;
+	if (cipher_alg->co.ivsize)
+		return -EINVAL;
 
-	err = skcipher_register_instance(tmpl, inst);
+	inst->alg.co.base.cra_ctxsize = cipher_alg->co.base.cra_ctxsize;
+	inst->alg.setkey = cipher_alg->setkey;
+	inst->alg.encrypt = cipher_alg->encrypt;
+	inst->alg.decrypt = cipher_alg->decrypt;
+	inst->alg.init = cipher_alg->init;
+	inst->alg.exit = cipher_alg->exit;
+
+	err = lskcipher_register_instance(tmpl, inst);
 	if (err)
 		inst->free(inst);
 
@@ -102,3 +223,4 @@ module_exit(crypto_ecb_module_exit);
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("ECB block cipher mode of operation");
 MODULE_ALIAS_CRYPTO("ecb");
+MODULE_IMPORT_NS(CRYPTO_INTERNAL);
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH 8/8] crypto: cbc - Convert from skcipher to lskcipher
  2023-09-14  8:28 [PATCH 0/8] crypto: Add lskcipher API type Herbert Xu
                   ` (6 preceding siblings ...)
  2023-09-14  8:28 ` [PATCH 7/8] crypto: ecb - Convert from skcipher to lskcipher Herbert Xu
@ 2023-09-14  8:28 ` Herbert Xu
  2023-10-02 20:25   ` Nathan Chancellor
  2023-09-14  8:51 ` [PATCH 0/8] crypto: Add lskcipher API type Ard Biesheuvel
  8 siblings, 1 reply; 50+ messages in thread
From: Herbert Xu @ 2023-09-14  8:28 UTC (permalink / raw)
  To: Linux Crypto Mailing List; +Cc: Ard Biesheuvel

Replace the existing skcipher CBC template with an lskcipher version.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---
 crypto/cbc.c | 159 +++++++++++++++++++--------------------------------
 1 file changed, 59 insertions(+), 100 deletions(-)

diff --git a/crypto/cbc.c b/crypto/cbc.c
index 6c03e96b945f..28345b8d921c 100644
--- a/crypto/cbc.c
+++ b/crypto/cbc.c
@@ -5,8 +5,6 @@
  * Copyright (c) 2006-2016 Herbert Xu <herbert@gondor.apana.org.au>
  */
 
-#include <crypto/algapi.h>
-#include <crypto/internal/cipher.h>
 #include <crypto/internal/skcipher.h>
 #include <linux/err.h>
 #include <linux/init.h>
@@ -14,99 +12,71 @@
 #include <linux/log2.h>
 #include <linux/module.h>
 
-static int crypto_cbc_encrypt_segment(struct skcipher_walk *walk,
-				      struct crypto_skcipher *skcipher)
+static int crypto_cbc_encrypt_segment(struct crypto_lskcipher *tfm,
+				      const u8 *src, u8 *dst, unsigned nbytes,
+				      u8 *iv)
 {
-	unsigned int bsize = crypto_skcipher_blocksize(skcipher);
-	void (*fn)(struct crypto_tfm *, u8 *, const u8 *);
-	unsigned int nbytes = walk->nbytes;
-	u8 *src = walk->src.virt.addr;
-	u8 *dst = walk->dst.virt.addr;
-	struct crypto_cipher *cipher;
-	struct crypto_tfm *tfm;
-	u8 *iv = walk->iv;
+	unsigned int bsize = crypto_lskcipher_blocksize(tfm);
 
-	cipher = skcipher_cipher_simple(skcipher);
-	tfm = crypto_cipher_tfm(cipher);
-	fn = crypto_cipher_alg(cipher)->cia_encrypt;
-
-	do {
+	for (; nbytes >= bsize; src += bsize, dst += bsize, nbytes -= bsize) {
 		crypto_xor(iv, src, bsize);
-		fn(tfm, dst, iv);
+		crypto_lskcipher_encrypt(tfm, iv, dst, bsize, NULL);
 		memcpy(iv, dst, bsize);
-
-		src += bsize;
-		dst += bsize;
-	} while ((nbytes -= bsize) >= bsize);
+	}
 
 	return nbytes;
 }
 
-static int crypto_cbc_encrypt_inplace(struct skcipher_walk *walk,
-				      struct crypto_skcipher *skcipher)
+static int crypto_cbc_encrypt_inplace(struct crypto_lskcipher *tfm,
+				      u8 *src, unsigned nbytes, u8 *oiv)
 {
-	unsigned int bsize = crypto_skcipher_blocksize(skcipher);
-	void (*fn)(struct crypto_tfm *, u8 *, const u8 *);
-	unsigned int nbytes = walk->nbytes;
-	u8 *src = walk->src.virt.addr;
-	struct crypto_cipher *cipher;
-	struct crypto_tfm *tfm;
-	u8 *iv = walk->iv;
+	unsigned int bsize = crypto_lskcipher_blocksize(tfm);
+	u8 *iv = oiv;
 
-	cipher = skcipher_cipher_simple(skcipher);
-	tfm = crypto_cipher_tfm(cipher);
-	fn = crypto_cipher_alg(cipher)->cia_encrypt;
+	if (nbytes < bsize)
+		goto out;
 
 	do {
 		crypto_xor(src, iv, bsize);
-		fn(tfm, src, src);
+		crypto_lskcipher_encrypt(tfm, src, src, bsize, NULL);
 		iv = src;
 
 		src += bsize;
 	} while ((nbytes -= bsize) >= bsize);
 
-	memcpy(walk->iv, iv, bsize);
+	memcpy(oiv, iv, bsize);
 
+out:
 	return nbytes;
 }
 
-static int crypto_cbc_encrypt(struct skcipher_request *req)
+static int crypto_cbc_encrypt(struct crypto_lskcipher *tfm, const u8 *src,
+			      u8 *dst, unsigned len, u8 *iv, bool final)
 {
-	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
-	struct skcipher_walk walk;
-	int err;
+	struct crypto_lskcipher **ctx = crypto_lskcipher_ctx(tfm);
+	struct crypto_lskcipher *cipher = *ctx;
+	int rem;
 
-	err = skcipher_walk_virt(&walk, req, false);
+	if (src == dst)
+		rem = crypto_cbc_encrypt_inplace(cipher, dst, len, iv);
+	else
+		rem = crypto_cbc_encrypt_segment(cipher, src, dst, len, iv);
 
-	while (walk.nbytes) {
-		if (walk.src.virt.addr == walk.dst.virt.addr)
-			err = crypto_cbc_encrypt_inplace(&walk, skcipher);
-		else
-			err = crypto_cbc_encrypt_segment(&walk, skcipher);
-		err = skcipher_walk_done(&walk, err);
-	}
-
-	return err;
+	return rem && final ? -EINVAL : rem;
 }
 
-static int crypto_cbc_decrypt_segment(struct skcipher_walk *walk,
-				      struct crypto_skcipher *skcipher)
+static int crypto_cbc_decrypt_segment(struct crypto_lskcipher *tfm,
+				      const u8 *src, u8 *dst, unsigned nbytes,
+				      u8 *oiv)
 {
-	unsigned int bsize = crypto_skcipher_blocksize(skcipher);
-	void (*fn)(struct crypto_tfm *, u8 *, const u8 *);
-	unsigned int nbytes = walk->nbytes;
-	u8 *src = walk->src.virt.addr;
-	u8 *dst = walk->dst.virt.addr;
-	struct crypto_cipher *cipher;
-	struct crypto_tfm *tfm;
-	u8 *iv = walk->iv;
+	unsigned int bsize = crypto_lskcipher_blocksize(tfm);
+	const u8 *iv = oiv;
 
-	cipher = skcipher_cipher_simple(skcipher);
-	tfm = crypto_cipher_tfm(cipher);
-	fn = crypto_cipher_alg(cipher)->cia_decrypt;
+	if (nbytes < bsize)
+		goto out;
 
 	do {
-		fn(tfm, dst, src);
+		crypto_lskcipher_decrypt(tfm, src, dst, bsize, NULL);
 		crypto_xor(dst, iv, bsize);
 		iv = src;
 
@@ -114,83 +84,72 @@ static int crypto_cbc_decrypt_segment(struct skcipher_walk *walk,
 		dst += bsize;
 	} while ((nbytes -= bsize) >= bsize);
 
-	memcpy(walk->iv, iv, bsize);
+	memcpy(oiv, iv, bsize);
 
+out:
 	return nbytes;
 }
 
-static int crypto_cbc_decrypt_inplace(struct skcipher_walk *walk,
-				      struct crypto_skcipher *skcipher)
+static int crypto_cbc_decrypt_inplace(struct crypto_lskcipher *tfm,
+				      u8 *src, unsigned nbytes, u8 *iv)
 {
-	unsigned int bsize = crypto_skcipher_blocksize(skcipher);
-	void (*fn)(struct crypto_tfm *, u8 *, const u8 *);
-	unsigned int nbytes = walk->nbytes;
-	u8 *src = walk->src.virt.addr;
+	unsigned int bsize = crypto_lskcipher_blocksize(tfm);
 	u8 last_iv[MAX_CIPHER_BLOCKSIZE];
-	struct crypto_cipher *cipher;
-	struct crypto_tfm *tfm;
 
-	cipher = skcipher_cipher_simple(skcipher);
-	tfm = crypto_cipher_tfm(cipher);
-	fn = crypto_cipher_alg(cipher)->cia_decrypt;
+	if (nbytes < bsize)
+		goto out;
 
 	/* Start of the last block. */
 	src += nbytes - (nbytes & (bsize - 1)) - bsize;
 	memcpy(last_iv, src, bsize);
 
 	for (;;) {
-		fn(tfm, src, src);
+		crypto_lskcipher_decrypt(tfm, src, src, bsize, NULL);
 		if ((nbytes -= bsize) < bsize)
 			break;
 		crypto_xor(src, src - bsize, bsize);
 		src -= bsize;
 	}
 
-	crypto_xor(src, walk->iv, bsize);
-	memcpy(walk->iv, last_iv, bsize);
+	crypto_xor(src, iv, bsize);
+	memcpy(iv, last_iv, bsize);
 
+out:
 	return nbytes;
 }
 
-static int crypto_cbc_decrypt(struct skcipher_request *req)
+static int crypto_cbc_decrypt(struct crypto_lskcipher *tfm, const u8 *src,
+			      u8 *dst, unsigned len, u8 *iv, bool final)
 {
-	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
-	struct skcipher_walk walk;
-	int err;
+	struct crypto_lskcipher **ctx = crypto_lskcipher_ctx(tfm);
+	struct crypto_lskcipher *cipher = *ctx;
+	int rem;
 
-	err = skcipher_walk_virt(&walk, req, false);
+	if (src == dst)
+		rem = crypto_cbc_decrypt_inplace(cipher, dst, len, iv);
+	else
+		rem = crypto_cbc_decrypt_segment(cipher, src, dst, len, iv);
 
-	while (walk.nbytes) {
-		if (walk.src.virt.addr == walk.dst.virt.addr)
-			err = crypto_cbc_decrypt_inplace(&walk, skcipher);
-		else
-			err = crypto_cbc_decrypt_segment(&walk, skcipher);
-		err = skcipher_walk_done(&walk, err);
-	}
-
-	return err;
+	return rem && final ? -EINVAL : rem;
 }
 
 static int crypto_cbc_create(struct crypto_template *tmpl, struct rtattr **tb)
 {
-	struct skcipher_instance *inst;
-	struct crypto_alg *alg;
+	struct lskcipher_instance *inst;
 	int err;
 
-	inst = skcipher_alloc_instance_simple(tmpl, tb);
+	inst = lskcipher_alloc_instance_simple(tmpl, tb);
 	if (IS_ERR(inst))
 		return PTR_ERR(inst);
 
-	alg = skcipher_ialg_simple(inst);
-
 	err = -EINVAL;
-	if (!is_power_of_2(alg->cra_blocksize))
+	if (!is_power_of_2(inst->alg.co.base.cra_blocksize))
 		goto out_free_inst;
 
 	inst->alg.encrypt = crypto_cbc_encrypt;
 	inst->alg.decrypt = crypto_cbc_decrypt;
 
-	err = skcipher_register_instance(tmpl, inst);
+	err = lskcipher_register_instance(tmpl, inst);
 	if (err) {
 out_free_inst:
 		inst->free(inst);
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


^ permalink raw reply related	[flat|nested] 50+ messages in thread

* Re: [PATCH 0/8] crypto: Add lskcipher API type
  2023-09-14  8:28 [PATCH 0/8] crypto: Add lskcipher API type Herbert Xu
                   ` (7 preceding siblings ...)
  2023-09-14  8:28 ` [PATCH 8/8] crypto: cbc " Herbert Xu
@ 2023-09-14  8:51 ` Ard Biesheuvel
  2023-09-14  8:56   ` Herbert Xu
  8 siblings, 1 reply; 50+ messages in thread
From: Ard Biesheuvel @ 2023-09-14  8:51 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Linux Crypto Mailing List

On Thu, 14 Sept 2023 at 10:28, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> This series introduces the lskcipher API type.  Its relationship
> to skcipher is the same as that between shash and ahash.
>
> This series only converts ecb and cbc to the new algorithm type.
> Once all templates have been moved over, we can then convert the
> cipher implementations such as aes-generic.
>
> Ard, if you have some spare cycles you can help with either the
> templates or the cipher algorithm conversions.  The latter will
> be applied once the templates have been completely moved over.
>
> Just let me know which ones you'd like to do so I won't touch
> them.
>

Hello Herbert,

Thanks for sending this.

So the intent is for lskcipher to ultimately supplant the current
cipher entirely, right? And lskcipher can be used directly by clients
of the crypto API, in which case kernel VAs may be used directly, but
no async support is available, while skcipher API clients will gain
access to lskciphers via a generic wrapper (if needed?)

That makes sense but it would help to spell this out.

I'd be happy to help out here but I'll be off on vacation for ~3 weeks
after this week so I won't get around to it before mid-October. What I
will do (if it helps) is rebase my recent RISC-V scalar AES cipher
patches onto this, and implement ecb(aes) instead (which is the idea
IIUC?)

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 0/8] crypto: Add lskcipher API type
  2023-09-14  8:51 ` [PATCH 0/8] crypto: Add lskcipher API type Ard Biesheuvel
@ 2023-09-14  8:56   ` Herbert Xu
  2023-09-14  9:18     ` Ard Biesheuvel
  0 siblings, 1 reply; 50+ messages in thread
From: Herbert Xu @ 2023-09-14  8:56 UTC (permalink / raw)
  To: Ard Biesheuvel; +Cc: Linux Crypto Mailing List

On Thu, Sep 14, 2023 at 10:51:21AM +0200, Ard Biesheuvel wrote:
>
> So the intent is for lskcipher to ultimately supplant the current
> cipher entirely, right? And lskcipher can be used directly by clients
> of the crypto API, in which case kernel VAs may be used directly, but
> no async support is available, while skcipher API clients will gain
> access to lskciphers via a generic wrapper (if needed?)
> 
> That makes sense but it would help to spell this out.

Yes that's the idea.  It is pretty much exactly the same as how
shash and ahash are handled and used.

Because of the way I structured the ecb transition code (it will
take an old cipher and repackage it as an lskcipher), we need to
convert the templates first and then do the cipher => lskcipher
conversion.

> I'd be happy to help out here but I'll be off on vacation for ~3 weeks
> after this week so i won't get around to it before mid October. What I
> will do (if it helps) is rebase my recent RISC-V scalar AES cipher
> patches onto this, and implement ecb(aes) instead (which is the idea
> IIUC?)

That sounds good.  In fact let me attach the aes-generic proof-
of-concept conversion (it can only be applied after all templates
have been converted, so if you test it now everything but ecb/cbc
will be broken).

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
--
diff --git a/crypto/aes_generic.c b/crypto/aes_generic.c
index 666474b81c6a..afb74ee04193 100644
--- a/crypto/aes_generic.c
+++ b/crypto/aes_generic.c
@@ -47,14 +47,13 @@
  * ---------------------------------------------------------------------------
  */
 
-#include <crypto/aes.h>
-#include <crypto/algapi.h>
-#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/types.h>
-#include <linux/errno.h>
-#include <asm/byteorder.h>
 #include <asm/unaligned.h>
+#include <crypto/aes.h>
+#include <crypto/internal/skcipher.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
 
 static inline u8 byte(const u32 x, const unsigned n)
 {
@@ -1123,7 +1122,7 @@ EXPORT_SYMBOL_GPL(crypto_it_tab);
 
 /**
  * crypto_aes_set_key - Set the AES key.
- * @tfm:	The %crypto_tfm that is used in the context.
+ * @tfm:	The %crypto_lskcipher that is used in the context.
  * @in_key:	The input key.
  * @key_len:	The size of the key.
  *
@@ -1133,10 +1132,10 @@ EXPORT_SYMBOL_GPL(crypto_it_tab);
  *
  * Return: 0 on success; -EINVAL on failure (only happens for bad key lengths)
  */
-int crypto_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
-		unsigned int key_len)
+int crypto_aes_set_key(struct crypto_lskcipher *tfm, const u8 *in_key,
+		       unsigned int key_len)
 {
-	struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
+	struct crypto_aes_ctx *ctx = crypto_lskcipher_ctx(tfm);
 
 	return aes_expandkey(ctx, in_key, key_len);
 }
@@ -1173,9 +1172,9 @@ EXPORT_SYMBOL_GPL(crypto_aes_set_key);
 	f_rl(bo, bi, 3, k);	\
 } while (0)
 
-static void crypto_aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+static void aes_encrypt_one(struct crypto_lskcipher *tfm, const u8 *in, u8 *out)
 {
-	const struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
+	const struct crypto_aes_ctx *ctx = crypto_lskcipher_ctx(tfm);
 	u32 b0[4], b1[4];
 	const u32 *kp = ctx->key_enc + 4;
 	const int key_len = ctx->key_length;
@@ -1212,6 +1211,17 @@ static void crypto_aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 	put_unaligned_le32(b0[3], out + 12);
 }
 
+static int crypto_aes_encrypt(struct crypto_lskcipher *tfm, const u8 *src,
+			      u8 *dst, unsigned nbytes, u8 *iv, bool final)
+{
+	const unsigned int bsize = AES_BLOCK_SIZE;
+
+	for (; nbytes >= bsize; src += bsize, dst += bsize, nbytes -= bsize)
+		aes_encrypt_one(tfm, src, dst);
+
+	return nbytes && final ? -EINVAL : nbytes;
+}
+
 /* decrypt a block of text */
 
 #define i_rn(bo, bi, n, k)	do {				\
@@ -1243,9 +1253,9 @@ static void crypto_aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 	i_rl(bo, bi, 3, k);	\
 } while (0)
 
-static void crypto_aes_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+static void aes_decrypt_one(struct crypto_lskcipher *tfm, const u8 *in, u8 *out)
 {
-	const struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
+	const struct crypto_aes_ctx *ctx = crypto_lskcipher_ctx(tfm);
 	u32 b0[4], b1[4];
 	const int key_len = ctx->key_length;
 	const u32 *kp = ctx->key_dec + 4;
@@ -1282,33 +1292,41 @@ static void crypto_aes_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 	put_unaligned_le32(b0[3], out + 12);
 }
 
-static struct crypto_alg aes_alg = {
-	.cra_name		=	"aes",
-	.cra_driver_name	=	"aes-generic",
-	.cra_priority		=	100,
-	.cra_flags		=	CRYPTO_ALG_TYPE_CIPHER,
-	.cra_blocksize		=	AES_BLOCK_SIZE,
-	.cra_ctxsize		=	sizeof(struct crypto_aes_ctx),
-	.cra_module		=	THIS_MODULE,
-	.cra_u			=	{
-		.cipher = {
-			.cia_min_keysize	=	AES_MIN_KEY_SIZE,
-			.cia_max_keysize	=	AES_MAX_KEY_SIZE,
-			.cia_setkey		=	crypto_aes_set_key,
-			.cia_encrypt		=	crypto_aes_encrypt,
-			.cia_decrypt		=	crypto_aes_decrypt
-		}
-	}
+static int crypto_aes_decrypt(struct crypto_lskcipher *tfm, const u8 *src,
+			      u8 *dst, unsigned nbytes, u8 *iv, bool final)
+{
+	const unsigned int bsize = AES_BLOCK_SIZE;
+
+	for (; nbytes >= bsize; src += bsize, dst += bsize, nbytes -= bsize)
+		aes_decrypt_one(tfm, src, dst);
+
+	return nbytes && final ? -EINVAL : nbytes;
+}
+
+static struct lskcipher_alg aes_alg = {
+	.co = {
+		.base.cra_name		=	"aes",
+		.base.cra_driver_name	=	"aes-generic",
+		.base.cra_priority	=	100,
+		.base.cra_blocksize	=	AES_BLOCK_SIZE,
+		.base.cra_ctxsize	=	sizeof(struct crypto_aes_ctx),
+		.base.cra_module	=	THIS_MODULE,
+		.min_keysize		=	AES_MIN_KEY_SIZE,
+		.max_keysize		=	AES_MAX_KEY_SIZE,
+	},
+	.setkey				=	crypto_aes_set_key,
+	.encrypt			=	crypto_aes_encrypt,
+	.decrypt			=	crypto_aes_decrypt,
 };
 
 static int __init aes_init(void)
 {
-	return crypto_register_alg(&aes_alg);
+	return crypto_register_lskcipher(&aes_alg);
 }
 
 static void __exit aes_fini(void)
 {
-	crypto_unregister_alg(&aes_alg);
+	crypto_unregister_lskcipher(&aes_alg);
 }
 
 subsys_initcall(aes_init);
diff --git a/include/crypto/aes.h b/include/crypto/aes.h
index 2090729701ab..947109e24360 100644
--- a/include/crypto/aes.h
+++ b/include/crypto/aes.h
@@ -6,8 +6,9 @@
 #ifndef _CRYPTO_AES_H
 #define _CRYPTO_AES_H
 
+#include <linux/cache.h>
+#include <linux/errno.h>
 #include <linux/types.h>
-#include <linux/crypto.h>
 
 #define AES_MIN_KEY_SIZE	16
 #define AES_MAX_KEY_SIZE	32
@@ -18,6 +19,8 @@
 #define AES_MAX_KEYLENGTH	(15 * 16)
 #define AES_MAX_KEYLENGTH_U32	(AES_MAX_KEYLENGTH / sizeof(u32))
 
+struct crypto_lskcipher;
+
 /*
  * Please ensure that the first two fields are 16-byte aligned
  * relative to the start of the structure, i.e., don't move them!
@@ -48,8 +51,8 @@ static inline int aes_check_keylen(unsigned int keylen)
 	return 0;
 }
 
-int crypto_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
-		unsigned int key_len);
+int crypto_aes_set_key(struct crypto_lskcipher *tfm, const u8 *in_key,
+		       unsigned int key_len);
 
 /**
  * aes_expandkey - Expands the AES key as described in FIPS-197

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* Re: [PATCH 0/8] crypto: Add lskcipher API type
  2023-09-14  8:56   ` Herbert Xu
@ 2023-09-14  9:18     ` Ard Biesheuvel
  2023-09-14  9:29       ` Herbert Xu
  2023-09-14  9:32       ` Herbert Xu
  0 siblings, 2 replies; 50+ messages in thread
From: Ard Biesheuvel @ 2023-09-14  9:18 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Linux Crypto Mailing List

On Thu, 14 Sept 2023 at 10:56, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> On Thu, Sep 14, 2023 at 10:51:21AM +0200, Ard Biesheuvel wrote:
> >
> > So the intent is for lskcipher to ultimately supplant the current
> > cipher entirely, right? And lskcipher can be used directly by clients
> > of the crypto API, in which case kernel VAs may be used directly, but
> > no async support is available, while skcipher API clients will gain
> > access to lskciphers via a generic wrapper (if needed?)
> >
> > That makes sense but it would help to spell this out.
>
> Yes that's the idea.  It is pretty much exactly the same as how
> shash and ahash are handled and used.
>
> Because of the way I structured the ecb transition code (it will
> take an old cipher and repackage it as an lskcipher), we need to
> convert the templates first and then do the cipher => lskcipher
> conversion.
>
> > I'd be happy to help out here but I'll be off on vacation for ~3 weeks
> > after this week so I won't get around to it before mid-October. What I
> > will do (if it helps) is rebase my recent RISC-V scalar AES cipher
> > patches onto this, and implement ecb(aes) instead (which is the idea
> > IIUC?)
>
> That sounds good.  In fact let me attach the aes-generic proof-
> of-concept conversion (it can only be applied after all templates
> have been converted, so if you test it now everything but ecb/cbc
> will be broken).
>

That helps, thanks.

...
> +static struct lskcipher_alg aes_alg = {
> +       .co = {
> +               .base.cra_name          =       "aes",

So this means that the base name will be aes, not ecb(aes), right?
What about cbc and ctr? It makes sense for a single lskcipher to
implement all three of those at least, so that algorithms like XTS and
GCM can be implemented cheaply using generic templates, without the
need to call into the lskcipher for each block of input.

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 0/8] crypto: Add lskcipher API type
  2023-09-14  9:18     ` Ard Biesheuvel
@ 2023-09-14  9:29       ` Herbert Xu
  2023-09-14  9:31         ` Ard Biesheuvel
  2023-09-14  9:32       ` Herbert Xu
  1 sibling, 1 reply; 50+ messages in thread
From: Herbert Xu @ 2023-09-14  9:29 UTC (permalink / raw)
  To: Ard Biesheuvel; +Cc: Linux Crypto Mailing List

On Thu, Sep 14, 2023 at 11:18:00AM +0200, Ard Biesheuvel wrote:
>
> So this means that the base name will be aes, not ecb(aes), right?
> What about cbc and ctr? It makes sense for a single lskcipher to
> implement all three of those at least, so that algorithms like XTS and
> GCM can be implemented cheaply using generic templates, without the
> need to call into the lskcipher for each block of input.

You can certainly implement all three with arch-specific code
but I didn't think there was a need to do this for the generic
version.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 0/8] crypto: Add lskcipher API type
  2023-09-14  9:29       ` Herbert Xu
@ 2023-09-14  9:31         ` Ard Biesheuvel
  2023-09-14  9:34           ` Herbert Xu
  0 siblings, 1 reply; 50+ messages in thread
From: Ard Biesheuvel @ 2023-09-14  9:31 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Linux Crypto Mailing List

On Thu, 14 Sept 2023 at 11:30, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> On Thu, Sep 14, 2023 at 11:18:00AM +0200, Ard Biesheuvel wrote:
> >
> > So this means that the base name will be aes, not ecb(aes), right?
> > What about cbc and ctr? It makes sense for a single lskcipher to
> > implement all three of those at least, so that algorithms like XTS and
> > GCM can be implemented cheaply using generic templates, without the
> > need to call into the lskcipher for each block of input.
>
> You can certainly implement all three with arch-specific code
> but I didn't think there was a need to do this for the generic
> version.
>

Fair enough. So what should such an arch version implement?

aes
cbc(aes)
ctr(aes)

or

ecb(aes)
cbc(aes)
ctr(aes)

?

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 0/8] crypto: Add lskcipher API type
  2023-09-14  9:18     ` Ard Biesheuvel
  2023-09-14  9:29       ` Herbert Xu
@ 2023-09-14  9:32       ` Herbert Xu
  1 sibling, 0 replies; 50+ messages in thread
From: Herbert Xu @ 2023-09-14  9:32 UTC (permalink / raw)
  To: Ard Biesheuvel; +Cc: Linux Crypto Mailing List

On Thu, Sep 14, 2023 at 11:18:00AM +0200, Ard Biesheuvel wrote:
>
> > +static struct lskcipher_alg aes_alg = {
> > +       .co = {
> > +               .base.cra_name          =       "aes",
> 
> So this means that the base name will be aes, not ecb(aes), right?

Yes this will be called "aes".  If someone asks for "ecb(aes)"
that will instantiate the ecb template which will construct
a new algorithm with the same function pointers as the original.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 0/8] crypto: Add lskcipher API type
  2023-09-14  9:31         ` Ard Biesheuvel
@ 2023-09-14  9:34           ` Herbert Xu
  2023-09-17 16:24             ` Ard Biesheuvel
  0 siblings, 1 reply; 50+ messages in thread
From: Herbert Xu @ 2023-09-14  9:34 UTC (permalink / raw)
  To: Ard Biesheuvel; +Cc: Linux Crypto Mailing List

On Thu, Sep 14, 2023 at 11:31:14AM +0200, Ard Biesheuvel wrote:
>
> ecb(aes)

This is unnecessary as the generic template will construct an
algorithm that's almost exactly the same as the underlying
algorithm.  But you could register it if you want to.  The
template instantiation is a one-off event.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 50+ messages in thread
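
Taken together with the aes-generic sketch earlier in the thread, an arch
driver following this advice would expose something like the following set of
algorithm names; the array is purely illustrative, and each entry would be a
full lskcipher_alg registered with crypto_register_lskcipher() as in the
aes-generic example:

static const char * const arch_lskcipher_names[] = {
	"aes",		/* the bare cipher; "ecb(aes)" comes from the template */
	"cbc(aes)",
	"ctr(aes)",
};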

* Re: [PATCH 0/8] crypto: Add lskcipher API type
  2023-09-14  9:34           ` Herbert Xu
@ 2023-09-17 16:24             ` Ard Biesheuvel
  2023-09-19  4:03               ` Herbert Xu
  0 siblings, 1 reply; 50+ messages in thread
From: Ard Biesheuvel @ 2023-09-17 16:24 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Linux Crypto Mailing List

On Thu, 14 Sept 2023 at 11:34, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> On Thu, Sep 14, 2023 at 11:31:14AM +0200, Ard Biesheuvel wrote:
> >
> > ecb(aes)
>
> This is unnecessary as the generic template will construct an
> algorithm that's almost exactly the same as the underlying
> algorithm.  But you could register it if you want to.  The
> template instantiation is a one-off event.
>

Ported my RISC-V AES implementation here:
https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/log/?h=riscv-scalar-aes

I will get back to this after my holidays, early October.

Thanks,

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 0/8] crypto: Add lskcipher API type
  2023-09-17 16:24             ` Ard Biesheuvel
@ 2023-09-19  4:03               ` Herbert Xu
  0 siblings, 0 replies; 50+ messages in thread
From: Herbert Xu @ 2023-09-19  4:03 UTC (permalink / raw)
  To: Ard Biesheuvel; +Cc: Linux Crypto Mailing List

On Sun, Sep 17, 2023 at 06:24:32PM +0200, Ard Biesheuvel wrote:
>
> Ported my RISC-V AES implementation here:
> https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/log/?h=riscv-scalar-aes

Looks good to me.

> I will get back to this after mu holidays, early October.

Have a great time!

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 4/8] crypto: skcipher - Add lskcipher
  2023-09-14  8:28 ` [PATCH 4/8] crypto: skcipher - Add lskcipher Herbert Xu
@ 2023-09-20  6:25   ` Eric Biggers
  2023-09-21  4:32     ` Herbert Xu
  0 siblings, 1 reply; 50+ messages in thread
From: Eric Biggers @ 2023-09-20  6:25 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Linux Crypto Mailing List, Ard Biesheuvel

On Thu, Sep 14, 2023 at 04:28:24PM +0800, Herbert Xu wrote:
> Add a new API type lskcipher designed for taking straight kernel
> pointers instead of SG lists.  Its relationship to skcipher will
> be analogous to that between shash and ahash.

Is lskcipher only for algorithms that can be computed incrementally?  That would
exclude the wide-block modes, and maybe others too.  And if so, what is the
model for incremental computation?  Based on crypto_lskcipher_crypt_sg(), all
the state is assumed to be carried forward in the "IV".  Does that work for all
algorithms?  Note that shash has an arbitrary state struct (shash_desc) instead.

- Eric

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 4/8] crypto: skcipher - Add lskcipher
  2023-09-20  6:25   ` Eric Biggers
@ 2023-09-21  4:32     ` Herbert Xu
  2023-09-22  3:10       ` Eric Biggers
  0 siblings, 1 reply; 50+ messages in thread
From: Herbert Xu @ 2023-09-21  4:32 UTC (permalink / raw)
  To: Eric Biggers; +Cc: Linux Crypto Mailing List, Ard Biesheuvel

On Tue, Sep 19, 2023 at 11:25:51PM -0700, Eric Biggers wrote:
>
> Is lskcipher only for algorithms that can be computed incrementally?  That would
> exclude the wide-block modes, and maybe others too.  And if so, what is the

You mean things like adiantum? We could add a flag for that so
the skcipher wrapper linearises the input before calling lskcipher.

> model for incremental computation?  Based on crypto_lskcipher_crypt_sg(), all
> the state is assumed to be carried forward in the "IV".  Does that work for all
> algorithms?  Note that shash has an arbitrary state struct (shash_desc) instead.

Is there any practical difference? You could always represent
one as the other, no?

The only case where it would matter is if an algorithm had both
an IV as well as additional state that should not be passed along
as part of the IV, do you have anything in mind?

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 4/8] crypto: skcipher - Add lskcipher
  2023-09-21  4:32     ` Herbert Xu
@ 2023-09-22  3:10       ` Eric Biggers
  2023-11-17  5:19         ` Herbert Xu
  2023-12-05  8:41         ` [PATCH 4/8] crypto: skcipher - Add lskcipher Herbert Xu
  0 siblings, 2 replies; 50+ messages in thread
From: Eric Biggers @ 2023-09-22  3:10 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Linux Crypto Mailing List, Ard Biesheuvel

On Thu, Sep 21, 2023 at 12:32:17PM +0800, Herbert Xu wrote:
> On Tue, Sep 19, 2023 at 11:25:51PM -0700, Eric Biggers wrote:
> >
> > Is lskcipher only for algorithms that can be computed incrementally?  That would
> > exclude the wide-block modes, and maybe others too.  And if so, what is the
> 
> You mean things like adiantum? 

Yes, wide-block modes such as Adiantum and HCTR2 require multiple passes over
the data.  As do SIV modes such as AES-GCM-SIV (though AES-GCM-SIV isn't yet
supported by the kernel, and it would be an "aead", not an "skcipher").

> We could add a flag for that so
> the skcipher wrapper linearises the input before calling lskcipher.

That makes sense, but I suppose this would mean adding code that allocates huge
scratch buffers, like what the infamous crypto/scompress.c does?  I hope that we
can ensure that these buffers are only allocated when they are actually needed.

> 
> > model for incremental computation?  Based on crypto_lskcipher_crypt_sg(), all
> > the state is assumed to be carried forward in the "IV".  Does that work for all
> > algorithms?  Note that shash has an arbitrary state struct (shash_desc) instead.
> 
> Is there any practical difference? You could always represent
> one as the other, no?
> 
> The only case where it would matter is if an algorithm had both
> an IV as well as additional state that should not be passed along
> as part of the IV, do you have anything in mind?

Well, IV is *initialization vector*: a value that the algorithm uses as input.
It shouldn't be overloaded to represent some internal intermediate state.  We
already made this mistake with the iv vs. iv_out thing, which only ever got
implemented by CBC and CTR, and which people repeatedly get confused by.  So we know
it technically works for those two algorithms, but not anything else.

With ChaCha, for example, it makes more sense to use the 16-word state matrix as the
intermediate state instead of the 4-word "IV".  (See chacha_crypt().)
Especially for XChaCha, so that the HChaCha step doesn't need to be repeated.

- Eric

^ permalink raw reply	[flat|nested] 50+ messages in thread
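
A small illustration of the mismatch Eric is pointing at; the sizes come from
include/crypto/chacha.h, but the two structs are purely illustrative and not
part of any patch in this thread:

#include <crypto/chacha.h>

/* For cbc(aes) the 16-byte IV really is the whole chaining state, so
 * carrying it forward between lskcipher calls is sufficient. */
struct cbc_carry_state {
	u8 iv[16];
};

/* For (x)chacha the natural intermediate state is the full matrix that
 * chacha_crypt() consumes, not the 4-word IV: rebuilding it on every call
 * would also repeat the HChaCha step for XChaCha. */
struct chacha_carry_state {
	u32 matrix[16];		/* i.e. CHACHA_STATE_WORDS, 64 bytes */
};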

* Re: [PATCH 8/8] crypto: cbc - Convert from skcipher to lskcipher
  2023-09-14  8:28 ` [PATCH 8/8] crypto: cbc " Herbert Xu
@ 2023-10-02 20:25   ` Nathan Chancellor
  2023-10-03  3:31     ` [PATCH] crypto: skcipher - Add dependency on ecb Herbert Xu
  0 siblings, 1 reply; 50+ messages in thread
From: Nathan Chancellor @ 2023-10-02 20:25 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Linux Crypto Mailing List, Ard Biesheuvel

Hi Herbert,

On Thu, Sep 14, 2023 at 04:28:28PM +0800, Herbert Xu wrote:
> Replace the existing skcipher CBC template with an lskcipher version.
> 
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

I am noticing a failure to get to user space when booting OpenSUSE's
armv7hl configuration [1] in QEMU after this change as commit
705b52fef3c7 ("crypto: cbc - Convert from skcipher to lskcipher"). I can
reproduce it with GCC 13.2.0 from kernel.org [2] and QEMU 8.1.1, in case
either of those versions matter.  The rootfs is available at [3] in case
it is relevant.

$ curl -LSso .config https://github.com/openSUSE/kernel-source/raw/master/config/armv7hl/default

$ make -skj"$(nproc)" ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- olddefconfig bzImage

$ qemu-system-arm \
    -display none \
    -nodefaults \
    -no-reboot \
    -machine virt \
    -append 'console=ttyAMA0 earlycon' \
    -kernel arch/arm/boot/zImage \
    -initrd arm-rootfs.cpio \
    -m 512m \
    -serial mon:stdio
...
[    0.000000][    T0] Linux version 6.6.0-rc1-default+ (nathan@dev-arch.thelio-3990X) (arm-linux-gnueabi-gcc (GCC) 13.2.0, GNU ld (GNU Binutils) 2.41) #1 SMP Mon Oct  2 13:12:40 MST 2023
...
[    0.743773][    T1] ------------[ cut here ]------------
[    0.743980][    T1] WARNING: CPU: 0 PID: 1 at crypto/algapi.c:506 crypto_unregister_alg+0x124/0x12c
[    0.744693][    T1] Modules linked in:
[    0.745078][    T1] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 6.6.0-rc1-default+ #1 906712a81ca49f79575cf3062af84575f391802d
[    0.745453][    T1] Hardware name: Generic DT based system
[    0.745700][    T1] Backtrace:
[    0.745901][    T1]  dump_backtrace from show_stack+0x20/0x24
[    0.746181][    T1]  r7:c077851c r6:00000009 r5:00000053 r4:c1595a60
[    0.746373][    T1]  show_stack from dump_stack_lvl+0x48/0x54
[    0.746530][    T1]  dump_stack_lvl from dump_stack+0x18/0x1c
[    0.746703][    T1]  r5:000001fa r4:c1589f2c
[    0.746811][    T1]  dump_stack from __warn+0x88/0x120
[    0.746954][    T1]  __warn from warn_slowpath_fmt+0xb8/0x188
[    0.747115][    T1]  r8:0000012a r7:c077851c r6:c1589f2c r5:00000000 r4:c1e825f0
[    0.747288][    T1]  warn_slowpath_fmt from crypto_unregister_alg+0x124/0x12c
[    0.747475][    T1]  r7:c214e414 r6:00000001 r5:c214e428 r4:c2d590c0
[    0.747628][    T1]  crypto_unregister_alg from crypto_unregister_skcipher+0x1c/0x20
[    0.747824][    T1]  r4:c2d59000
[    0.747911][    T1]  crypto_unregister_skcipher from simd_skcipher_free+0x20/0x2c
[    0.748100][    T1]  simd_skcipher_free from aes_exit+0x30/0x4c
[    0.748264][    T1]  r5:c214e428 r4:c214e418
[    0.748375][    T1]  aes_exit from aes_init+0x88/0xa8
[    0.748521][    T1]  r5:fffffffe r4:c1f12740
[    0.748637][    T1]  aes_init from do_one_initcall+0x44/0x25c
[    0.748803][    T1]  r9:c1dd3d5c r8:c1689880 r7:00000000 r6:c2570000 r5:00000019 r4:c1d0c618
[    0.749008][    T1]  do_one_initcall from kernel_init_freeable+0x23c/0x298
[    0.749187][    T1]  r8:c1689880 r7:00000007 r6:c1dd3d38 r5:00000019 r4:c25f0640
[    0.749364][    T1]  kernel_init_freeable from kernel_init+0x28/0x14c
[    0.749540][    T1]  r10:00000000 r9:00000000 r8:00000000 r7:00000000 r6:00000000 r5:c0fae978
[    0.749744][    T1]  r4:c1f0b040
[    0.749832][    T1]  kernel_init from ret_from_fork+0x14/0x30
[    0.750033][    T1] Exception stack(0xe080dfb0 to 0xe080dff8)
[    0.750315][    T1] dfa0:                                     00000000 00000000 00000000 00000000
[    0.750546][    T1] dfc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[    0.750760][    T1] dfe0: 00000000 00000000 00000000 00000000 00000013 00000000
[    0.750967][    T1]  r5:c0fae978 r4:00000000
[    0.751214][    T1] ---[ end trace 0000000000000000 ]---
[    0.751519][    T1] ------------[ cut here ]------------
[    0.751650][    T1] WARNING: CPU: 0 PID: 1 at crypto/algapi.c:506 crypto_unregister_alg+0x124/0x12c
[    0.751873][    T1] Modules linked in:
[    0.752037][    T1] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G        W          6.6.0-rc1-default+ #1 906712a81ca49f79575cf3062af84575f391802d
[    0.752331][    T1] Hardware name: Generic DT based system
[    0.752464][    T1] Backtrace:
[    0.752551][    T1]  dump_backtrace from show_stack+0x20/0x24
[    0.752702][    T1]  r7:c077851c r6:00000009 r5:00000053 r4:c1595a60
[    0.752853][    T1]  show_stack from dump_stack_lvl+0x48/0x54
[    0.753001][    T1]  dump_stack_lvl from dump_stack+0x18/0x1c
[    0.753151][    T1]  r5:000001fa r4:c1589f2c
[    0.753258][    T1]  dump_stack from __warn+0x88/0x120
[    0.753417][    T1]  __warn from warn_slowpath_fmt+0xb8/0x188
[    0.753572][    T1]  r8:0000012a r7:c077851c r6:c1589f2c r5:00000000 r4:c1e825f0
[    0.753750][    T1]  warn_slowpath_fmt from crypto_unregister_alg+0x124/0x12c
[    0.753938][    T1]  r7:c214e414 r6:00000001 r5:00000002 r4:c1f12bc0
[    0.754096][    T1]  crypto_unregister_alg from crypto_unregister_skciphers+0x30/0x40
[    0.754291][    T1]  r4:c1f12bc0
[    0.754378][    T1]  crypto_unregister_skciphers from aes_exit+0x48/0x4c
[    0.754556][    T1]  r5:c214e428 r4:c214e428
[    0.754666][    T1]  aes_exit from aes_init+0x88/0xa8
[    0.754804][    T1]  r5:fffffffe r4:c1f12740
[    0.754913][    T1]  aes_init from do_one_initcall+0x44/0x25c
[    0.755070][    T1]  r9:c1dd3d5c r8:c1689880 r7:00000000 r6:c2570000 r5:00000019 r4:c1d0c618
[    0.755274][    T1]  do_one_initcall from kernel_init_freeable+0x23c/0x298
[    0.755462][    T1]  r8:c1689880 r7:00000007 r6:c1dd3d38 r5:00000019 r4:c25f0640
[    0.755636][    T1]  kernel_init_freeable from kernel_init+0x28/0x14c
[    0.755807][    T1]  r10:00000000 r9:00000000 r8:00000000 r7:00000000 r6:00000000 r5:c0fae978
[    0.756007][    T1]  r4:c1f0b040
[    0.756095][    T1]  kernel_init from ret_from_fork+0x14/0x30
[    0.756243][    T1] Exception stack(0xe080dfb0 to 0xe080dff8)
[    0.756390][    T1] dfa0:                                     00000000 00000000 00000000 00000000
[    0.756610][    T1] dfc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[    0.756828][    T1] dfe0: 00000000 00000000 00000000 00000000 00000013 00000000
[    0.757001][    T1]  r5:c0fae978 r4:00000000
[    0.757178][    T1] ---[ end trace 0000000000000000 ]---
...
[    0.982740][    T1] trusted_key: encrypted_key: failed to alloc_cipher (-2)
...
[    0.993923][   T80] alg: No test for  ()
[    0.994049][   T80] alg: Unexpected test result for : 0

If there is any additional information I can provide or patches I can
test, I am more than happy to do so.

[1]: https://github.com/openSUSE/kernel-source/raw/master/config/armv7hl/default
[2]: https://mirrors.edge.kernel.org/pub/tools/crosstool/
[3]: https://github.com/ClangBuiltLinux/boot-utils/releases

Cheers,
Nathan

# bad: [df964ce9ef9fea10cf131bf6bad8658fde7956f6] Add linux-next specific files for 20230929
# good: [9ed22ae6be817d7a3f5c15ca22cbc9d3963b481d] Merge tag 'spi-fix-v6.6-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi
git bisect start 'df964ce9ef9fea10cf131bf6bad8658fde7956f6' '9ed22ae6be817d7a3f5c15ca22cbc9d3963b481d'
# good: [2afef4020a647c2034c72a5ab765ad06338024c1] Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux.git
git bisect good 2afef4020a647c2034c72a5ab765ad06338024c1
# bad: [621abed2c5eb145b5c8f25aa08f4eaac3a4880df] Merge branch 'drm-next' of https://gitlab.freedesktop.org/agd5f/linux
git bisect bad 621abed2c5eb145b5c8f25aa08f4eaac3a4880df
# good: [fcdecb00fb04c2db761851b194547d291ba532c5] Merge branch 'main' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
git bisect good fcdecb00fb04c2db761851b194547d291ba532c5
# bad: [62edfd0bd4ac7b7c6b5eff0ea290261ff5ab6d1c] Merge branch 'drm-next' of git://git.freedesktop.org/git/drm/drm.git
git bisect bad 62edfd0bd4ac7b7c6b5eff0ea290261ff5ab6d1c
# good: [9896f0608f9fe0b49badd2fd6ae76ec761c70624] Merge ath-next from git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/ath.git
git bisect good 9896f0608f9fe0b49badd2fd6ae76ec761c70624
# good: [d856c84b8cbc2f5bc6e906deebf3fa912bb6c1c3] Merge branch 'spi-nor/next' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux.git
git bisect good d856c84b8cbc2f5bc6e906deebf3fa912bb6c1c3
# good: [39e0b96d61b6f5ad880d9953dc2b4c5b3ee145b3] drm/bridge/analogix/anx78xx: Drop ID table
git bisect good 39e0b96d61b6f5ad880d9953dc2b4c5b3ee145b3
# good: [3102bbcdcd3c945ef0bcea498d3a0c6384536d6c] crypto: qat - refactor deprecated strncpy
git bisect good 3102bbcdcd3c945ef0bcea498d3a0c6384536d6c
# bad: [bb4277c7e617e8b271eb7ad75d5bdb6b8a249613] Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git
git bisect bad bb4277c7e617e8b271eb7ad75d5bdb6b8a249613
# bad: [aa3f80500382ca864b7cfcff4e5ca2fa6a0e977d] crypto: hisilicon/zip - support deflate algorithm
git bisect bad aa3f80500382ca864b7cfcff4e5ca2fa6a0e977d
# good: [b64d143b752932ef483d0ed8d00958f1832dd6bc] crypto: hash - Hide CRYPTO_ALG_TYPE_AHASH_MASK
git bisect good b64d143b752932ef483d0ed8d00958f1832dd6bc
# good: [3dfe8786b11a4a3f9ced2eb89c6c5d73eba84700] crypto: testmgr - Add support for lskcipher algorithms
git bisect good 3dfe8786b11a4a3f9ced2eb89c6c5d73eba84700
# bad: [705b52fef3c73655701d9c8868e744f1fa03e942] crypto: cbc - Convert from skcipher to lskcipher
git bisect bad 705b52fef3c73655701d9c8868e744f1fa03e942
# good: [32a8dc4afcfb098ef4e8b465c90db17d22d90107] crypto: ecb - Convert from skcipher to lskcipher
git bisect good 32a8dc4afcfb098ef4e8b465c90db17d22d90107
# first bad commit: [705b52fef3c73655701d9c8868e744f1fa03e942] crypto: cbc - Convert from skcipher to lskcipher

^ permalink raw reply	[flat|nested] 50+ messages in thread

* [PATCH] crypto: skcipher - Add dependency on ecb
  2023-10-02 20:25   ` Nathan Chancellor
@ 2023-10-03  3:31     ` Herbert Xu
  2023-10-03 15:25       ` Nathan Chancellor
  0 siblings, 1 reply; 50+ messages in thread
From: Herbert Xu @ 2023-10-03  3:31 UTC (permalink / raw)
  To: Nathan Chancellor; +Cc: Linux Crypto Mailing List, Ard Biesheuvel

On Mon, Oct 02, 2023 at 01:25:22PM -0700, Nathan Chancellor wrote:
>
> I am noticing a failure to get to user space when booting OpenSUSE's
> armv7hl configuration [1] in QEMU after this change as commit
> 705b52fef3c7 ("crypto: cbc - Convert from skcipher to lskcipher"). I can
> reproduce it with GCC 13.2.0 from kernel.org [2] and QEMU 8.1.1, in case
> either of those versions matter.  The rootfs is available at [3] in case
> it is relevant.

Thanks for the report.  This is caused by a missing dependency
on ECB.  Please try this patch:

---8<---
As lskcipher requires the ecb wrapper for the transition, add an
explicit dependency on it so that it is always present.  This can
be removed once all simple ciphers have been converted to lskcipher.

Reported-by: Nathan Chancellor <nathan@kernel.org>
Fixes: 705b52fef3c7 ("crypto: cbc - Convert from skcipher to lskcipher")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

diff --git a/crypto/Kconfig b/crypto/Kconfig
index ed931ddea644..bbf51d55724e 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -85,6 +85,7 @@ config CRYPTO_SKCIPHER
 	tristate
 	select CRYPTO_SKCIPHER2
 	select CRYPTO_ALGAPI
+	select CRYPTO_ECB
 
 config CRYPTO_SKCIPHER2
 	tristate
@@ -689,7 +690,7 @@ config CRYPTO_CTS
 
 config CRYPTO_ECB
 	tristate "ECB (Electronic Codebook)"
-	select CRYPTO_SKCIPHER
+	select CRYPTO_SKCIPHER2
 	select CRYPTO_MANAGER
 	help
 	  ECB (Electronic Codebook) mode (NIST SP800-38A)
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* Re: [PATCH] crypto: skcipher - Add dependency on ecb
  2023-10-03  3:31     ` [PATCH] crypto: skcipher - Add dependency on ecb Herbert Xu
@ 2023-10-03 15:25       ` Nathan Chancellor
  0 siblings, 0 replies; 50+ messages in thread
From: Nathan Chancellor @ 2023-10-03 15:25 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Linux Crypto Mailing List, Ard Biesheuvel

On Tue, Oct 03, 2023 at 11:31:55AM +0800, Herbert Xu wrote:
> On Mon, Oct 02, 2023 at 01:25:22PM -0700, Nathan Chancellor wrote:
> >
> > I am noticing a failure to get to user space when booting OpenSUSE's
> > armv7hl configuration [1] in QEMU after this change as commit
> > 705b52fef3c7 ("crypto: cbc - Convert from skcipher to lskcipher"). I can
> > reproduce it with GCC 13.2.0 from kernel.org [2] and QEMU 8.1.1, in case
> > either of those versions matter.  The rootfs is available at [3] in case
> > it is relevant.
> 
> Thanks for the report.  This is caused by a missing dependency
> on ECB.  Please try this patch:
> 
> ---8<---
> As lskcipher requires the ecb wrapper for the transition, add an
> explicit dependency on it so that it is always present.  This can
> be removed once all simple ciphers have been converted to lskcipher.
> 
> Reported-by: Nathan Chancellor <nathan@kernel.org>
> Fixes: 705b52fef3c7 ("crypto: cbc - Convert from skcipher to lskcipher")
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

Tested-by: Nathan Chancellor <nathan@kernel.org>

Thanks for the quick fix!

> diff --git a/crypto/Kconfig b/crypto/Kconfig
> index ed931ddea644..bbf51d55724e 100644
> --- a/crypto/Kconfig
> +++ b/crypto/Kconfig
> @@ -85,6 +85,7 @@ config CRYPTO_SKCIPHER
>  	tristate
>  	select CRYPTO_SKCIPHER2
>  	select CRYPTO_ALGAPI
> +	select CRYPTO_ECB
>  
>  config CRYPTO_SKCIPHER2
>  	tristate
> @@ -689,7 +690,7 @@ config CRYPTO_CTS
>  
>  config CRYPTO_ECB
>  	tristate "ECB (Electronic Codebook)"
> -	select CRYPTO_SKCIPHER
> +	select CRYPTO_SKCIPHER2
>  	select CRYPTO_MANAGER
>  	help
>  	  ECB (Electronic Codebook) mode (NIST SP800-38A)
> -- 
> Email: Herbert Xu <herbert@gondor.apana.org.au>
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 4/8] crypto: skcipher - Add lskcipher
  2023-09-22  3:10       ` Eric Biggers
@ 2023-11-17  5:19         ` Herbert Xu
  2023-11-17  5:42           ` Eric Biggers
  2023-12-05  8:41         ` [PATCH 4/8] crypto: skcipher - Add lskcipher Herbert Xu
  1 sibling, 1 reply; 50+ messages in thread
From: Herbert Xu @ 2023-11-17  5:19 UTC (permalink / raw)
  To: Eric Biggers; +Cc: Linux Crypto Mailing List, Ard Biesheuvel

On Thu, Sep 21, 2023 at 08:10:30PM -0700, Eric Biggers wrote:
>
> Well, IV is *initialization vector*: a value that the algorithm uses as input.
> It shouldn't be overloaded to represent some internal intermediate state.  We
> already made this mistake with the iv vs. iv_out thing, which only ever got
> implemented by CBC and CTR, and people repeatedly get confused by.  So we know
> it technically works for those two algorithms, but not anything else.
> 
> With ChaCha, for example, it makes more sense to use 16-word state matrix as the
> intermediate state instead of the 4-word "IV".  (See chacha_crypt().)
> Especially for XChaCha, so that the HChaCha step doesn't need to be repeated.

Fair enough, but what's the point of keeping the internal state
across two lskcipher calls? The whole point of lskcipher is that the
input is linear and can be processed in one go.

With shash we must keep the internal state because the API operates
on the update/final model so we need multiple suboperations to finish
each hashing operation.

With ciphers we haven't traditionally done it that way.  Are you
thinking of extending lskcipher so that it is more like hashing, with
an explicit finalisation step?
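
To make the contrast concrete, here is a rough sketch (not code from
this series; error handling trimmed) of the two models side by side:
shash needs init/update/final because its state lives in the descriptor
across suboperations, while an lskcipher call consumes the whole linear
buffer at once:

#include <crypto/hash.h>
#include <crypto/skcipher.h>

static int sketch_hash_vs_lskcipher(struct crypto_shash *hash,
				    struct crypto_lskcipher *cipher,
				    const u8 *src, u8 *dst, unsigned int len,
				    u8 *iv, u8 *digest)
{
	SHASH_DESC_ON_STACK(desc, hash);
	int err;

	desc->tfm = hash;

	/* shash: multiple suboperations, state carried in desc. */
	err = crypto_shash_init(desc);
	if (!err)
		err = crypto_shash_update(desc, src, len / 2);
	if (!err)
		err = crypto_shash_update(desc, src + len / 2, len - len / 2);
	if (!err)
		err = crypto_shash_final(desc, digest);
	if (err)
		return err;

	/* lskcipher: one call over the whole linear buffer. */
	return crypto_lskcipher_encrypt(cipher, src, dst, len, iv);
}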

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 4/8] crypto: skcipher - Add lskcipher
  2023-11-17  5:19         ` Herbert Xu
@ 2023-11-17  5:42           ` Eric Biggers
  2023-11-17  9:07             ` Herbert Xu
  0 siblings, 1 reply; 50+ messages in thread
From: Eric Biggers @ 2023-11-17  5:42 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Linux Crypto Mailing List, Ard Biesheuvel

On Fri, Nov 17, 2023 at 01:19:46PM +0800, Herbert Xu wrote:
> On Thu, Sep 21, 2023 at 08:10:30PM -0700, Eric Biggers wrote:
> >
> > Well, IV is *initialization vector*: a value that the algorithm uses as input.
> > It shouldn't be overloaded to represent some internal intermediate state.  We
> > already made this mistake with the iv vs. iv_out thing, which only ever got
> > implemented by CBC and CTR, and people repeatedly get confused by.  So we know
> > it technically works for those two algorithms, but not anything else.
> > 
> > With ChaCha, for example, it makes more sense to use 16-word state matrix as the
> > intermediate state instead of the 4-word "IV".  (See chacha_crypt().)
> > Especially for XChaCha, so that the HChaCha step doesn't need to be repeated.
> 
> Fair enough, but what's the point of keeping the internal state
> across two lskcipher calls? The whole point of lskcipher is that the
> input is linear and can be processed in one go.
> 
> With shash we must keep the internal state because the API operates
> on the update/final model so we need multiple suboperations to finish
> each hashing operation.
> 
> With ciphers we haven't traditionally done it that way.  Are you
> thinking of extending lskcipher so that it is more like hashing, with
> an explicit finalisation step?

crypto_lskcipher_crypt_sg() assumes that a single en/decryption operation can be
broken up into multiple ones.  I think you're arguing that since there's no
"init" or "final", these sub-en/decryptions aren't analogous to "update" but
rather are full en/decryptions that happen to combine to create the larger one.
So sure, looking at it that way, the input/output IV does make sense, though it
does mean that we end up with the confusing "output IV" terminology as well as
having to repeat any setup code, e.g. HChaCha, that some algorithms have.
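
For reference, this is roughly what chaining via the output IV buys you
for a mode like CBC.  Illustrative sketch only, assuming a 16-byte IV
such as cbc(aes) and a length that is a multiple of two blocks; the
split encryption should match the one-shot result because the last
ciphertext block is written back into the IV buffer:

#include <crypto/skcipher.h>
#include <linux/string.h>

static int sketch_cbc_split_vs_one_shot(struct crypto_lskcipher *cbc,
					const u8 *src, u8 *dst1, u8 *dst2,
					unsigned int len, const u8 *iv)
{
	unsigned int half = len / 2;	/* assumed block-aligned */
	u8 iv1[16], iv2[16];
	int err;

	memcpy(iv1, iv, sizeof(iv1));
	memcpy(iv2, iv, sizeof(iv2));

	/* The whole buffer in one go. */
	err = crypto_lskcipher_encrypt(cbc, src, dst1, len, iv1);
	if (err)
		return err;

	/* The same buffer in two pieces, IV carried between the calls. */
	err = crypto_lskcipher_encrypt(cbc, src, dst2, half, iv2);
	if (!err)
		err = crypto_lskcipher_encrypt(cbc, src + half, dst2 + half,
					       len - half, iv2);

	/* On success memcmp(dst1, dst2, len) is expected to be 0. */
	return err;
}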

- Eric

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 4/8] crypto: skcipher - Add lskcipher
  2023-11-17  5:42           ` Eric Biggers
@ 2023-11-17  9:07             ` Herbert Xu
  2023-11-24 10:27               ` Herbert Xu
  0 siblings, 1 reply; 50+ messages in thread
From: Herbert Xu @ 2023-11-17  9:07 UTC (permalink / raw)
  To: Eric Biggers; +Cc: Linux Crypto Mailing List, Ard Biesheuvel

On Thu, Nov 16, 2023 at 09:42:31PM -0800, Eric Biggers wrote:
.
> crypto_lskcipher_crypt_sg() assumes that a single en/decryption operation can be
> broken up into multiple ones.  I think you're arguing that since there's no

Good point.  It means that we'd have to linearise the buffer for
such algorithms, or just write an SG implementation as we do now
in addition to the lskcipher.

Let me think about this a bit more.

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 4/8] crypto: skcipher - Add lskcipher
  2023-11-17  9:07             ` Herbert Xu
@ 2023-11-24 10:27               ` Herbert Xu
  2023-11-27 22:28                 ` Eric Biggers
  0 siblings, 1 reply; 50+ messages in thread
From: Herbert Xu @ 2023-11-24 10:27 UTC (permalink / raw)
  To: Eric Biggers; +Cc: Linux Crypto Mailing List, Ard Biesheuvel

On Fri, Nov 17, 2023 at 05:07:22PM +0800, Herbert Xu wrote:
> On Thu, Nov 16, 2023 at 09:42:31PM -0800, Eric Biggers wrote:
> .
> > crypto_lskcipher_crypt_sg() assumes that a single en/decryption operation can be
> > broken up into multiple ones.  I think you're arguing that since there's no

OK I see where some of the confusion is coming from.  The current
skcipher interface assumes that the underlying algorithm can be
chained.

So the implementation of chacha is actually wrong as it stands
and it will produce incorrect results when used through if_alg.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 4/8] crypto: skcipher - Add lskcipher
  2023-11-24 10:27               ` Herbert Xu
@ 2023-11-27 22:28                 ` Eric Biggers
  2023-11-29  6:24                   ` [PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now) Herbert Xu
  0 siblings, 1 reply; 50+ messages in thread
From: Eric Biggers @ 2023-11-27 22:28 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Linux Crypto Mailing List, Ard Biesheuvel

On Fri, Nov 24, 2023 at 06:27:25PM +0800, Herbert Xu wrote:
> On Fri, Nov 17, 2023 at 05:07:22PM +0800, Herbert Xu wrote:
> > On Thu, Nov 16, 2023 at 09:42:31PM -0800, Eric Biggers wrote:
> > .
> > > crypto_lskcipher_crypt_sg() assumes that a single en/decryption operation can be
> > > broken up into multiple ones.  I think you're arguing that since there's no
> 
> OK I see where some of the confusion is coming from.  The current
> skcipher interface assumes that the underlying algorithm can be
> chained.
> 
> So the implementation of chacha is actually wrong as it stands
> and it will produce incorrect results when used through if_alg.
> 

As far as I can tell, currently "chaining" is only implemented by CBC and CTR.
So this really seems like an issue in AF_ALG, not the skcipher API per se.
AF_ALG should not support splitting up encryption/decryption operations on
algorithms that don't support it.

- Eric

^ permalink raw reply	[flat|nested] 50+ messages in thread

* [PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now)
  2023-11-27 22:28                 ` Eric Biggers
@ 2023-11-29  6:24                   ` Herbert Xu
  2023-11-29  6:29                     ` [PATCH 1/4] crypto: skcipher - Add internal state support Herbert Xu
                                       ` (5 more replies)
  0 siblings, 6 replies; 50+ messages in thread
From: Herbert Xu @ 2023-11-29  6:24 UTC (permalink / raw)
  To: Eric Biggers; +Cc: Linux Crypto Mailing List, Ard Biesheuvel

On Mon, Nov 27, 2023 at 02:28:03PM -0800, Eric Biggers wrote:
>
> As far as I can tell, currently "chaining" is only implemented by CBC and CTR.
> So this really seems like an issue in AF_ALG, not the skcipher API per se.
> AF_ALG should not support splitting up encryption/decryption operations on
> algorithms that don't support it.

Yes I can see your view.  But it really is only a very small number
of algorithms (basically arc4 and chacha) that are currently broken
in this way.  CTS is similarly broken but for a different reason.

Yes we could change the way af_alg operates by removing the ability
to process unlimited amounts of data and instead switching to the
AEAD model where all data is presented together.

However, I think this would be an unnecessary limitation since there
is a way to solve the chaining issue for stream ciphers and others
such as CTS.

So here is my attempt at this, hopefully without causing too much
churn or breakage:

Herbert Xu (4):
  crypto: skcipher - Add internal state support
  crypto: skcipher - Make use of internal state
  crypto: arc4 - Add internal state
  crypto: algif_skcipher - Fix stream cipher chaining

 crypto/algif_skcipher.c   |  71 +++++++++++++++++++++++++--
 crypto/arc4.c             |   8 ++-
 crypto/cbc.c              |   6 ++-
 crypto/ecb.c              |  10 ++--
 crypto/lskcipher.c        |  42 ++++++++++++----
 crypto/skcipher.c         |  64 +++++++++++++++++++++++-
 include/crypto/if_alg.h   |   2 +
 include/crypto/skcipher.h | 100 +++++++++++++++++++++++++++++++++++++-
 8 files changed, 280 insertions(+), 23 deletions(-)
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 50+ messages in thread

* [PATCH 1/4] crypto: skcipher - Add internal state support
  2023-11-29  6:24                   ` [PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now) Herbert Xu
@ 2023-11-29  6:29                     ` Herbert Xu
  2023-11-29  6:29                     ` [PATCH 2/4] crypto: skcipher - Make use of internal state Herbert Xu
                                       ` (4 subsequent siblings)
  5 siblings, 0 replies; 50+ messages in thread
From: Herbert Xu @ 2023-11-29  6:29 UTC (permalink / raw)
  To: Eric Biggers, Linux Crypto Mailing List, Ard Biesheuvel

Unlike chaining modes such as CBC, stream ciphers other than CTR
usually hold an internal state that must be preserved if the
operation is to be done piecemeal.  This has not been represented
in the API, resulting in the inability to split up stream cipher
operations.

This patch adds the basic representation of an internal state to
skcipher and lskcipher.  In the interest of backwards compatibility,
the default has been set such that existing users are assumed to
be operating in one go as opposed to piecemeal.

With the new API, each lskcipher/skcipher algorithm has a new
attribute called statesize.  For skcipher, this is the size of
the buffer that can be exported or imported similar to ahash.
For lskcipher, instead of providing a buffer of ivsize, the user
now has to provide a buffer of ivsize + statesize.

Each skcipher operation is assumed to be final as they are now,
but this may be overridden with a request flag.  When the override
occurs, the user may then export the partial state and reimport
it later.

For lskcipher operations this is reversed.  All operations are
not final and the state will be exported unless the FINAL bit is
set.  However, the CONT bit still has to be set for the state
to be used.
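
For illustration, a rough sketch of how a caller might drive the
skcipher side of this (not part of this patch; it also uses the
export/import helpers added in the next patch, reuses one request for
both steps and abbreviates error handling):

static int sketch_encrypt_in_two_steps(struct crypto_skcipher *tfm,
				       struct skcipher_request *req,
				       struct scatterlist *sg1, unsigned int len1,
				       struct scatterlist *sg2, unsigned int len2,
				       u8 *iv, void *state)
{
	DECLARE_CRYPTO_WAIT(wait);
	int err;

	/* First chunk: mark the operation as not final. */
	skcipher_request_set_callback(req, CRYPTO_SKCIPHER_REQ_NOTFINAL |
				      CRYPTO_TFM_REQ_MAY_SLEEP,
				      crypto_req_done, &wait);
	skcipher_request_set_crypt(req, sg1, sg1, len1, iv);
	err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
	if (err)
		return err;

	/* state points to crypto_skcipher_statesize(tfm) bytes. */
	err = crypto_skcipher_export(req, state);
	if (err)
		return err;

	/* Later: reload the state, then finish with CONT set and NOTFINAL clear. */
	err = crypto_skcipher_import(req, state);
	if (err)
		return err;

	skcipher_request_set_callback(req, CRYPTO_SKCIPHER_REQ_CONT |
				      CRYPTO_TFM_REQ_MAY_SLEEP,
				      crypto_req_done, &wait);
	skcipher_request_set_crypt(req, sg2, sg2, len2, iv);
	return crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
}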

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/arc4.c             |    2 -
 crypto/cbc.c              |    6 ++--
 crypto/ecb.c              |   10 ++++--
 crypto/lskcipher.c        |   14 +++++----
 include/crypto/skcipher.h |   67 ++++++++++++++++++++++++++++++++++++++++++++--
 5 files changed, 84 insertions(+), 15 deletions(-)

diff --git a/crypto/arc4.c b/crypto/arc4.c
index eb3590dc9282..2150f94e7d03 100644
--- a/crypto/arc4.c
+++ b/crypto/arc4.c
@@ -23,7 +23,7 @@ static int crypto_arc4_setkey(struct crypto_lskcipher *tfm, const u8 *in_key,
 }
 
 static int crypto_arc4_crypt(struct crypto_lskcipher *tfm, const u8 *src,
-			     u8 *dst, unsigned nbytes, u8 *iv, bool final)
+			     u8 *dst, unsigned nbytes, u8 *iv, u32 flags)
 {
 	struct arc4_ctx *ctx = crypto_lskcipher_ctx(tfm);
 
diff --git a/crypto/cbc.c b/crypto/cbc.c
index 28345b8d921c..eedddef9ce40 100644
--- a/crypto/cbc.c
+++ b/crypto/cbc.c
@@ -51,9 +51,10 @@ static int crypto_cbc_encrypt_inplace(struct crypto_lskcipher *tfm,
 }
 
 static int crypto_cbc_encrypt(struct crypto_lskcipher *tfm, const u8 *src,
-			      u8 *dst, unsigned len, u8 *iv, bool final)
+			      u8 *dst, unsigned len, u8 *iv, u32 flags)
 {
 	struct crypto_lskcipher **ctx = crypto_lskcipher_ctx(tfm);
+	bool final = flags & CRYPTO_LSKCIPHER_FLAG_FINAL;
 	struct crypto_lskcipher *cipher = *ctx;
 	int rem;
 
@@ -119,9 +120,10 @@ static int crypto_cbc_decrypt_inplace(struct crypto_lskcipher *tfm,
 }
 
 static int crypto_cbc_decrypt(struct crypto_lskcipher *tfm, const u8 *src,
-			      u8 *dst, unsigned len, u8 *iv, bool final)
+			      u8 *dst, unsigned len, u8 *iv, u32 flags)
 {
 	struct crypto_lskcipher **ctx = crypto_lskcipher_ctx(tfm);
+	bool final = flags & CRYPTO_LSKCIPHER_FLAG_FINAL;
 	struct crypto_lskcipher *cipher = *ctx;
 	int rem;
 
diff --git a/crypto/ecb.c b/crypto/ecb.c
index cc7625d1a475..e3a67789050e 100644
--- a/crypto/ecb.c
+++ b/crypto/ecb.c
@@ -32,22 +32,24 @@ static int crypto_ecb_crypt(struct crypto_cipher *cipher, const u8 *src,
 }
 
 static int crypto_ecb_encrypt2(struct crypto_lskcipher *tfm, const u8 *src,
-			       u8 *dst, unsigned len, u8 *iv, bool final)
+			       u8 *dst, unsigned len, u8 *iv, u32 flags)
 {
 	struct crypto_cipher **ctx = crypto_lskcipher_ctx(tfm);
 	struct crypto_cipher *cipher = *ctx;
 
-	return crypto_ecb_crypt(cipher, src, dst, len, final,
+	return crypto_ecb_crypt(cipher, src, dst, len,
+				flags & CRYPTO_LSKCIPHER_FLAG_FINAL,
 				crypto_cipher_alg(cipher)->cia_encrypt);
 }
 
 static int crypto_ecb_decrypt2(struct crypto_lskcipher *tfm, const u8 *src,
-			       u8 *dst, unsigned len, u8 *iv, bool final)
+			       u8 *dst, unsigned len, u8 *iv, u32 flags)
 {
 	struct crypto_cipher **ctx = crypto_lskcipher_ctx(tfm);
 	struct crypto_cipher *cipher = *ctx;
 
-	return crypto_ecb_crypt(cipher, src, dst, len, final,
+	return crypto_ecb_crypt(cipher, src, dst, len,
+				flags & CRYPTO_LSKCIPHER_FLAG_FINAL,
 				crypto_cipher_alg(cipher)->cia_decrypt);
 }
 
diff --git a/crypto/lskcipher.c b/crypto/lskcipher.c
index 9edc89730951..51bcf85070c7 100644
--- a/crypto/lskcipher.c
+++ b/crypto/lskcipher.c
@@ -88,7 +88,7 @@ EXPORT_SYMBOL_GPL(crypto_lskcipher_setkey);
 static int crypto_lskcipher_crypt_unaligned(
 	struct crypto_lskcipher *tfm, const u8 *src, u8 *dst, unsigned len,
 	u8 *iv, int (*crypt)(struct crypto_lskcipher *tfm, const u8 *src,
-			     u8 *dst, unsigned len, u8 *iv, bool final))
+			     u8 *dst, unsigned len, u8 *iv, u32 flags))
 {
 	unsigned ivsize = crypto_lskcipher_ivsize(tfm);
 	unsigned bs = crypto_lskcipher_blocksize(tfm);
@@ -119,7 +119,7 @@ static int crypto_lskcipher_crypt_unaligned(
 			chunk &= ~(cs - 1);
 
 		memcpy(p, src, chunk);
-		err = crypt(tfm, p, p, chunk, tiv, true);
+		err = crypt(tfm, p, p, chunk, tiv, CRYPTO_LSKCIPHER_FLAG_FINAL);
 		if (err)
 			goto out;
 
@@ -143,7 +143,7 @@ static int crypto_lskcipher_crypt(struct crypto_lskcipher *tfm, const u8 *src,
 				  int (*crypt)(struct crypto_lskcipher *tfm,
 					       const u8 *src, u8 *dst,
 					       unsigned len, u8 *iv,
-					       bool final))
+					       u32 flags))
 {
 	unsigned long alignmask = crypto_lskcipher_alignmask(tfm);
 	struct lskcipher_alg *alg = crypto_lskcipher_alg(tfm);
@@ -156,7 +156,7 @@ static int crypto_lskcipher_crypt(struct crypto_lskcipher *tfm, const u8 *src,
 		goto out;
 	}
 
-	ret = crypt(tfm, src, dst, len, iv, true);
+	ret = crypt(tfm, src, dst, len, iv, CRYPTO_LSKCIPHER_FLAG_FINAL);
 
 out:
 	return crypto_lskcipher_errstat(alg, ret);
@@ -198,7 +198,7 @@ static int crypto_lskcipher_crypt_sg(struct skcipher_request *req,
 				     int (*crypt)(struct crypto_lskcipher *tfm,
 						  const u8 *src, u8 *dst,
 						  unsigned len, u8 *iv,
-						  bool final))
+						  u32 flags))
 {
 	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
 	struct crypto_lskcipher **ctx = crypto_skcipher_ctx(skcipher);
@@ -210,7 +210,9 @@ static int crypto_lskcipher_crypt_sg(struct skcipher_request *req,
 
 	while (walk.nbytes) {
 		err = crypt(tfm, walk.src.virt.addr, walk.dst.virt.addr,
-			    walk.nbytes, walk.iv, walk.nbytes == walk.total);
+			    walk.nbytes, walk.iv,
+			    walk.nbytes == walk.total ?
+			    CRYPTO_LSKCIPHER_FLAG_FINAL : 0);
 		err = skcipher_walk_done(&walk, err);
 	}
 
diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
index ea18af48346b..0cfbe86f957b 100644
--- a/include/crypto/skcipher.h
+++ b/include/crypto/skcipher.h
@@ -15,6 +15,17 @@
 #include <linux/string.h>
 #include <linux/types.h>
 
+/* Set this bit if the lskcipher operation is a continuation. */
+#define CRYPTO_LSKCIPHER_FLAG_CONT	0x00000001
+/* Set this bit if the lskcipher operation is final. */
+#define CRYPTO_LSKCIPHER_FLAG_FINAL	0x00000002
+/* The bit CRYPTO_TFM_REQ_MAY_SLEEP can also be set if needed. */
+
+/* Set this bit if the skcipher operation is a continuation. */
+#define CRYPTO_SKCIPHER_REQ_CONT	0x00000001
+/* Set this bit if the skcipher operation is not final. */
+#define CRYPTO_SKCIPHER_REQ_NOTFINAL	0x00000002
+
 struct scatterlist;
 
 /**
@@ -91,6 +102,7 @@ struct crypto_istat_cipher {
  *	    IV of exactly that size to perform the encrypt or decrypt operation.
  * @chunksize: Equal to the block size except for stream ciphers such as
  *	       CTR where it is set to the underlying block size.
+ * @statesize: Size of the internal state for the algorithm.
  * @stat: Statistics for cipher algorithm
  * @base: Definition of a generic crypto algorithm.
  */
@@ -99,6 +111,7 @@ struct crypto_istat_cipher {
 	unsigned int max_keysize;	\
 	unsigned int ivsize;		\
 	unsigned int chunksize;		\
+	unsigned int statesize;		\
 					\
 	SKCIPHER_ALG_COMMON_STAT	\
 					\
@@ -141,6 +154,17 @@ struct skcipher_alg_common SKCIPHER_ALG_COMMON;
  *	     be called in parallel with the same transformation object.
  * @decrypt: Decrypt a single block. This is a reverse counterpart to @encrypt
  *	     and the conditions are exactly the same.
+ * @export: Export partial state of the transformation. This function dumps the
+ *	    entire state of the ongoing transformation into a provided block of
+ *	    data so it can be @import 'ed back later on. This is useful in case
+ *	    you want to save partial result of the transformation after
+ *	    processing certain amount of data and reload this partial result
+ *	    multiple times later on for multiple re-use. No data processing
+ *	    happens at this point.
+ * @import: Import partial state of the transformation. This function loads the
+ *	    entire state of the ongoing transformation from a provided block of
+ *	    data so the transformation can continue from this point onward. No
+ *	    data processing happens at this point.
  * @init: Initialize the cryptographic transformation object. This function
  *	  is used to initialize the cryptographic transformation object.
  *	  This function is called only once at the instantiation time, right
@@ -170,6 +194,8 @@ struct skcipher_alg {
 	              unsigned int keylen);
 	int (*encrypt)(struct skcipher_request *req);
 	int (*decrypt)(struct skcipher_request *req);
+	int (*export)(struct skcipher_request *req, void *out);
+	int (*import)(struct skcipher_request *req, const void *in);
 	int (*init)(struct crypto_skcipher *tfm);
 	void (*exit)(struct crypto_skcipher *tfm);
 
@@ -200,6 +226,9 @@ struct skcipher_alg {
  *	     may be left over if length is not a multiple of blocks
  *	     and there is more to come (final == false).  The number of
  *	     left-over bytes should be returned in case of success.
+ *	     The siv field shall be as long as ivsize + statesize with
+ *	     the IV placed at the front.  The state will be used by the
+ *	     algorithm internally.
  * @decrypt: Decrypt a number of bytes. This is a reverse counterpart to
  *	     @encrypt and the conditions are exactly the same.
  * @init: Initialize the cryptographic transformation object. This function
@@ -215,9 +244,9 @@ struct lskcipher_alg {
 	int (*setkey)(struct crypto_lskcipher *tfm, const u8 *key,
 	              unsigned int keylen);
 	int (*encrypt)(struct crypto_lskcipher *tfm, const u8 *src,
-		       u8 *dst, unsigned len, u8 *iv, bool final);
+		       u8 *dst, unsigned len, u8 *siv, u32 flags);
 	int (*decrypt)(struct crypto_lskcipher *tfm, const u8 *src,
-		       u8 *dst, unsigned len, u8 *iv, bool final);
+		       u8 *dst, unsigned len, u8 *siv, u32 flags);
 	int (*init)(struct crypto_lskcipher *tfm);
 	void (*exit)(struct crypto_lskcipher *tfm);
 
@@ -496,6 +525,40 @@ static inline unsigned int crypto_lskcipher_chunksize(
 	return crypto_lskcipher_alg(tfm)->co.chunksize;
 }
 
+/**
+ * crypto_skcipher_statesize() - obtain state size
+ * @tfm: cipher handle
+ *
+ * Some algorithms cannot be chained with the IV alone.  They carry
+ * internal state which must be replicated if data is to be processed
+ * incrementally.  The size of that state can be obtained with this
+ * function.
+ *
+ * Return: state size in bytes
+ */
+static inline unsigned int crypto_skcipher_statesize(
+	struct crypto_skcipher *tfm)
+{
+	return crypto_skcipher_alg_common(tfm)->statesize;
+}
+
+/**
+ * crypto_lskcipher_statesize() - obtain state size
+ * @tfm: cipher handle
+ *
+ * Some algorithms cannot be chained with the IV alone.  They carry
+ * internal state which must be replicated if data is to be processed
+ * incrementally.  The size of that state can be obtained with this
+ * function.
+ *
+ * Return: state size in bytes
+ */
+static inline unsigned int crypto_lskcipher_statesize(
+	struct crypto_lskcipher *tfm)
+{
+	return crypto_lskcipher_alg(tfm)->co.statesize;
+}
+
 static inline unsigned int crypto_sync_skcipher_blocksize(
 	struct crypto_sync_skcipher *tfm)
 {

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH 2/4] crypto: skcipher - Make use of internal state
  2023-11-29  6:24                   ` [PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now) Herbert Xu
  2023-11-29  6:29                     ` [PATCH 1/4] crypto: skcipher - Add internal state support Herbert Xu
@ 2023-11-29  6:29                     ` Herbert Xu
  2023-11-29  6:29                     ` [PATCH 3/4] crypto: arc4 - Add " Herbert Xu
                                       ` (3 subsequent siblings)
  5 siblings, 0 replies; 50+ messages in thread
From: Herbert Xu @ 2023-11-29  6:29 UTC (permalink / raw)
  To: Eric Biggers, Linux Crypto Mailing List, Ard Biesheuvel

This patch adds code to the skcipher/lskcipher API to make use
of the internal state if present.  In particular, the skcipher
lskcipher wrapper will allocate a buffer for the IV/state and
feed that to the underlying lskcipher algorithm.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/lskcipher.c        |   34 ++++++++++++++++++++----
 crypto/skcipher.c         |   64 ++++++++++++++++++++++++++++++++++++++++++++--
 include/crypto/skcipher.h |   33 +++++++++++++++++++++++
 3 files changed, 123 insertions(+), 8 deletions(-)

diff --git a/crypto/lskcipher.c b/crypto/lskcipher.c
index 51bcf85070c7..e6b87787bd64 100644
--- a/crypto/lskcipher.c
+++ b/crypto/lskcipher.c
@@ -90,6 +90,7 @@ static int crypto_lskcipher_crypt_unaligned(
 	u8 *iv, int (*crypt)(struct crypto_lskcipher *tfm, const u8 *src,
 			     u8 *dst, unsigned len, u8 *iv, u32 flags))
 {
+	unsigned statesize = crypto_lskcipher_statesize(tfm);
 	unsigned ivsize = crypto_lskcipher_ivsize(tfm);
 	unsigned bs = crypto_lskcipher_blocksize(tfm);
 	unsigned cs = crypto_lskcipher_chunksize(tfm);
@@ -104,7 +105,7 @@ static int crypto_lskcipher_crypt_unaligned(
 	if (!tiv)
 		return -ENOMEM;
 
-	memcpy(tiv, iv, ivsize);
+	memcpy(tiv, iv, ivsize + statesize);
 
 	p = kmalloc(PAGE_SIZE, GFP_ATOMIC);
 	err = -ENOMEM;
@@ -132,7 +133,7 @@ static int crypto_lskcipher_crypt_unaligned(
 	err = len ? -EINVAL : 0;
 
 out:
-	memcpy(iv, tiv, ivsize);
+	memcpy(iv, tiv, ivsize + statesize);
 	kfree_sensitive(p);
 	kfree_sensitive(tiv);
 	return err;
@@ -197,25 +198,45 @@ EXPORT_SYMBOL_GPL(crypto_lskcipher_decrypt);
 static int crypto_lskcipher_crypt_sg(struct skcipher_request *req,
 				     int (*crypt)(struct crypto_lskcipher *tfm,
 						  const u8 *src, u8 *dst,
-						  unsigned len, u8 *iv,
+						  unsigned len, u8 *ivs,
 						  u32 flags))
 {
 	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
 	struct crypto_lskcipher **ctx = crypto_skcipher_ctx(skcipher);
+	u8 *ivs = skcipher_request_ctx(req);
 	struct crypto_lskcipher *tfm = *ctx;
 	struct skcipher_walk walk;
+	unsigned ivsize;
+	u32 flags;
 	int err;
 
+	ivsize = crypto_lskcipher_ivsize(tfm);
+	ivs = PTR_ALIGN(ivs, crypto_skcipher_alignmask(skcipher) + 1);
+
+	flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP;
+
+	if (req->base.flags & CRYPTO_SKCIPHER_REQ_CONT)
+		flags |= CRYPTO_LSKCIPHER_FLAG_CONT;
+	else
+		memcpy(ivs, req->iv, ivsize);
+
+	if (!(req->base.flags & CRYPTO_SKCIPHER_REQ_NOTFINAL))
+		flags |= CRYPTO_LSKCIPHER_FLAG_FINAL;
+
 	err = skcipher_walk_virt(&walk, req, false);
 
 	while (walk.nbytes) {
 		err = crypt(tfm, walk.src.virt.addr, walk.dst.virt.addr,
-			    walk.nbytes, walk.iv,
-			    walk.nbytes == walk.total ?
-			    CRYPTO_LSKCIPHER_FLAG_FINAL : 0);
+			    walk.nbytes, ivs,
+			    flags & ~(walk.nbytes == walk.total ?
+				      0 : CRYPTO_LSKCIPHER_FLAG_FINAL));
 		err = skcipher_walk_done(&walk, err);
+		flags |= CRYPTO_LSKCIPHER_FLAG_CONT;
 	}
 
+	if (flags & CRYPTO_LSKCIPHER_FLAG_FINAL)
+		memcpy(req->iv, ivs, ivsize);
+
 	return err;
 }
 
@@ -278,6 +299,7 @@ static void __maybe_unused crypto_lskcipher_show(
 	seq_printf(m, "max keysize  : %u\n", skcipher->co.max_keysize);
 	seq_printf(m, "ivsize       : %u\n", skcipher->co.ivsize);
 	seq_printf(m, "chunksize    : %u\n", skcipher->co.chunksize);
+	seq_printf(m, "statesize    : %u\n", skcipher->co.statesize);
 }
 
 static int __maybe_unused crypto_lskcipher_report(
diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index ac8b8c042654..b8e1d15c2807 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -698,6 +698,54 @@ int crypto_skcipher_decrypt(struct skcipher_request *req)
 }
 EXPORT_SYMBOL_GPL(crypto_skcipher_decrypt);
 
+static int crypto_lskcipher_export(struct skcipher_request *req, void *out)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	u8 *ivs = skcipher_request_ctx(req);
+
+	ivs = PTR_ALIGN(ivs, crypto_skcipher_alignmask(tfm) + 1);
+
+	memcpy(out, ivs + crypto_skcipher_ivsize(tfm),
+	       crypto_skcipher_statesize(tfm));
+
+	return 0;
+}
+
+static int crypto_lskcipher_import(struct skcipher_request *req, const void *in)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	u8 *ivs = skcipher_request_ctx(req);
+
+	ivs = PTR_ALIGN(ivs, crypto_skcipher_alignmask(tfm) + 1);
+
+	memcpy(ivs + crypto_skcipher_ivsize(tfm), in,
+	       crypto_skcipher_statesize(tfm));
+
+	return 0;
+}
+
+int crypto_skcipher_export(struct skcipher_request *req, void *out)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
+
+	if (alg->co.base.cra_type != &crypto_skcipher_type)
+		return crypto_lskcipher_export(req, out);
+	return alg->export(req, out);
+}
+EXPORT_SYMBOL_GPL(crypto_skcipher_export);
+
+int crypto_skcipher_import(struct skcipher_request *req, const void *in)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
+
+	if (alg->co.base.cra_type != &crypto_skcipher_type)
+		return crypto_lskcipher_import(req, in);
+	return alg->import(req, in);
+}
+EXPORT_SYMBOL_GPL(crypto_skcipher_import);
+
 static void crypto_skcipher_exit_tfm(struct crypto_tfm *tfm)
 {
 	struct crypto_skcipher *skcipher = __crypto_skcipher_cast(tfm);
@@ -713,8 +761,17 @@ static int crypto_skcipher_init_tfm(struct crypto_tfm *tfm)
 
 	skcipher_set_needkey(skcipher);
 
-	if (tfm->__crt_alg->cra_type != &crypto_skcipher_type)
+	if (tfm->__crt_alg->cra_type != &crypto_skcipher_type) {
+		unsigned am = crypto_skcipher_alignmask(skcipher);
+		unsigned reqsize;
+
+		reqsize = am & ~(crypto_tfm_ctx_alignment() - 1);
+		reqsize += crypto_skcipher_ivsize(skcipher);
+		reqsize += crypto_skcipher_statesize(skcipher);
+		crypto_skcipher_set_reqsize(skcipher, reqsize);
+
 		return crypto_init_lskcipher_ops_sg(tfm);
+	}
 
 	if (alg->exit)
 		skcipher->base.exit = crypto_skcipher_exit_tfm;
@@ -756,6 +813,7 @@ static void crypto_skcipher_show(struct seq_file *m, struct crypto_alg *alg)
 	seq_printf(m, "ivsize       : %u\n", skcipher->ivsize);
 	seq_printf(m, "chunksize    : %u\n", skcipher->chunksize);
 	seq_printf(m, "walksize     : %u\n", skcipher->walksize);
+	seq_printf(m, "statesize    : %u\n", skcipher->statesize);
 }
 
 static int __maybe_unused crypto_skcipher_report(
@@ -870,7 +928,9 @@ int skcipher_prepare_alg_common(struct skcipher_alg_common *alg)
 	struct crypto_istat_cipher *istat = skcipher_get_stat_common(alg);
 	struct crypto_alg *base = &alg->base;
 
-	if (alg->ivsize > PAGE_SIZE / 8 || alg->chunksize > PAGE_SIZE / 8)
+	if (alg->ivsize > PAGE_SIZE / 8 || alg->chunksize > PAGE_SIZE / 8 ||
+	    alg->statesize > PAGE_SIZE / 2 ||
+	    (alg->ivsize + alg->statesize) > PAGE_SIZE / 2)
 		return -EINVAL;
 
 	if (!alg->chunksize)
diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
index 0cfbe86f957b..b2faab27bed4 100644
--- a/include/crypto/skcipher.h
+++ b/include/crypto/skcipher.h
@@ -746,6 +746,39 @@ int crypto_skcipher_encrypt(struct skcipher_request *req);
  */
 int crypto_skcipher_decrypt(struct skcipher_request *req);
 
+/**
+ * crypto_skcipher_export() - export partial state
+ * @req: reference to the skcipher_request handle that holds all information
+ *	 needed to perform the operation
+ * @out: output buffer of sufficient size that can hold the state
+ *
+ * Export partial state of the transformation. This function dumps the
+ * entire state of the ongoing transformation into a provided block of
+ * data so it can be @import 'ed back later on. This is useful in case
+ * you want to save partial result of the transformation after
+ * processing certain amount of data and reload this partial result
+ * multiple times later on for multiple re-use. No data processing
+ * happens at this point.
+ *
+ * Return: 0 if the cipher operation was successful; < 0 if an error occurred
+ */
+int crypto_skcipher_export(struct skcipher_request *req, void *out);
+
+/**
+ * crypto_skcipher_import() - import partial state
+ * @req: reference to the skcipher_request handle that holds all information
+ *	 needed to perform the operation
+ * @in: buffer holding the state
+ *
+ * Import partial state of the transformation. This function loads the
+ * entire state of the ongoing transformation from a provided block of
+ * data so the transformation can continue from this point onward. No
+ * data processing happens at this point.
+ *
+ * Return: 0 if the cipher operation was successful; < 0 if an error occurred
+ */
+int crypto_skcipher_import(struct skcipher_request *req, const void *in);
+
 /**
  * crypto_lskcipher_encrypt() - encrypt plaintext
  * @tfm: lskcipher handle

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH 3/4] crypto: arc4 - Add internal state
  2023-11-29  6:24                   ` [PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now) Herbert Xu
  2023-11-29  6:29                     ` [PATCH 1/4] crypto: skcipher - Add internal state support Herbert Xu
  2023-11-29  6:29                     ` [PATCH 2/4] crypto: skcipher - Make use of internal state Herbert Xu
@ 2023-11-29  6:29                     ` Herbert Xu
  2023-11-29  6:29                     ` [PATCH 4/4] crypto: algif_skcipher - Fix stream cipher chaining Herbert Xu
                                       ` (2 subsequent siblings)
  5 siblings, 0 replies; 50+ messages in thread
From: Herbert Xu @ 2023-11-29  6:29 UTC (permalink / raw)
  To: Eric Biggers, Linux Crypto Mailing List, Ard Biesheuvel

The arc4 algorithm has always had internal state.  It's been buggy
from day one in that the state has been stored in the shared tfm
object.  That means two users sharing the same tfm will end up
affecting each other's output, or worse, they may end up with the
same output.

Fix this by declaring an internal state and storing the state there
instead of within the tfm context.
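
As an illustrative sketch (made-up identifiers, not part of this
patch), the hazard looks like this: before the change both calls below
would advance the one S-box kept in the shared tfm, so B's output
depends on how much A has already encrypted; afterwards each caller
supplies its own siv buffer of crypto_lskcipher_statesize() bytes and
the first (non-CONT) call seeds it from the freshly keyed tfm state:

static void sketch_shared_arc4_tfm(struct crypto_lskcipher *shared_tfm,
				   const u8 *a, u8 *a_out, unsigned int a_len,
				   const u8 *b, u8 *b_out, unsigned int b_len,
				   u8 *a_siv, u8 *b_siv)
{
	/* User A consumes some keystream... */
	crypto_lskcipher_encrypt(shared_tfm, a, a_out, a_len, a_siv);

	/*
	 * ...and user B still starts from the beginning of the keystream,
	 * because the state it advances now lives in b_siv rather than
	 * in the tfm that A has been using.
	 */
	crypto_lskcipher_encrypt(shared_tfm, b, b_out, b_len, b_siv);
}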

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/arc4.c |    8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/crypto/arc4.c b/crypto/arc4.c
index 2150f94e7d03..e285bfcef667 100644
--- a/crypto/arc4.c
+++ b/crypto/arc4.c
@@ -23,10 +23,15 @@ static int crypto_arc4_setkey(struct crypto_lskcipher *tfm, const u8 *in_key,
 }
 
 static int crypto_arc4_crypt(struct crypto_lskcipher *tfm, const u8 *src,
-			     u8 *dst, unsigned nbytes, u8 *iv, u32 flags)
+			     u8 *dst, unsigned nbytes, u8 *siv, u32 flags)
 {
 	struct arc4_ctx *ctx = crypto_lskcipher_ctx(tfm);
 
+	if (!(flags & CRYPTO_LSKCIPHER_FLAG_CONT))
+		memcpy(siv, ctx, sizeof(*ctx));
+
+	ctx = (struct arc4_ctx *)siv;
+
 	arc4_crypt(ctx, dst, src, nbytes);
 	return 0;
 }
@@ -48,6 +53,7 @@ static struct lskcipher_alg arc4_alg = {
 	.co.base.cra_module		=	THIS_MODULE,
 	.co.min_keysize			=	ARC4_MIN_KEY_SIZE,
 	.co.max_keysize			=	ARC4_MAX_KEY_SIZE,
+	.co.statesize			=	sizeof(struct arc4_ctx),
 	.setkey				=	crypto_arc4_setkey,
 	.encrypt			=	crypto_arc4_crypt,
 	.decrypt			=	crypto_arc4_crypt,

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [PATCH 4/4] crypto: algif_skcipher - Fix stream cipher chaining
  2023-11-29  6:24                   ` [PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now) Herbert Xu
                                       ` (2 preceding siblings ...)
  2023-11-29  6:29                     ` [PATCH 3/4] crypto: arc4 - Add " Herbert Xu
@ 2023-11-29  6:29                     ` Herbert Xu
  2023-11-29 21:04                     ` [PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now) Eric Biggers
  2023-11-30  9:55                     ` [v2 PATCH " Herbert Xu
  5 siblings, 0 replies; 50+ messages in thread
From: Herbert Xu @ 2023-11-29  6:29 UTC (permalink / raw)
  To: Eric Biggers, Linux Crypto Mailing List, Ard Biesheuvel

Unlike algif_aead which is always issued in one go (thus limiting
the maximum size of the request), algif_skcipher has always allowed
unlimited input data by cutting them up as necessary and feeding
the fragments to the underlying algorithm one at a time.

However, because of deficiencies in the API, this has been broken
for most stream ciphers such as arc4 or chacha.  This is because
they have an internal state in addition to the IV that must be
preserved in order to continue processing.

Fix this by using the new skcipher state API.
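
For reference, one way the broken case can be driven from userspace
looks roughly like this (abridged sketch; "ecb(arc4)" is just an
example name, and error checks plus the ALG_SET_OP/ALG_SET_IV control
messages are omitted since arc4 needs no IV and encrypts and decrypts
identically).  Before this series the second chunk would be produced
from a reset keystream instead of continuing the first one:

#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

#ifndef SOL_ALG
#define SOL_ALG 279
#endif

static void arc4_in_two_chunks(const uint8_t *key, unsigned int keylen,
			       const uint8_t *pt, uint8_t *ct, size_t len)
{
	struct sockaddr_alg sa = {
		.salg_family = AF_ALG,
		.salg_type = "skcipher",
		.salg_name = "ecb(arc4)",
	};
	int tfmfd, opfd;

	tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
	bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa));
	setsockopt(tfmfd, SOL_ALG, ALG_SET_KEY, key, keylen);
	opfd = accept(tfmfd, NULL, 0);

	/* First half, MSG_MORE: the kernel must keep the cipher state. */
	send(opfd, pt, len / 2, MSG_MORE);
	read(opfd, ct, len / 2);

	/* Second half continues the same keystream. */
	send(opfd, pt + len / 2, len - len / 2, 0);
	read(opfd, ct + len / 2, len - len / 2);

	close(opfd);
	close(tfmfd);
}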

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/algif_skcipher.c |   71 +++++++++++++++++++++++++++++++++++++++++++++---
 include/crypto/if_alg.h |    2 +
 2 files changed, 70 insertions(+), 3 deletions(-)

diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index 9ada9b741af8..59dcc6fc74a2 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -47,6 +47,52 @@ static int skcipher_sendmsg(struct socket *sock, struct msghdr *msg,
 	return af_alg_sendmsg(sock, msg, size, ivsize);
 }
 
+static int algif_skcipher_export(struct sock *sk, struct skcipher_request *req)
+{
+	struct alg_sock *ask = alg_sk(sk);
+	struct crypto_skcipher *tfm;
+	struct af_alg_ctx *ctx;
+	struct alg_sock *pask;
+	unsigned statesize;
+	struct sock *psk;
+	int err;
+
+	if (!(req->base.flags & CRYPTO_SKCIPHER_REQ_NOTFINAL))
+		return 0;
+
+	ctx = ask->private;
+	psk = ask->parent;
+	pask = alg_sk(psk);
+	tfm = pask->private;
+
+	statesize = crypto_skcipher_statesize(tfm);
+	ctx->state = sock_kmalloc(sk, statesize, GFP_ATOMIC);
+	if (!ctx->state)
+		return -ENOMEM;
+
+	err = crypto_skcipher_export(req, ctx->state);
+	if (err) {
+		sock_kzfree_s(sk, ctx->state, statesize);
+		ctx->state = NULL;
+	}
+
+	return err;
+}
+
+static void algif_skcipher_done(void *data, int err)
+{
+	struct af_alg_async_req *areq = data;
+	struct sock *sk = areq->sk;
+
+	if (err)
+		goto out;
+
+	err = algif_skcipher_export(sk, &areq->cra_u.skcipher_req);
+
+out:
+	af_alg_async_cb(data, err);
+}
+
 static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 			     size_t ignored, int flags)
 {
@@ -58,6 +104,7 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 	struct crypto_skcipher *tfm = pask->private;
 	unsigned int bs = crypto_skcipher_chunksize(tfm);
 	struct af_alg_async_req *areq;
+	unsigned cflags = 0;
 	int err = 0;
 	size_t len = 0;
 
@@ -82,8 +129,10 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 	 * If more buffers are to be expected to be processed, process only
 	 * full block size buffers.
 	 */
-	if (ctx->more || len < ctx->used)
+	if (ctx->more || len < ctx->used) {
 		len -= len % bs;
+		cflags |= CRYPTO_SKCIPHER_REQ_NOTFINAL;
+	}
 
 	/*
 	 * Create a per request TX SGL for this request which tracks the
@@ -107,6 +156,16 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 	skcipher_request_set_crypt(&areq->cra_u.skcipher_req, areq->tsgl,
 				   areq->first_rsgl.sgl.sgt.sgl, len, ctx->iv);
 
+	if (ctx->state) {
+		err = crypto_skcipher_import(&areq->cra_u.skcipher_req,
+					     ctx->state);
+		sock_kzfree_s(sk, ctx->state, crypto_skcipher_statesize(tfm));
+		ctx->state = NULL;
+		if (err)
+			goto free;
+		cflags |= CRYPTO_SKCIPHER_REQ_CONT;
+	}
+
 	if (msg->msg_iocb && !is_sync_kiocb(msg->msg_iocb)) {
 		/* AIO operation */
 		sock_hold(sk);
@@ -116,8 +175,9 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 		areq->outlen = len;
 
 		skcipher_request_set_callback(&areq->cra_u.skcipher_req,
+					      cflags |
 					      CRYPTO_TFM_REQ_MAY_SLEEP,
-					      af_alg_async_cb, areq);
+					      algif_skcipher_done, areq);
 		err = ctx->enc ?
 			crypto_skcipher_encrypt(&areq->cra_u.skcipher_req) :
 			crypto_skcipher_decrypt(&areq->cra_u.skcipher_req);
@@ -130,6 +190,7 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 	} else {
 		/* Synchronous operation */
 		skcipher_request_set_callback(&areq->cra_u.skcipher_req,
+					      cflags |
 					      CRYPTO_TFM_REQ_MAY_SLEEP |
 					      CRYPTO_TFM_REQ_MAY_BACKLOG,
 					      crypto_req_done, &ctx->wait);
@@ -137,8 +198,11 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 			crypto_skcipher_encrypt(&areq->cra_u.skcipher_req) :
 			crypto_skcipher_decrypt(&areq->cra_u.skcipher_req),
 						 &ctx->wait);
-	}
 
+		if (!err)
+			err = algif_skcipher_export(
+				sk, &areq->cra_u.skcipher_req);
+	}
 
 free:
 	af_alg_free_resources(areq);
@@ -301,6 +365,7 @@ static void skcipher_sock_destruct(struct sock *sk)
 
 	af_alg_pull_tsgl(sk, ctx->used, NULL, 0);
 	sock_kzfree_s(sk, ctx->iv, crypto_skcipher_ivsize(tfm));
+	sock_kzfree_s(sk, ctx->state, crypto_skcipher_statesize(tfm));
 	sock_kfree_s(sk, ctx, ctx->len);
 	af_alg_release_parent(sk);
 }
diff --git a/include/crypto/if_alg.h b/include/crypto/if_alg.h
index 08b803a4fcde..78ecaf5db04c 100644
--- a/include/crypto/if_alg.h
+++ b/include/crypto/if_alg.h
@@ -121,6 +121,7 @@ struct af_alg_async_req {
  *
  * @tsgl_list:		Link to TX SGL
  * @iv:			IV for cipher operation
+ * @state:		Existing state for continuing operation
  * @aead_assoclen:	Length of AAD for AEAD cipher operations
  * @completion:		Work queue for synchronous operation
  * @used:		TX bytes sent to kernel. This variable is used to
@@ -142,6 +143,7 @@ struct af_alg_ctx {
 	struct list_head tsgl_list;
 
 	void *iv;
+	void *state;
 	size_t aead_assoclen;
 
 	struct crypto_wait wait;

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* Re: [PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now)
  2023-11-29  6:24                   ` [PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now) Herbert Xu
                                       ` (3 preceding siblings ...)
  2023-11-29  6:29                     ` [PATCH 4/4] crypto: algif_skcipher - Fix stream cipher chaining Herbert Xu
@ 2023-11-29 21:04                     ` Eric Biggers
  2023-11-30  2:17                       ` Herbert Xu
  2023-11-30  9:55                     ` [v2 PATCH " Herbert Xu
  5 siblings, 1 reply; 50+ messages in thread
From: Eric Biggers @ 2023-11-29 21:04 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Linux Crypto Mailing List, Ard Biesheuvel

On Wed, Nov 29, 2023 at 02:24:18PM +0800, Herbert Xu wrote:
> On Mon, Nov 27, 2023 at 02:28:03PM -0800, Eric Biggers wrote:
> >
> > As far as I can tell, currently "chaining" is only implemented by CBC and CTR.
> > So this really seems like an issue in AF_ALG, not the skcipher API per se.
> > AF_ALG should not support splitting up encryption/decryption operations on
> > algorithms that don't support it.
> 
> Yes I can see your view.  But it really is only a very small number
> of algorithms (basically arc4 and chacha) that are currently broken
> in this way.  CTS is similarly broken but for a different reason.

I don't think that's accurate.  CBC and CTR are the only skciphers for which
this behavior is actually tested.  Everything else, not just stream ciphers but
all other skciphers, can be assumed to be broken.  Even when I added the tests
for "output IV" for CBC and CTR back in 2019 (because I perhaps
over-simplistically just considered those to be missing tests), many
implementations failed and had to be fixed.  So I think it's fair to say that
this is not really something that has ever actually been important or even
supported, despite what the intent of the algif_skcipher code may have been.  We
could choose to onboard new algorithms to that convention one by one, but we'd
need to add the tests and fix everything failing them, which will be a lot.

- Eric

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now)
  2023-11-29 21:04                     ` [PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now) Eric Biggers
@ 2023-11-30  2:17                       ` Herbert Xu
  0 siblings, 0 replies; 50+ messages in thread
From: Herbert Xu @ 2023-11-30  2:17 UTC (permalink / raw)
  To: Eric Biggers; +Cc: Linux Crypto Mailing List, Ard Biesheuvel

On Wed, Nov 29, 2023 at 01:04:21PM -0800, Eric Biggers wrote:
>
> I don't think that's accurate.  CBC and CTR are the only skciphers for which
> this behavior is actually tested.  Everything else, not just stream ciphers but
> all other skciphers, can be assumed to be broken.  Even when I added the tests
> for "output IV" for CBC and CTR back in 2019 (because I perhaps
> over-simplistically just considered those to be missing tests), many
> implementations failed and had to be fixed.  So I think it's fair to say that
> this is not really something that has ever actually been important or even
> supported, despite what the intent of the algif_skcipher code may have been.  We
> could choose to onboard new algorithms to that convention one by one, but we'd
> need to add the tests and fix everything failing them, which will be a lot.

OK I was perhaps a bit over the top, but it is certainly the case
that for IPsec encryption algorithms, all the underlying algorithms
are able to support chaining.  I concede that the majority of disk
encryption algorithms do not.

I'm not worried about the amount of work here since most of it could
be done at the same time as the lskcipher conversion, which is worthy in
and of itself.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 50+ messages in thread

* [v2 PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now)
  2023-11-29  6:24                   ` [PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now) Herbert Xu
                                       ` (4 preceding siblings ...)
  2023-11-29 21:04                     ` [PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now) Eric Biggers
@ 2023-11-30  9:55                     ` Herbert Xu
  2023-11-30  9:56                       ` [v2 PATCH 1/4] crypto: skcipher - Add internal state support Herbert Xu
                                         ` (4 more replies)
  5 siblings, 5 replies; 50+ messages in thread
From: Herbert Xu @ 2023-11-30  9:55 UTC (permalink / raw)
  To: Eric Biggers; +Cc: Linux Crypto Mailing List, Ard Biesheuvel

v2 fixes a crash when no export/import functions are provided.

This series of patches adds the ability to process a skcipher
request in a piecemeal fashion, which is currently only possible
for selected algorithms such as CBC and CTR.

Herbert Xu (4):
  crypto: skcipher - Add internal state support
  crypto: skcipher - Make use of internal state
  crypto: arc4 - Add internal state
  crypto: algif_skcipher - Fix stream cipher chaining

 crypto/algif_skcipher.c   |  71 +++++++++++++++++++++++++--
 crypto/arc4.c             |   8 ++-
 crypto/cbc.c              |   6 ++-
 crypto/ecb.c              |  10 ++--
 crypto/lskcipher.c        |  42 ++++++++++++----
 crypto/skcipher.c         |  80 +++++++++++++++++++++++++++++-
 include/crypto/if_alg.h   |   2 +
 include/crypto/skcipher.h | 100 +++++++++++++++++++++++++++++++++++++-
 8 files changed, 296 insertions(+), 23 deletions(-)

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 50+ messages in thread

* [v2 PATCH 1/4] crypto: skcipher - Add internal state support
  2023-11-30  9:55                     ` [v2 PATCH " Herbert Xu
@ 2023-11-30  9:56                       ` Herbert Xu
  2023-11-30  9:56                       ` [v2 PATCH 2/4] crypto: skcipher - Make use of internal state Herbert Xu
                                         ` (3 subsequent siblings)
  4 siblings, 0 replies; 50+ messages in thread
From: Herbert Xu @ 2023-11-30  9:56 UTC (permalink / raw)
  To: Eric Biggers, Linux Crypto Mailing List, Ard Biesheuvel

Unlike chaining modes such as CBC, stream ciphers other than CTR
usually hold an internal state that must be preserved if the
operation is to be done piecemeal.  This has not been represented
in the API, resulting in the inability to split up stream cipher
operations.

This patch adds the basic representation of an internal state to
skcipher and lskcipher.  In the interest of backwards compatibility,
the default has been set such that existing users are assumed to
be operating in one go as opposed to piecemeal.

With the new API, each lskcipher/skcipher algorithm has a new
attribute called statesize.  For skcipher, this is the size of
the buffer that can be exported or imported similar to ahash.
For lskcipher, instead of providing a buffer of ivsize, the user
now has to provide a buffer of ivsize + statesize.

Each skcipher operation is assumed to be final as they are now,
but this may be overridden with a request flag.  When the override
occurs, the user may then export the partial state and reimport
it later.

For lskcipher operations this is reversed.  All operations are
not final and the state will be exported unless the FINAL bit is
set.  However, the CONT bit still has to be set for the state
to be used.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/arc4.c             |    2 -
 crypto/cbc.c              |    6 ++--
 crypto/ecb.c              |   10 ++++--
 crypto/lskcipher.c        |   14 +++++----
 include/crypto/skcipher.h |   67 ++++++++++++++++++++++++++++++++++++++++++++--
 5 files changed, 84 insertions(+), 15 deletions(-)

diff --git a/crypto/arc4.c b/crypto/arc4.c
index eb3590dc9282..2150f94e7d03 100644
--- a/crypto/arc4.c
+++ b/crypto/arc4.c
@@ -23,7 +23,7 @@ static int crypto_arc4_setkey(struct crypto_lskcipher *tfm, const u8 *in_key,
 }
 
 static int crypto_arc4_crypt(struct crypto_lskcipher *tfm, const u8 *src,
-			     u8 *dst, unsigned nbytes, u8 *iv, bool final)
+			     u8 *dst, unsigned nbytes, u8 *iv, u32 flags)
 {
 	struct arc4_ctx *ctx = crypto_lskcipher_ctx(tfm);
 
diff --git a/crypto/cbc.c b/crypto/cbc.c
index 28345b8d921c..eedddef9ce40 100644
--- a/crypto/cbc.c
+++ b/crypto/cbc.c
@@ -51,9 +51,10 @@ static int crypto_cbc_encrypt_inplace(struct crypto_lskcipher *tfm,
 }
 
 static int crypto_cbc_encrypt(struct crypto_lskcipher *tfm, const u8 *src,
-			      u8 *dst, unsigned len, u8 *iv, bool final)
+			      u8 *dst, unsigned len, u8 *iv, u32 flags)
 {
 	struct crypto_lskcipher **ctx = crypto_lskcipher_ctx(tfm);
+	bool final = flags & CRYPTO_LSKCIPHER_FLAG_FINAL;
 	struct crypto_lskcipher *cipher = *ctx;
 	int rem;
 
@@ -119,9 +120,10 @@ static int crypto_cbc_decrypt_inplace(struct crypto_lskcipher *tfm,
 }
 
 static int crypto_cbc_decrypt(struct crypto_lskcipher *tfm, const u8 *src,
-			      u8 *dst, unsigned len, u8 *iv, bool final)
+			      u8 *dst, unsigned len, u8 *iv, u32 flags)
 {
 	struct crypto_lskcipher **ctx = crypto_lskcipher_ctx(tfm);
+	bool final = flags & CRYPTO_LSKCIPHER_FLAG_FINAL;
 	struct crypto_lskcipher *cipher = *ctx;
 	int rem;
 
diff --git a/crypto/ecb.c b/crypto/ecb.c
index cc7625d1a475..e3a67789050e 100644
--- a/crypto/ecb.c
+++ b/crypto/ecb.c
@@ -32,22 +32,24 @@ static int crypto_ecb_crypt(struct crypto_cipher *cipher, const u8 *src,
 }
 
 static int crypto_ecb_encrypt2(struct crypto_lskcipher *tfm, const u8 *src,
-			       u8 *dst, unsigned len, u8 *iv, bool final)
+			       u8 *dst, unsigned len, u8 *iv, u32 flags)
 {
 	struct crypto_cipher **ctx = crypto_lskcipher_ctx(tfm);
 	struct crypto_cipher *cipher = *ctx;
 
-	return crypto_ecb_crypt(cipher, src, dst, len, final,
+	return crypto_ecb_crypt(cipher, src, dst, len,
+				flags & CRYPTO_LSKCIPHER_FLAG_FINAL,
 				crypto_cipher_alg(cipher)->cia_encrypt);
 }
 
 static int crypto_ecb_decrypt2(struct crypto_lskcipher *tfm, const u8 *src,
-			       u8 *dst, unsigned len, u8 *iv, bool final)
+			       u8 *dst, unsigned len, u8 *iv, u32 flags)
 {
 	struct crypto_cipher **ctx = crypto_lskcipher_ctx(tfm);
 	struct crypto_cipher *cipher = *ctx;
 
-	return crypto_ecb_crypt(cipher, src, dst, len, final,
+	return crypto_ecb_crypt(cipher, src, dst, len,
+				flags & CRYPTO_LSKCIPHER_FLAG_FINAL,
 				crypto_cipher_alg(cipher)->cia_decrypt);
 }
 
diff --git a/crypto/lskcipher.c b/crypto/lskcipher.c
index 9edc89730951..51bcf85070c7 100644
--- a/crypto/lskcipher.c
+++ b/crypto/lskcipher.c
@@ -88,7 +88,7 @@ EXPORT_SYMBOL_GPL(crypto_lskcipher_setkey);
 static int crypto_lskcipher_crypt_unaligned(
 	struct crypto_lskcipher *tfm, const u8 *src, u8 *dst, unsigned len,
 	u8 *iv, int (*crypt)(struct crypto_lskcipher *tfm, const u8 *src,
-			     u8 *dst, unsigned len, u8 *iv, bool final))
+			     u8 *dst, unsigned len, u8 *iv, u32 flags))
 {
 	unsigned ivsize = crypto_lskcipher_ivsize(tfm);
 	unsigned bs = crypto_lskcipher_blocksize(tfm);
@@ -119,7 +119,7 @@ static int crypto_lskcipher_crypt_unaligned(
 			chunk &= ~(cs - 1);
 
 		memcpy(p, src, chunk);
-		err = crypt(tfm, p, p, chunk, tiv, true);
+		err = crypt(tfm, p, p, chunk, tiv, CRYPTO_LSKCIPHER_FLAG_FINAL);
 		if (err)
 			goto out;
 
@@ -143,7 +143,7 @@ static int crypto_lskcipher_crypt(struct crypto_lskcipher *tfm, const u8 *src,
 				  int (*crypt)(struct crypto_lskcipher *tfm,
 					       const u8 *src, u8 *dst,
 					       unsigned len, u8 *iv,
-					       bool final))
+					       u32 flags))
 {
 	unsigned long alignmask = crypto_lskcipher_alignmask(tfm);
 	struct lskcipher_alg *alg = crypto_lskcipher_alg(tfm);
@@ -156,7 +156,7 @@ static int crypto_lskcipher_crypt(struct crypto_lskcipher *tfm, const u8 *src,
 		goto out;
 	}
 
-	ret = crypt(tfm, src, dst, len, iv, true);
+	ret = crypt(tfm, src, dst, len, iv, CRYPTO_LSKCIPHER_FLAG_FINAL);
 
 out:
 	return crypto_lskcipher_errstat(alg, ret);
@@ -198,7 +198,7 @@ static int crypto_lskcipher_crypt_sg(struct skcipher_request *req,
 				     int (*crypt)(struct crypto_lskcipher *tfm,
 						  const u8 *src, u8 *dst,
 						  unsigned len, u8 *iv,
-						  bool final))
+						  u32 flags))
 {
 	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
 	struct crypto_lskcipher **ctx = crypto_skcipher_ctx(skcipher);
@@ -210,7 +210,9 @@ static int crypto_lskcipher_crypt_sg(struct skcipher_request *req,
 
 	while (walk.nbytes) {
 		err = crypt(tfm, walk.src.virt.addr, walk.dst.virt.addr,
-			    walk.nbytes, walk.iv, walk.nbytes == walk.total);
+			    walk.nbytes, walk.iv,
+			    walk.nbytes == walk.total ?
+			    CRYPTO_LSKCIPHER_FLAG_FINAL : 0);
 		err = skcipher_walk_done(&walk, err);
 	}
 
diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
index ea18af48346b..0cfbe86f957b 100644
--- a/include/crypto/skcipher.h
+++ b/include/crypto/skcipher.h
@@ -15,6 +15,17 @@
 #include <linux/string.h>
 #include <linux/types.h>
 
+/* Set this bit if the lskcipher operation is a continuation. */
+#define CRYPTO_LSKCIPHER_FLAG_CONT	0x00000001
+/* Set this bit if the lskcipher operation is final. */
+#define CRYPTO_LSKCIPHER_FLAG_FINAL	0x00000002
+/* The bit CRYPTO_TFM_REQ_MAY_SLEEP can also be set if needed. */
+
+/* Set this bit if the skcipher operation is a continuation. */
+#define CRYPTO_SKCIPHER_REQ_CONT	0x00000001
+/* Set this bit if the skcipher operation is not final. */
+#define CRYPTO_SKCIPHER_REQ_NOTFINAL	0x00000002
+
 struct scatterlist;
 
 /**
@@ -91,6 +102,7 @@ struct crypto_istat_cipher {
  *	    IV of exactly that size to perform the encrypt or decrypt operation.
  * @chunksize: Equal to the block size except for stream ciphers such as
  *	       CTR where it is set to the underlying block size.
+ * @statesize: Size of the internal state for the algorithm.
  * @stat: Statistics for cipher algorithm
  * @base: Definition of a generic crypto algorithm.
  */
@@ -99,6 +111,7 @@ struct crypto_istat_cipher {
 	unsigned int max_keysize;	\
 	unsigned int ivsize;		\
 	unsigned int chunksize;		\
+	unsigned int statesize;		\
 					\
 	SKCIPHER_ALG_COMMON_STAT	\
 					\
@@ -141,6 +154,17 @@ struct skcipher_alg_common SKCIPHER_ALG_COMMON;
  *	     be called in parallel with the same transformation object.
  * @decrypt: Decrypt a single block. This is a reverse counterpart to @encrypt
  *	     and the conditions are exactly the same.
+ * @export: Export partial state of the transformation. This function dumps the
+ *	    entire state of the ongoing transformation into a provided block of
+ *	    data so it can be @import 'ed back later on. This is useful in case
+ *	    you want to save partial result of the transformation after
+ *	    processing certain amount of data and reload this partial result
+ *	    multiple times later on for multiple re-use. No data processing
+ *	    happens at this point.
+ * @import: Import partial state of the transformation. This function loads the
+ *	    entire state of the ongoing transformation from a provided block of
+ *	    data so the transformation can continue from this point onward. No
+ *	    data processing happens at this point.
  * @init: Initialize the cryptographic transformation object. This function
  *	  is used to initialize the cryptographic transformation object.
  *	  This function is called only once at the instantiation time, right
@@ -170,6 +194,8 @@ struct skcipher_alg {
 	              unsigned int keylen);
 	int (*encrypt)(struct skcipher_request *req);
 	int (*decrypt)(struct skcipher_request *req);
+	int (*export)(struct skcipher_request *req, void *out);
+	int (*import)(struct skcipher_request *req, const void *in);
 	int (*init)(struct crypto_skcipher *tfm);
 	void (*exit)(struct crypto_skcipher *tfm);
 
@@ -200,6 +226,9 @@ struct skcipher_alg {
  *	     may be left over if length is not a multiple of blocks
  *	     and there is more to come (final == false).  The number of
  *	     left-over bytes should be returned in case of success.
+ *	     The siv field shall be as long as ivsize + statesize with
+ *	     the IV placed at the front.  The state will be used by the
+ *	     algorithm internally.
  * @decrypt: Decrypt a number of bytes. This is a reverse counterpart to
  *	     @encrypt and the conditions are exactly the same.
  * @init: Initialize the cryptographic transformation object. This function
@@ -215,9 +244,9 @@ struct lskcipher_alg {
 	int (*setkey)(struct crypto_lskcipher *tfm, const u8 *key,
 	              unsigned int keylen);
 	int (*encrypt)(struct crypto_lskcipher *tfm, const u8 *src,
-		       u8 *dst, unsigned len, u8 *iv, bool final);
+		       u8 *dst, unsigned len, u8 *siv, u32 flags);
 	int (*decrypt)(struct crypto_lskcipher *tfm, const u8 *src,
-		       u8 *dst, unsigned len, u8 *iv, bool final);
+		       u8 *dst, unsigned len, u8 *siv, u32 flags);
 	int (*init)(struct crypto_lskcipher *tfm);
 	void (*exit)(struct crypto_lskcipher *tfm);
 
@@ -496,6 +525,40 @@ static inline unsigned int crypto_lskcipher_chunksize(
 	return crypto_lskcipher_alg(tfm)->co.chunksize;
 }
 
+/**
+ * crypto_skcipher_statesize() - obtain state size
+ * @tfm: cipher handle
+ *
+ * Some algorithms cannot be chained with the IV alone.  They carry
+ * internal state which must be replicated if data is to be processed
+ * incrementally.  The size of that state can be obtained with this
+ * function.
+ *
+ * Return: state size in bytes
+ */
+static inline unsigned int crypto_skcipher_statesize(
+	struct crypto_skcipher *tfm)
+{
+	return crypto_skcipher_alg_common(tfm)->statesize;
+}
+
+/**
+ * crypto_lskcipher_statesize() - obtain state size
+ * @tfm: cipher handle
+ *
+ * Some algorithms cannot be chained with the IV alone.  They carry
+ * internal state which must be replicated if data is to be processed
+ * incrementally.  The size of that state can be obtained with this
+ * function.
+ *
+ * Return: state size in bytes
+ */
+static inline unsigned int crypto_lskcipher_statesize(
+	struct crypto_lskcipher *tfm)
+{
+	return crypto_lskcipher_alg(tfm)->co.statesize;
+}
+
 static inline unsigned int crypto_sync_skcipher_blocksize(
 	struct crypto_sync_skcipher *tfm)
 {

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [v2 PATCH 2/4] crypto: skcipher - Make use of internal state
  2023-11-30  9:55                     ` [v2 PATCH " Herbert Xu
  2023-11-30  9:56                       ` [v2 PATCH 1/4] crypto: skcipher - Add internal state support Herbert Xu
@ 2023-11-30  9:56                       ` Herbert Xu
  2023-11-30  9:56                       ` [v2 PATCH 3/4] crypto: arc4 - Add " Herbert Xu
                                         ` (2 subsequent siblings)
  4 siblings, 0 replies; 50+ messages in thread
From: Herbert Xu @ 2023-11-30  9:56 UTC (permalink / raw)
  To: Eric Biggers, Linux Crypto Mailing List, Ard Biesheuvel

This patch adds code to the skcipher/lskcipher API to make use
of the internal state if present.  In particular, the skcipher
wrapper around lskcipher will allocate a buffer for the IV/state
and feed that to the underlying lskcipher algorithm.
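
As a rough illustration (not part of this patch), the buffer lives
in the skcipher request context and is laid out as the IV followed
by the algorithm state.  The helper name below is made up, but the
calls mirror what the patch does:

	/*
	 * Illustrative sketch only: the lskcipher wrapper reserves
	 * room in the request context for an aligned copy of the IV
	 * followed by the algorithm state:
	 *
	 *	ivs -> [ iv (ivsize bytes) | state (statesize bytes) ]
	 */
	static u8 *example_request_ivs(struct skcipher_request *req)
	{
		struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
		u8 *ivs = skcipher_request_ctx(req);

		/* Align the buffer before handing it to the lskcipher. */
		return PTR_ALIGN(ivs, crypto_skcipher_alignmask(tfm) + 1);
	}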

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/lskcipher.c        |   34 ++++++++++++++++---
 crypto/skcipher.c         |   80 ++++++++++++++++++++++++++++++++++++++++++++--
 include/crypto/skcipher.h |   33 ++++++++++++++++++
 3 files changed, 139 insertions(+), 8 deletions(-)

diff --git a/crypto/lskcipher.c b/crypto/lskcipher.c
index 51bcf85070c7..e6b87787bd64 100644
--- a/crypto/lskcipher.c
+++ b/crypto/lskcipher.c
@@ -90,6 +90,7 @@ static int crypto_lskcipher_crypt_unaligned(
 	u8 *iv, int (*crypt)(struct crypto_lskcipher *tfm, const u8 *src,
 			     u8 *dst, unsigned len, u8 *iv, u32 flags))
 {
+	unsigned statesize = crypto_lskcipher_statesize(tfm);
 	unsigned ivsize = crypto_lskcipher_ivsize(tfm);
 	unsigned bs = crypto_lskcipher_blocksize(tfm);
 	unsigned cs = crypto_lskcipher_chunksize(tfm);
@@ -104,7 +105,7 @@ static int crypto_lskcipher_crypt_unaligned(
 	if (!tiv)
 		return -ENOMEM;
 
-	memcpy(tiv, iv, ivsize);
+	memcpy(tiv, iv, ivsize + statesize);
 
 	p = kmalloc(PAGE_SIZE, GFP_ATOMIC);
 	err = -ENOMEM;
@@ -132,7 +133,7 @@ static int crypto_lskcipher_crypt_unaligned(
 	err = len ? -EINVAL : 0;
 
 out:
-	memcpy(iv, tiv, ivsize);
+	memcpy(iv, tiv, ivsize + statesize);
 	kfree_sensitive(p);
 	kfree_sensitive(tiv);
 	return err;
@@ -197,25 +198,45 @@ EXPORT_SYMBOL_GPL(crypto_lskcipher_decrypt);
 static int crypto_lskcipher_crypt_sg(struct skcipher_request *req,
 				     int (*crypt)(struct crypto_lskcipher *tfm,
 						  const u8 *src, u8 *dst,
-						  unsigned len, u8 *iv,
+						  unsigned len, u8 *ivs,
 						  u32 flags))
 {
 	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
 	struct crypto_lskcipher **ctx = crypto_skcipher_ctx(skcipher);
+	u8 *ivs = skcipher_request_ctx(req);
 	struct crypto_lskcipher *tfm = *ctx;
 	struct skcipher_walk walk;
+	unsigned ivsize;
+	u32 flags;
 	int err;
 
+	ivsize = crypto_lskcipher_ivsize(tfm);
+	ivs = PTR_ALIGN(ivs, crypto_skcipher_alignmask(skcipher) + 1);
+
+	flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP;
+
+	if (req->base.flags & CRYPTO_SKCIPHER_REQ_CONT)
+		flags |= CRYPTO_LSKCIPHER_FLAG_CONT;
+	else
+		memcpy(ivs, req->iv, ivsize);
+
+	if (!(req->base.flags & CRYPTO_SKCIPHER_REQ_NOTFINAL))
+		flags |= CRYPTO_LSKCIPHER_FLAG_FINAL;
+
 	err = skcipher_walk_virt(&walk, req, false);
 
 	while (walk.nbytes) {
 		err = crypt(tfm, walk.src.virt.addr, walk.dst.virt.addr,
-			    walk.nbytes, walk.iv,
-			    walk.nbytes == walk.total ?
-			    CRYPTO_LSKCIPHER_FLAG_FINAL : 0);
+			    walk.nbytes, ivs,
+			    flags & ~(walk.nbytes == walk.total ?
+				      0 : CRYPTO_LSKCIPHER_FLAG_FINAL));
 		err = skcipher_walk_done(&walk, err);
+		flags |= CRYPTO_LSKCIPHER_FLAG_CONT;
 	}
 
+	if (flags & CRYPTO_LSKCIPHER_FLAG_FINAL)
+		memcpy(req->iv, ivs, ivsize);
+
 	return err;
 }
 
@@ -278,6 +299,7 @@ static void __maybe_unused crypto_lskcipher_show(
 	seq_printf(m, "max keysize  : %u\n", skcipher->co.max_keysize);
 	seq_printf(m, "ivsize       : %u\n", skcipher->co.ivsize);
 	seq_printf(m, "chunksize    : %u\n", skcipher->co.chunksize);
+	seq_printf(m, "statesize    : %u\n", skcipher->co.statesize);
 }
 
 static int __maybe_unused crypto_lskcipher_report(
diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index ac8b8c042654..bc70e159d27d 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -698,6 +698,64 @@ int crypto_skcipher_decrypt(struct skcipher_request *req)
 }
 EXPORT_SYMBOL_GPL(crypto_skcipher_decrypt);
 
+static int crypto_lskcipher_export(struct skcipher_request *req, void *out)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	u8 *ivs = skcipher_request_ctx(req);
+
+	ivs = PTR_ALIGN(ivs, crypto_skcipher_alignmask(tfm) + 1);
+
+	memcpy(out, ivs + crypto_skcipher_ivsize(tfm),
+	       crypto_skcipher_statesize(tfm));
+
+	return 0;
+}
+
+static int crypto_lskcipher_import(struct skcipher_request *req, const void *in)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	u8 *ivs = skcipher_request_ctx(req);
+
+	ivs = PTR_ALIGN(ivs, crypto_skcipher_alignmask(tfm) + 1);
+
+	memcpy(ivs + crypto_skcipher_ivsize(tfm), in,
+	       crypto_skcipher_statesize(tfm));
+
+	return 0;
+}
+
+static int skcipher_noexport(struct skcipher_request *req, void *out)
+{
+	return 0;
+}
+
+static int skcipher_noimport(struct skcipher_request *req, const void *in)
+{
+	return 0;
+}
+
+int crypto_skcipher_export(struct skcipher_request *req, void *out)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
+
+	if (alg->co.base.cra_type != &crypto_skcipher_type)
+		return crypto_lskcipher_export(req, out);
+	return alg->export(req, out);
+}
+EXPORT_SYMBOL_GPL(crypto_skcipher_export);
+
+int crypto_skcipher_import(struct skcipher_request *req, const void *in)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
+
+	if (alg->co.base.cra_type != &crypto_skcipher_type)
+		return crypto_lskcipher_import(req, in);
+	return alg->import(req, in);
+}
+EXPORT_SYMBOL_GPL(crypto_skcipher_import);
+
 static void crypto_skcipher_exit_tfm(struct crypto_tfm *tfm)
 {
 	struct crypto_skcipher *skcipher = __crypto_skcipher_cast(tfm);
@@ -713,8 +771,17 @@ static int crypto_skcipher_init_tfm(struct crypto_tfm *tfm)
 
 	skcipher_set_needkey(skcipher);
 
-	if (tfm->__crt_alg->cra_type != &crypto_skcipher_type)
+	if (tfm->__crt_alg->cra_type != &crypto_skcipher_type) {
+		unsigned am = crypto_skcipher_alignmask(skcipher);
+		unsigned reqsize;
+
+		reqsize = am & ~(crypto_tfm_ctx_alignment() - 1);
+		reqsize += crypto_skcipher_ivsize(skcipher);
+		reqsize += crypto_skcipher_statesize(skcipher);
+		crypto_skcipher_set_reqsize(skcipher, reqsize);
+
 		return crypto_init_lskcipher_ops_sg(tfm);
+	}
 
 	if (alg->exit)
 		skcipher->base.exit = crypto_skcipher_exit_tfm;
@@ -756,6 +823,7 @@ static void crypto_skcipher_show(struct seq_file *m, struct crypto_alg *alg)
 	seq_printf(m, "ivsize       : %u\n", skcipher->ivsize);
 	seq_printf(m, "chunksize    : %u\n", skcipher->chunksize);
 	seq_printf(m, "walksize     : %u\n", skcipher->walksize);
+	seq_printf(m, "statesize    : %u\n", skcipher->statesize);
 }
 
 static int __maybe_unused crypto_skcipher_report(
@@ -870,7 +938,9 @@ int skcipher_prepare_alg_common(struct skcipher_alg_common *alg)
 	struct crypto_istat_cipher *istat = skcipher_get_stat_common(alg);
 	struct crypto_alg *base = &alg->base;
 
-	if (alg->ivsize > PAGE_SIZE / 8 || alg->chunksize > PAGE_SIZE / 8)
+	if (alg->ivsize > PAGE_SIZE / 8 || alg->chunksize > PAGE_SIZE / 8 ||
+	    alg->statesize > PAGE_SIZE / 2 ||
+	    (alg->ivsize + alg->statesize) > PAGE_SIZE / 2)
 		return -EINVAL;
 
 	if (!alg->chunksize)
@@ -899,6 +969,12 @@ static int skcipher_prepare_alg(struct skcipher_alg *alg)
 	if (!alg->walksize)
 		alg->walksize = alg->chunksize;
 
+	if (!alg->statesize) {
+		alg->import = skcipher_noimport;
+		alg->export = skcipher_noexport;
+	} else if (!(alg->import && alg->export))
+		return -EINVAL;
+
 	base->cra_type = &crypto_skcipher_type;
 	base->cra_flags |= CRYPTO_ALG_TYPE_SKCIPHER;
 
diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
index 0cfbe86f957b..b2faab27bed4 100644
--- a/include/crypto/skcipher.h
+++ b/include/crypto/skcipher.h
@@ -746,6 +746,39 @@ int crypto_skcipher_encrypt(struct skcipher_request *req);
  */
 int crypto_skcipher_decrypt(struct skcipher_request *req);
 
+/**
+ * crypto_skcipher_export() - export partial state
+ * @req: reference to the skcipher_request handle that holds all information
+ *	 needed to perform the operation
+ * @out: output buffer of sufficient size that can hold the state
+ *
+ * Export partial state of the transformation. This function dumps the
+ * entire state of the ongoing transformation into a provided block of
+ * data so it can be @import 'ed back later on. This is useful in case
+ * you want to save partial result of the transformation after
+ * processing certain amount of data and reload this partial result
+ * multiple times later on for multiple re-use. No data processing
+ * happens at this point.
+ *
+ * Return: 0 if the cipher operation was successful; < 0 if an error occurred
+ */
+int crypto_skcipher_export(struct skcipher_request *req, void *out);
+
+/**
+ * crypto_skcipher_import() - import partial state
+ * @req: reference to the skcipher_request handle that holds all information
+ *	 needed to perform the operation
+ * @in: buffer holding the state
+ *
+ * Import partial state of the transformation. This function loads the
+ * entire state of the ongoing transformation from a provided block of
+ * data so the transformation can continue from this point onward. No
+ * data processing happens at this point.
+ *
+ * Return: 0 if the cipher operation was successful; < 0 if an error occurred
+ */
+int crypto_skcipher_import(struct skcipher_request *req, const void *in);
+
 /**
  * crypto_lskcipher_encrypt() - encrypt plaintext
  * @tfm: lskcipher handle

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [v2 PATCH 3/4] crypto: arc4 - Add internal state
  2023-11-30  9:55                     ` [v2 PATCH " Herbert Xu
  2023-11-30  9:56                       ` [v2 PATCH 1/4] crypto: skcipher - Add internal state support Herbert Xu
  2023-11-30  9:56                       ` [v2 PATCH 2/4] crypto: skcipher - Make use of internal state Herbert Xu
@ 2023-11-30  9:56                       ` Herbert Xu
  2023-11-30  9:56                       ` [v2 PATCH 4/4] crypto: algif_skcipher - Fix stream cipher chaining Herbert Xu
  2023-12-02  3:49                       ` [v3 PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now) Herbert Xu
  4 siblings, 0 replies; 50+ messages in thread
From: Herbert Xu @ 2023-11-30  9:56 UTC (permalink / raw)
  To: Eric Biggers, Linux Crypto Mailing List, Ard Biesheuvel

The arc4 algorithm has always had internal state.  It's been buggy
from day one in that the state has been stored in the shared tfm
object.  That means two users sharing the same tfm will end up
affecting each other's output, or worse, they may end up with the
same output.

Fix this by declaring an internal state and storing the state there
instead of within the tfm context.
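
For callers of the lskcipher interface this means the siv buffer
passed in must now be ivsize + statesize bytes long.  A minimal
sketch (illustrative only, not part of this patch; the function
name is made up):

	/*
	 * Sketch only: one-shot encryption through the lskcipher API
	 * after this change.  The running RC4 state lives in the
	 * caller-supplied siv buffer instead of in the shared tfm.
	 */
	static int example_arc4_encrypt(struct crypto_lskcipher *tfm,
					const u8 *src, u8 *dst,
					unsigned int len)
	{
		unsigned int sivlen = crypto_lskcipher_ivsize(tfm) +
				      crypto_lskcipher_statesize(tfm);
		u8 *siv = kzalloc(sivlen, GFP_KERNEL);
		int err;

		if (!siv)
			return -ENOMEM;

		err = crypto_lskcipher_encrypt(tfm, src, dst, len, siv);

		kfree_sensitive(siv);
		return err;
	}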

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/arc4.c |    8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/crypto/arc4.c b/crypto/arc4.c
index 2150f94e7d03..e285bfcef667 100644
--- a/crypto/arc4.c
+++ b/crypto/arc4.c
@@ -23,10 +23,15 @@ static int crypto_arc4_setkey(struct crypto_lskcipher *tfm, const u8 *in_key,
 }
 
 static int crypto_arc4_crypt(struct crypto_lskcipher *tfm, const u8 *src,
-			     u8 *dst, unsigned nbytes, u8 *iv, u32 flags)
+			     u8 *dst, unsigned nbytes, u8 *siv, u32 flags)
 {
 	struct arc4_ctx *ctx = crypto_lskcipher_ctx(tfm);
 
+	if (!(flags & CRYPTO_LSKCIPHER_FLAG_CONT))
+		memcpy(siv, ctx, sizeof(*ctx));
+
+	ctx = (struct arc4_ctx *)siv;
+
 	arc4_crypt(ctx, dst, src, nbytes);
 	return 0;
 }
@@ -48,6 +53,7 @@ static struct lskcipher_alg arc4_alg = {
 	.co.base.cra_module		=	THIS_MODULE,
 	.co.min_keysize			=	ARC4_MIN_KEY_SIZE,
 	.co.max_keysize			=	ARC4_MAX_KEY_SIZE,
+	.co.statesize			=	sizeof(struct arc4_ctx),
 	.setkey				=	crypto_arc4_setkey,
 	.encrypt			=	crypto_arc4_crypt,
 	.decrypt			=	crypto_arc4_crypt,

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [v2 PATCH 4/4] crypto: algif_skcipher - Fix stream cipher chaining
  2023-11-30  9:55                     ` [v2 PATCH " Herbert Xu
                                         ` (2 preceding siblings ...)
  2023-11-30  9:56                       ` [v2 PATCH 3/4] crypto: arc4 - Add " Herbert Xu
@ 2023-11-30  9:56                       ` Herbert Xu
  2023-12-02  3:49                       ` [v3 PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now) Herbert Xu
  4 siblings, 0 replies; 50+ messages in thread
From: Herbert Xu @ 2023-11-30  9:56 UTC (permalink / raw)
  To: Eric Biggers, Linux Crypto Mailing List, Ard Biesheuvel

Unlike algif_aead, which is always issued in one go (thus limiting
the maximum size of the request), algif_skcipher has always allowed
unlimited input data by cutting it up as necessary and feeding
the fragments to the underlying algorithm one at a time.

However, because of deficiencies in the API, this has been broken
for most stream ciphers such as arc4 or chacha.  This is because
they have an internal state in addition to the IV that must be
preserved in order to continue processing.

Fix this by using the new skcipher state API.
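
For reference, a userspace sketch of the case this fixes
(illustrative only: error handling is omitted and it assumes arc4
is registered as "ecb(arc4)").  With this fix the second read()
continues the same keystream instead of producing garbage:

	#include <stdint.h>
	#include <string.h>
	#include <unistd.h>
	#include <sys/socket.h>
	#include <linux/if_alg.h>

	#ifndef SOL_ALG
	#define SOL_ALG 279
	#endif

	int main(void)
	{
		struct sockaddr_alg sa = {
			.salg_family = AF_ALG,
			.salg_type   = "skcipher",
			.salg_name   = "ecb(arc4)",
		};
		uint8_t key[16] = { 0 }, pt[32] = { 0 }, ct[32];
		char cbuf[CMSG_SPACE(sizeof(uint32_t))] = { 0 };
		struct iovec iov = { .iov_base = pt, .iov_len = 16 };
		struct msghdr msg = {
			.msg_iov = &iov, .msg_iovlen = 1,
			.msg_control = cbuf,
			.msg_controllen = sizeof(cbuf),
		};
		struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
		uint32_t op = ALG_OP_ENCRYPT;
		int tfmfd, opfd;

		tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
		bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa));
		setsockopt(tfmfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
		opfd = accept(tfmfd, NULL, 0);

		cmsg->cmsg_level = SOL_ALG;
		cmsg->cmsg_type  = ALG_SET_OP;
		cmsg->cmsg_len   = CMSG_LEN(sizeof(op));
		memcpy(CMSG_DATA(cmsg), &op, sizeof(op));

		/* First half, with more data to follow. */
		sendmsg(opfd, &msg, MSG_MORE);
		read(opfd, ct, 16);

		/* Second half continues the same keystream. */
		send(opfd, pt + 16, 16, 0);
		read(opfd, ct + 16, 16);

		return 0;
	}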

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/algif_skcipher.c |   71 +++++++++++++++++++++++++++++++++++++++++++++---
 include/crypto/if_alg.h |    2 +
 2 files changed, 70 insertions(+), 3 deletions(-)

diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index 9ada9b741af8..59dcc6fc74a2 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -47,6 +47,52 @@ static int skcipher_sendmsg(struct socket *sock, struct msghdr *msg,
 	return af_alg_sendmsg(sock, msg, size, ivsize);
 }
 
+static int algif_skcipher_export(struct sock *sk, struct skcipher_request *req)
+{
+	struct alg_sock *ask = alg_sk(sk);
+	struct crypto_skcipher *tfm;
+	struct af_alg_ctx *ctx;
+	struct alg_sock *pask;
+	unsigned statesize;
+	struct sock *psk;
+	int err;
+
+	if (!(req->base.flags & CRYPTO_SKCIPHER_REQ_NOTFINAL))
+		return 0;
+
+	ctx = ask->private;
+	psk = ask->parent;
+	pask = alg_sk(psk);
+	tfm = pask->private;
+
+	statesize = crypto_skcipher_statesize(tfm);
+	ctx->state = sock_kmalloc(sk, statesize, GFP_ATOMIC);
+	if (!ctx->state)
+		return -ENOMEM;
+
+	err = crypto_skcipher_export(req, ctx->state);
+	if (err) {
+		sock_kzfree_s(sk, ctx->state, statesize);
+		ctx->state = NULL;
+	}
+
+	return err;
+}
+
+static void algif_skcipher_done(void *data, int err)
+{
+	struct af_alg_async_req *areq = data;
+	struct sock *sk = areq->sk;
+
+	if (err)
+		goto out;
+
+	err = algif_skcipher_export(sk, &areq->cra_u.skcipher_req);
+
+out:
+	af_alg_async_cb(data, err);
+}
+
 static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 			     size_t ignored, int flags)
 {
@@ -58,6 +104,7 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 	struct crypto_skcipher *tfm = pask->private;
 	unsigned int bs = crypto_skcipher_chunksize(tfm);
 	struct af_alg_async_req *areq;
+	unsigned cflags = 0;
 	int err = 0;
 	size_t len = 0;
 
@@ -82,8 +129,10 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 	 * If more buffers are to be expected to be processed, process only
 	 * full block size buffers.
 	 */
-	if (ctx->more || len < ctx->used)
+	if (ctx->more || len < ctx->used) {
 		len -= len % bs;
+		cflags |= CRYPTO_SKCIPHER_REQ_NOTFINAL;
+	}
 
 	/*
 	 * Create a per request TX SGL for this request which tracks the
@@ -107,6 +156,16 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 	skcipher_request_set_crypt(&areq->cra_u.skcipher_req, areq->tsgl,
 				   areq->first_rsgl.sgl.sgt.sgl, len, ctx->iv);
 
+	if (ctx->state) {
+		err = crypto_skcipher_import(&areq->cra_u.skcipher_req,
+					     ctx->state);
+		sock_kzfree_s(sk, ctx->state, crypto_skcipher_statesize(tfm));
+		ctx->state = NULL;
+		if (err)
+			goto free;
+		cflags |= CRYPTO_SKCIPHER_REQ_CONT;
+	}
+
 	if (msg->msg_iocb && !is_sync_kiocb(msg->msg_iocb)) {
 		/* AIO operation */
 		sock_hold(sk);
@@ -116,8 +175,9 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 		areq->outlen = len;
 
 		skcipher_request_set_callback(&areq->cra_u.skcipher_req,
+					      cflags |
 					      CRYPTO_TFM_REQ_MAY_SLEEP,
-					      af_alg_async_cb, areq);
+					      algif_skcipher_done, areq);
 		err = ctx->enc ?
 			crypto_skcipher_encrypt(&areq->cra_u.skcipher_req) :
 			crypto_skcipher_decrypt(&areq->cra_u.skcipher_req);
@@ -130,6 +190,7 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 	} else {
 		/* Synchronous operation */
 		skcipher_request_set_callback(&areq->cra_u.skcipher_req,
+					      cflags |
 					      CRYPTO_TFM_REQ_MAY_SLEEP |
 					      CRYPTO_TFM_REQ_MAY_BACKLOG,
 					      crypto_req_done, &ctx->wait);
@@ -137,8 +198,11 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 			crypto_skcipher_encrypt(&areq->cra_u.skcipher_req) :
 			crypto_skcipher_decrypt(&areq->cra_u.skcipher_req),
 						 &ctx->wait);
-	}
 
+		if (!err)
+			err = algif_skcipher_export(
+				sk, &areq->cra_u.skcipher_req);
+	}
 
 free:
 	af_alg_free_resources(areq);
@@ -301,6 +365,7 @@ static void skcipher_sock_destruct(struct sock *sk)
 
 	af_alg_pull_tsgl(sk, ctx->used, NULL, 0);
 	sock_kzfree_s(sk, ctx->iv, crypto_skcipher_ivsize(tfm));
+	sock_kzfree_s(sk, ctx->state, crypto_skcipher_statesize(tfm));
 	sock_kfree_s(sk, ctx, ctx->len);
 	af_alg_release_parent(sk);
 }
diff --git a/include/crypto/if_alg.h b/include/crypto/if_alg.h
index 08b803a4fcde..78ecaf5db04c 100644
--- a/include/crypto/if_alg.h
+++ b/include/crypto/if_alg.h
@@ -121,6 +121,7 @@ struct af_alg_async_req {
  *
  * @tsgl_list:		Link to TX SGL
  * @iv:			IV for cipher operation
+ * @state:		Existing state for continuing operation
  * @aead_assoclen:	Length of AAD for AEAD cipher operations
  * @completion:		Work queue for synchronous operation
  * @used:		TX bytes sent to kernel. This variable is used to
@@ -142,6 +143,7 @@ struct af_alg_ctx {
 	struct list_head tsgl_list;
 
 	void *iv;
+	void *state;
 	size_t aead_assoclen;
 
 	struct crypto_wait wait;

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [v3 PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now)
  2023-11-30  9:55                     ` [v2 PATCH " Herbert Xu
                                         ` (3 preceding siblings ...)
  2023-11-30  9:56                       ` [v2 PATCH 4/4] crypto: algif_skcipher - Fix stream cipher chaining Herbert Xu
@ 2023-12-02  3:49                       ` Herbert Xu
  2023-12-02  3:50                         ` [v3 PATCH 1/4] crypto: skcipher - Add internal state support Herbert Xu
                                           ` (3 more replies)
  4 siblings, 4 replies; 50+ messages in thread
From: Herbert Xu @ 2023-12-02  3:49 UTC (permalink / raw)
  To: Eric Biggers; +Cc: Linux Crypto Mailing List, Ard Biesheuvel, Vadim Fedorenko

v3 updates the documentation for crypto_lskcipher_encrypt/decrypt.
v2 fixes a crash when no export/import functions are provided.

This series of patches adds the ability to process a skcipher
request in a piecemeal fashion, which is currently only possible
for selected algorithms such as CBC and CTR.
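
As a rough sketch of what this enables for kernel users
(illustrative only, following the same sequence algif_skcipher
uses in patch 4; the function name is made up, both requests are
assumed to have been prepared with skcipher_request_set_crypt()
on consecutive parts of the data, and the state buffer must hold
at least crypto_skcipher_statesize() bytes):

	static int example_split_encrypt(struct skcipher_request *req1,
					 struct skcipher_request *req2,
					 struct crypto_wait *wait,
					 u8 *state)
	{
		int err;

		/* First part: not final, so the state can be kept. */
		skcipher_request_set_callback(req1,
					      CRYPTO_SKCIPHER_REQ_NOTFINAL |
					      CRYPTO_TFM_REQ_MAY_SLEEP,
					      crypto_req_done, wait);
		err = crypto_wait_req(crypto_skcipher_encrypt(req1), wait);
		if (!err)
			err = crypto_skcipher_export(req1, state);
		if (err)
			return err;

		/* Second part: continue from the exported state. */
		skcipher_request_set_callback(req2,
					      CRYPTO_SKCIPHER_REQ_CONT |
					      CRYPTO_TFM_REQ_MAY_SLEEP,
					      crypto_req_done, wait);
		err = crypto_skcipher_import(req2, state);
		if (err)
			return err;

		return crypto_wait_req(crypto_skcipher_encrypt(req2), wait);
	}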

Herbert Xu (4):
  crypto: skcipher - Add internal state support
  crypto: skcipher - Make use of internal state
  crypto: arc4 - Add internal state
  crypto: algif_skcipher - Fix stream cipher chaining

 crypto/algif_skcipher.c   |  71 ++++++++++++++++++++++-
 crypto/arc4.c             |   8 ++-
 crypto/cbc.c              |   6 +-
 crypto/ecb.c              |  10 ++--
 crypto/lskcipher.c        |  42 +++++++++++---
 crypto/skcipher.c         |  80 +++++++++++++++++++++++++-
 include/crypto/if_alg.h   |   2 +
 include/crypto/skcipher.h | 117 +++++++++++++++++++++++++++++++++++---
 8 files changed, 306 insertions(+), 30 deletions(-)

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 50+ messages in thread

* [v3 PATCH 1/4] crypto: skcipher - Add internal state support
  2023-12-02  3:49                       ` [v3 PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now) Herbert Xu
@ 2023-12-02  3:50                         ` Herbert Xu
  2023-12-02  3:50                         ` [v3 PATCH 2/4] crypto: skcipher - Make use of internal state Herbert Xu
                                           ` (2 subsequent siblings)
  3 siblings, 0 replies; 50+ messages in thread
From: Herbert Xu @ 2023-12-02  3:50 UTC (permalink / raw)
  To: Eric Biggers, Linux Crypto Mailing List, Ard Biesheuvel,
	Vadim Fedorenko

Unlike chaining modes such as CBC, stream ciphers other than CTR
usually hold an internal state that must be preserved if the
operation is to be done piecemeal.  This has not been represented
in the API, resulting in the inability to split up stream cipher
operations.

This patch adds the basic representation of an internal state to
skcipher and lskcipher.  In the interest of backwards compatibility,
the default has been set such that existing users are assumed to
be operating in one go as opposed to piecemeal.

With the new API, each lskcipher/skcipher algorithm has a new
attribute called statesize.  For skcipher, this is the size of
the buffer that can be exported or imported similar to ahash.
For lskcipher, instead of providing a buffer of ivsize, the user
now has to provide a buffer of ivsize + statesize.

Each skcipher operation is assumed to be final as they are now,
but this may be overridden with a request flag.  When the override
occurs, the user may then export the partial state and reimport
it later.

For lskcipher operations this is reversed: an operation is assumed
not to be final, and the state will be exported, unless the FINAL
bit is set.  However, the CONT bit still has to be set for the
state to be used.
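
To illustrate the new lskcipher callback contract, here is a toy
sketch (not a real cipher, purely illustrative; its statesize
would be sizeof(struct toy_state)).  siv carries the IV followed
by the running state, CONT says that state is valid input, and
FINAL says no further chunks will follow:

	struct toy_state {
		u32 pos;		/* bytes processed so far */
	};

	static int toy_crypt(struct crypto_lskcipher *tfm, const u8 *src,
			     u8 *dst, unsigned len, u8 *siv, u32 flags)
	{
		struct toy_state *st =
			(void *)(siv + crypto_lskcipher_ivsize(tfm));
		unsigned i;

		/* First chunk of a chain (or one-shot): reset the state. */
		if (!(flags & CRYPTO_LSKCIPHER_FLAG_CONT))
			st->pos = 0;

		for (i = 0; i < len; i++)
			dst[i] = src[i] ^ (u8)st->pos++;

		/* A non-final call would return the number of leftover
		 * bytes it could not consume; this toy consumes all. */
		return 0;
	}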

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/arc4.c             |    2 -
 crypto/cbc.c              |    6 ++-
 crypto/ecb.c              |   10 +++--
 crypto/lskcipher.c        |   14 ++++---
 include/crypto/skcipher.h |   84 +++++++++++++++++++++++++++++++++++++++++-----
 5 files changed, 94 insertions(+), 22 deletions(-)

diff --git a/crypto/arc4.c b/crypto/arc4.c
index eb3590dc9282..2150f94e7d03 100644
--- a/crypto/arc4.c
+++ b/crypto/arc4.c
@@ -23,7 +23,7 @@ static int crypto_arc4_setkey(struct crypto_lskcipher *tfm, const u8 *in_key,
 }
 
 static int crypto_arc4_crypt(struct crypto_lskcipher *tfm, const u8 *src,
-			     u8 *dst, unsigned nbytes, u8 *iv, bool final)
+			     u8 *dst, unsigned nbytes, u8 *iv, u32 flags)
 {
 	struct arc4_ctx *ctx = crypto_lskcipher_ctx(tfm);
 
diff --git a/crypto/cbc.c b/crypto/cbc.c
index 28345b8d921c..eedddef9ce40 100644
--- a/crypto/cbc.c
+++ b/crypto/cbc.c
@@ -51,9 +51,10 @@ static int crypto_cbc_encrypt_inplace(struct crypto_lskcipher *tfm,
 }
 
 static int crypto_cbc_encrypt(struct crypto_lskcipher *tfm, const u8 *src,
-			      u8 *dst, unsigned len, u8 *iv, bool final)
+			      u8 *dst, unsigned len, u8 *iv, u32 flags)
 {
 	struct crypto_lskcipher **ctx = crypto_lskcipher_ctx(tfm);
+	bool final = flags & CRYPTO_LSKCIPHER_FLAG_FINAL;
 	struct crypto_lskcipher *cipher = *ctx;
 	int rem;
 
@@ -119,9 +120,10 @@ static int crypto_cbc_decrypt_inplace(struct crypto_lskcipher *tfm,
 }
 
 static int crypto_cbc_decrypt(struct crypto_lskcipher *tfm, const u8 *src,
-			      u8 *dst, unsigned len, u8 *iv, bool final)
+			      u8 *dst, unsigned len, u8 *iv, u32 flags)
 {
 	struct crypto_lskcipher **ctx = crypto_lskcipher_ctx(tfm);
+	bool final = flags & CRYPTO_LSKCIPHER_FLAG_FINAL;
 	struct crypto_lskcipher *cipher = *ctx;
 	int rem;
 
diff --git a/crypto/ecb.c b/crypto/ecb.c
index cc7625d1a475..e3a67789050e 100644
--- a/crypto/ecb.c
+++ b/crypto/ecb.c
@@ -32,22 +32,24 @@ static int crypto_ecb_crypt(struct crypto_cipher *cipher, const u8 *src,
 }
 
 static int crypto_ecb_encrypt2(struct crypto_lskcipher *tfm, const u8 *src,
-			       u8 *dst, unsigned len, u8 *iv, bool final)
+			       u8 *dst, unsigned len, u8 *iv, u32 flags)
 {
 	struct crypto_cipher **ctx = crypto_lskcipher_ctx(tfm);
 	struct crypto_cipher *cipher = *ctx;
 
-	return crypto_ecb_crypt(cipher, src, dst, len, final,
+	return crypto_ecb_crypt(cipher, src, dst, len,
+				flags & CRYPTO_LSKCIPHER_FLAG_FINAL,
 				crypto_cipher_alg(cipher)->cia_encrypt);
 }
 
 static int crypto_ecb_decrypt2(struct crypto_lskcipher *tfm, const u8 *src,
-			       u8 *dst, unsigned len, u8 *iv, bool final)
+			       u8 *dst, unsigned len, u8 *iv, u32 flags)
 {
 	struct crypto_cipher **ctx = crypto_lskcipher_ctx(tfm);
 	struct crypto_cipher *cipher = *ctx;
 
-	return crypto_ecb_crypt(cipher, src, dst, len, final,
+	return crypto_ecb_crypt(cipher, src, dst, len,
+				flags & CRYPTO_LSKCIPHER_FLAG_FINAL,
 				crypto_cipher_alg(cipher)->cia_decrypt);
 }
 
diff --git a/crypto/lskcipher.c b/crypto/lskcipher.c
index 9edc89730951..51bcf85070c7 100644
--- a/crypto/lskcipher.c
+++ b/crypto/lskcipher.c
@@ -88,7 +88,7 @@ EXPORT_SYMBOL_GPL(crypto_lskcipher_setkey);
 static int crypto_lskcipher_crypt_unaligned(
 	struct crypto_lskcipher *tfm, const u8 *src, u8 *dst, unsigned len,
 	u8 *iv, int (*crypt)(struct crypto_lskcipher *tfm, const u8 *src,
-			     u8 *dst, unsigned len, u8 *iv, bool final))
+			     u8 *dst, unsigned len, u8 *iv, u32 flags))
 {
 	unsigned ivsize = crypto_lskcipher_ivsize(tfm);
 	unsigned bs = crypto_lskcipher_blocksize(tfm);
@@ -119,7 +119,7 @@ static int crypto_lskcipher_crypt_unaligned(
 			chunk &= ~(cs - 1);
 
 		memcpy(p, src, chunk);
-		err = crypt(tfm, p, p, chunk, tiv, true);
+		err = crypt(tfm, p, p, chunk, tiv, CRYPTO_LSKCIPHER_FLAG_FINAL);
 		if (err)
 			goto out;
 
@@ -143,7 +143,7 @@ static int crypto_lskcipher_crypt(struct crypto_lskcipher *tfm, const u8 *src,
 				  int (*crypt)(struct crypto_lskcipher *tfm,
 					       const u8 *src, u8 *dst,
 					       unsigned len, u8 *iv,
-					       bool final))
+					       u32 flags))
 {
 	unsigned long alignmask = crypto_lskcipher_alignmask(tfm);
 	struct lskcipher_alg *alg = crypto_lskcipher_alg(tfm);
@@ -156,7 +156,7 @@ static int crypto_lskcipher_crypt(struct crypto_lskcipher *tfm, const u8 *src,
 		goto out;
 	}
 
-	ret = crypt(tfm, src, dst, len, iv, true);
+	ret = crypt(tfm, src, dst, len, iv, CRYPTO_LSKCIPHER_FLAG_FINAL);
 
 out:
 	return crypto_lskcipher_errstat(alg, ret);
@@ -198,7 +198,7 @@ static int crypto_lskcipher_crypt_sg(struct skcipher_request *req,
 				     int (*crypt)(struct crypto_lskcipher *tfm,
 						  const u8 *src, u8 *dst,
 						  unsigned len, u8 *iv,
-						  bool final))
+						  u32 flags))
 {
 	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
 	struct crypto_lskcipher **ctx = crypto_skcipher_ctx(skcipher);
@@ -210,7 +210,9 @@ static int crypto_lskcipher_crypt_sg(struct skcipher_request *req,
 
 	while (walk.nbytes) {
 		err = crypt(tfm, walk.src.virt.addr, walk.dst.virt.addr,
-			    walk.nbytes, walk.iv, walk.nbytes == walk.total);
+			    walk.nbytes, walk.iv,
+			    walk.nbytes == walk.total ?
+			    CRYPTO_LSKCIPHER_FLAG_FINAL : 0);
 		err = skcipher_walk_done(&walk, err);
 	}
 
diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
index ea18af48346b..5302f8f33afc 100644
--- a/include/crypto/skcipher.h
+++ b/include/crypto/skcipher.h
@@ -15,6 +15,17 @@
 #include <linux/string.h>
 #include <linux/types.h>
 
+/* Set this bit if the lskcipher operation is a continuation. */
+#define CRYPTO_LSKCIPHER_FLAG_CONT	0x00000001
+/* Set this bit if the lskcipher operation is final. */
+#define CRYPTO_LSKCIPHER_FLAG_FINAL	0x00000002
+/* The bit CRYPTO_TFM_REQ_MAY_SLEEP can also be set if needed. */
+
+/* Set this bit if the skcipher operation is a continuation. */
+#define CRYPTO_SKCIPHER_REQ_CONT	0x00000001
+/* Set this bit if the skcipher operation is not final. */
+#define CRYPTO_SKCIPHER_REQ_NOTFINAL	0x00000002
+
 struct scatterlist;
 
 /**
@@ -91,6 +102,7 @@ struct crypto_istat_cipher {
  *	    IV of exactly that size to perform the encrypt or decrypt operation.
  * @chunksize: Equal to the block size except for stream ciphers such as
  *	       CTR where it is set to the underlying block size.
+ * @statesize: Size of the internal state for the algorithm.
  * @stat: Statistics for cipher algorithm
  * @base: Definition of a generic crypto algorithm.
  */
@@ -99,6 +111,7 @@ struct crypto_istat_cipher {
 	unsigned int max_keysize;	\
 	unsigned int ivsize;		\
 	unsigned int chunksize;		\
+	unsigned int statesize;		\
 					\
 	SKCIPHER_ALG_COMMON_STAT	\
 					\
@@ -141,6 +154,17 @@ struct skcipher_alg_common SKCIPHER_ALG_COMMON;
  *	     be called in parallel with the same transformation object.
  * @decrypt: Decrypt a single block. This is a reverse counterpart to @encrypt
  *	     and the conditions are exactly the same.
+ * @export: Export partial state of the transformation. This function dumps the
+ *	    entire state of the ongoing transformation into a provided block of
+ *	    data so it can be @import 'ed back later on. This is useful in case
+ *	    you want to save partial result of the transformation after
+ *	    processing certain amount of data and reload this partial result
+ *	    multiple times later on for multiple re-use. No data processing
+ *	    happens at this point.
+ * @import: Import partial state of the transformation. This function loads the
+ *	    entire state of the ongoing transformation from a provided block of
+ *	    data so the transformation can continue from this point onward. No
+ *	    data processing happens at this point.
  * @init: Initialize the cryptographic transformation object. This function
  *	  is used to initialize the cryptographic transformation object.
  *	  This function is called only once at the instantiation time, right
@@ -170,6 +194,8 @@ struct skcipher_alg {
 	              unsigned int keylen);
 	int (*encrypt)(struct skcipher_request *req);
 	int (*decrypt)(struct skcipher_request *req);
+	int (*export)(struct skcipher_request *req, void *out);
+	int (*import)(struct skcipher_request *req, const void *in);
 	int (*init)(struct crypto_skcipher *tfm);
 	void (*exit)(struct crypto_skcipher *tfm);
 
@@ -200,6 +226,9 @@ struct skcipher_alg {
  *	     may be left over if length is not a multiple of blocks
  *	     and there is more to come (final == false).  The number of
  *	     left-over bytes should be returned in case of success.
+ *	     The siv field shall be as long as ivsize + statesize with
+ *	     the IV placed at the front.  The state will be used by the
+ *	     algorithm internally.
  * @decrypt: Decrypt a number of bytes. This is a reverse counterpart to
  *	     @encrypt and the conditions are exactly the same.
  * @init: Initialize the cryptographic transformation object. This function
@@ -215,9 +244,9 @@ struct lskcipher_alg {
 	int (*setkey)(struct crypto_lskcipher *tfm, const u8 *key,
 	              unsigned int keylen);
 	int (*encrypt)(struct crypto_lskcipher *tfm, const u8 *src,
-		       u8 *dst, unsigned len, u8 *iv, bool final);
+		       u8 *dst, unsigned len, u8 *siv, u32 flags);
 	int (*decrypt)(struct crypto_lskcipher *tfm, const u8 *src,
-		       u8 *dst, unsigned len, u8 *iv, bool final);
+		       u8 *dst, unsigned len, u8 *siv, u32 flags);
 	int (*init)(struct crypto_lskcipher *tfm);
 	void (*exit)(struct crypto_lskcipher *tfm);
 
@@ -496,6 +525,40 @@ static inline unsigned int crypto_lskcipher_chunksize(
 	return crypto_lskcipher_alg(tfm)->co.chunksize;
 }
 
+/**
+ * crypto_skcipher_statesize() - obtain state size
+ * @tfm: cipher handle
+ *
+ * Some algorithms cannot be chained with the IV alone.  They carry
+ * internal state which must be replicated if data is to be processed
+ * incrementally.  The size of that state can be obtained with this
+ * function.
+ *
+ * Return: state size in bytes
+ */
+static inline unsigned int crypto_skcipher_statesize(
+	struct crypto_skcipher *tfm)
+{
+	return crypto_skcipher_alg_common(tfm)->statesize;
+}
+
+/**
+ * crypto_lskcipher_statesize() - obtain state size
+ * @tfm: cipher handle
+ *
+ * Some algorithms cannot be chained with the IV alone.  They carry
+ * internal state which must be replicated if data is to be processed
+ * incrementally.  The size of that state can be obtained with this
+ * function.
+ *
+ * Return: state size in bytes
+ */
+static inline unsigned int crypto_lskcipher_statesize(
+	struct crypto_lskcipher *tfm)
+{
+	return crypto_lskcipher_alg(tfm)->co.statesize;
+}
+
 static inline unsigned int crypto_sync_skcipher_blocksize(
 	struct crypto_sync_skcipher *tfm)
 {
@@ -689,9 +752,10 @@ int crypto_skcipher_decrypt(struct skcipher_request *req);
  * @src: source buffer
  * @dst: destination buffer
  * @len: number of bytes to process
- * @iv: IV for the cipher operation which must comply with the IV size defined
- *      by crypto_lskcipher_ivsize
- *
+ * @siv: IV + state for the cipher operation.  The length of the IV must
+ *	 comply with the IV size defined by crypto_lskcipher_ivsize.  The
+ *	 IV is then followed with a buffer with the length as specified by
+ *	 crypto_lskcipher_statesize.
  * Encrypt plaintext data using the lskcipher handle.
  *
  * Return: >=0 if the cipher operation was successful, if positive
@@ -699,7 +763,7 @@ int crypto_skcipher_decrypt(struct skcipher_request *req);
  *	   < 0 if an error occurred
  */
 int crypto_lskcipher_encrypt(struct crypto_lskcipher *tfm, const u8 *src,
-			     u8 *dst, unsigned len, u8 *iv);
+			     u8 *dst, unsigned len, u8 *siv);
 
 /**
  * crypto_lskcipher_decrypt() - decrypt ciphertext
@@ -707,8 +771,10 @@ int crypto_lskcipher_encrypt(struct crypto_lskcipher *tfm, const u8 *src,
  * @src: source buffer
  * @dst: destination buffer
  * @len: number of bytes to process
- * @iv: IV for the cipher operation which must comply with the IV size defined
- *      by crypto_lskcipher_ivsize
+ * @siv: IV + state for the cipher operation.  The length of the IV must
+ *	 comply with the IV size defined by crypto_lskcipher_ivsize.  The
+ *	 IV is then followed with a buffer with the length as specified by
+ *	 crypto_lskcipher_statesize.
  *
  * Decrypt ciphertext data using the lskcipher handle.
  *
@@ -717,7 +783,7 @@ int crypto_lskcipher_encrypt(struct crypto_lskcipher *tfm, const u8 *src,
  *	   < 0 if an error occurred
  */
 int crypto_lskcipher_decrypt(struct crypto_lskcipher *tfm, const u8 *src,
-			     u8 *dst, unsigned len, u8 *iv);
+			     u8 *dst, unsigned len, u8 *siv);
 
 /**
  * DOC: Symmetric Key Cipher Request Handle

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [v3 PATCH 2/4] crypto: skcipher - Make use of internal state
  2023-12-02  3:49                       ` [v3 PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now) Herbert Xu
  2023-12-02  3:50                         ` [v3 PATCH 1/4] crypto: skcipher - Add internal state support Herbert Xu
@ 2023-12-02  3:50                         ` Herbert Xu
  2023-12-02  3:50                         ` [v3 PATCH 3/4] crypto: arc4 - Add " Herbert Xu
  2023-12-02  3:50                         ` [v3 PATCH 4/4] crypto: algif_skcipher - Fix stream cipher chaining Herbert Xu
  3 siblings, 0 replies; 50+ messages in thread
From: Herbert Xu @ 2023-12-02  3:50 UTC (permalink / raw)
  To: Eric Biggers, Linux Crypto Mailing List, Ard Biesheuvel,
	Vadim Fedorenko

This patch adds code to the skcipher/lskcipher API to make use
of the internal state if present.  In particular, the skcipher
wrapper around lskcipher will allocate a buffer for the IV/state
and feed that to the underlying lskcipher algorithm.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/lskcipher.c        |   34 ++++++++++++++++---
 crypto/skcipher.c         |   80 ++++++++++++++++++++++++++++++++++++++++++++--
 include/crypto/skcipher.h |   33 ++++++++++++++++++
 3 files changed, 139 insertions(+), 8 deletions(-)

diff --git a/crypto/lskcipher.c b/crypto/lskcipher.c
index 51bcf85070c7..e6b87787bd64 100644
--- a/crypto/lskcipher.c
+++ b/crypto/lskcipher.c
@@ -90,6 +90,7 @@ static int crypto_lskcipher_crypt_unaligned(
 	u8 *iv, int (*crypt)(struct crypto_lskcipher *tfm, const u8 *src,
 			     u8 *dst, unsigned len, u8 *iv, u32 flags))
 {
+	unsigned statesize = crypto_lskcipher_statesize(tfm);
 	unsigned ivsize = crypto_lskcipher_ivsize(tfm);
 	unsigned bs = crypto_lskcipher_blocksize(tfm);
 	unsigned cs = crypto_lskcipher_chunksize(tfm);
@@ -104,7 +105,7 @@ static int crypto_lskcipher_crypt_unaligned(
 	if (!tiv)
 		return -ENOMEM;
 
-	memcpy(tiv, iv, ivsize);
+	memcpy(tiv, iv, ivsize + statesize);
 
 	p = kmalloc(PAGE_SIZE, GFP_ATOMIC);
 	err = -ENOMEM;
@@ -132,7 +133,7 @@ static int crypto_lskcipher_crypt_unaligned(
 	err = len ? -EINVAL : 0;
 
 out:
-	memcpy(iv, tiv, ivsize);
+	memcpy(iv, tiv, ivsize + statesize);
 	kfree_sensitive(p);
 	kfree_sensitive(tiv);
 	return err;
@@ -197,25 +198,45 @@ EXPORT_SYMBOL_GPL(crypto_lskcipher_decrypt);
 static int crypto_lskcipher_crypt_sg(struct skcipher_request *req,
 				     int (*crypt)(struct crypto_lskcipher *tfm,
 						  const u8 *src, u8 *dst,
-						  unsigned len, u8 *iv,
+						  unsigned len, u8 *ivs,
 						  u32 flags))
 {
 	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
 	struct crypto_lskcipher **ctx = crypto_skcipher_ctx(skcipher);
+	u8 *ivs = skcipher_request_ctx(req);
 	struct crypto_lskcipher *tfm = *ctx;
 	struct skcipher_walk walk;
+	unsigned ivsize;
+	u32 flags;
 	int err;
 
+	ivsize = crypto_lskcipher_ivsize(tfm);
+	ivs = PTR_ALIGN(ivs, crypto_skcipher_alignmask(skcipher) + 1);
+
+	flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP;
+
+	if (req->base.flags & CRYPTO_SKCIPHER_REQ_CONT)
+		flags |= CRYPTO_LSKCIPHER_FLAG_CONT;
+	else
+		memcpy(ivs, req->iv, ivsize);
+
+	if (!(req->base.flags & CRYPTO_SKCIPHER_REQ_NOTFINAL))
+		flags |= CRYPTO_LSKCIPHER_FLAG_FINAL;
+
 	err = skcipher_walk_virt(&walk, req, false);
 
 	while (walk.nbytes) {
 		err = crypt(tfm, walk.src.virt.addr, walk.dst.virt.addr,
-			    walk.nbytes, walk.iv,
-			    walk.nbytes == walk.total ?
-			    CRYPTO_LSKCIPHER_FLAG_FINAL : 0);
+			    walk.nbytes, ivs,
+			    flags & ~(walk.nbytes == walk.total ?
+				      0 : CRYPTO_LSKCIPHER_FLAG_FINAL));
 		err = skcipher_walk_done(&walk, err);
+		flags |= CRYPTO_LSKCIPHER_FLAG_CONT;
 	}
 
+	if (flags & CRYPTO_LSKCIPHER_FLAG_FINAL)
+		memcpy(req->iv, ivs, ivsize);
+
 	return err;
 }
 
@@ -278,6 +299,7 @@ static void __maybe_unused crypto_lskcipher_show(
 	seq_printf(m, "max keysize  : %u\n", skcipher->co.max_keysize);
 	seq_printf(m, "ivsize       : %u\n", skcipher->co.ivsize);
 	seq_printf(m, "chunksize    : %u\n", skcipher->co.chunksize);
+	seq_printf(m, "statesize    : %u\n", skcipher->co.statesize);
 }
 
 static int __maybe_unused crypto_lskcipher_report(
diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index ac8b8c042654..bc70e159d27d 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -698,6 +698,64 @@ int crypto_skcipher_decrypt(struct skcipher_request *req)
 }
 EXPORT_SYMBOL_GPL(crypto_skcipher_decrypt);
 
+static int crypto_lskcipher_export(struct skcipher_request *req, void *out)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	u8 *ivs = skcipher_request_ctx(req);
+
+	ivs = PTR_ALIGN(ivs, crypto_skcipher_alignmask(tfm) + 1);
+
+	memcpy(out, ivs + crypto_skcipher_ivsize(tfm),
+	       crypto_skcipher_statesize(tfm));
+
+	return 0;
+}
+
+static int crypto_lskcipher_import(struct skcipher_request *req, const void *in)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	u8 *ivs = skcipher_request_ctx(req);
+
+	ivs = PTR_ALIGN(ivs, crypto_skcipher_alignmask(tfm) + 1);
+
+	memcpy(ivs + crypto_skcipher_ivsize(tfm), in,
+	       crypto_skcipher_statesize(tfm));
+
+	return 0;
+}
+
+static int skcipher_noexport(struct skcipher_request *req, void *out)
+{
+	return 0;
+}
+
+static int skcipher_noimport(struct skcipher_request *req, const void *in)
+{
+	return 0;
+}
+
+int crypto_skcipher_export(struct skcipher_request *req, void *out)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
+
+	if (alg->co.base.cra_type != &crypto_skcipher_type)
+		return crypto_lskcipher_export(req, out);
+	return alg->export(req, out);
+}
+EXPORT_SYMBOL_GPL(crypto_skcipher_export);
+
+int crypto_skcipher_import(struct skcipher_request *req, const void *in)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
+
+	if (alg->co.base.cra_type != &crypto_skcipher_type)
+		return crypto_lskcipher_import(req, in);
+	return alg->import(req, in);
+}
+EXPORT_SYMBOL_GPL(crypto_skcipher_import);
+
 static void crypto_skcipher_exit_tfm(struct crypto_tfm *tfm)
 {
 	struct crypto_skcipher *skcipher = __crypto_skcipher_cast(tfm);
@@ -713,8 +771,17 @@ static int crypto_skcipher_init_tfm(struct crypto_tfm *tfm)
 
 	skcipher_set_needkey(skcipher);
 
-	if (tfm->__crt_alg->cra_type != &crypto_skcipher_type)
+	if (tfm->__crt_alg->cra_type != &crypto_skcipher_type) {
+		unsigned am = crypto_skcipher_alignmask(skcipher);
+		unsigned reqsize;
+
+		reqsize = am & ~(crypto_tfm_ctx_alignment() - 1);
+		reqsize += crypto_skcipher_ivsize(skcipher);
+		reqsize += crypto_skcipher_statesize(skcipher);
+		crypto_skcipher_set_reqsize(skcipher, reqsize);
+
 		return crypto_init_lskcipher_ops_sg(tfm);
+	}
 
 	if (alg->exit)
 		skcipher->base.exit = crypto_skcipher_exit_tfm;
@@ -756,6 +823,7 @@ static void crypto_skcipher_show(struct seq_file *m, struct crypto_alg *alg)
 	seq_printf(m, "ivsize       : %u\n", skcipher->ivsize);
 	seq_printf(m, "chunksize    : %u\n", skcipher->chunksize);
 	seq_printf(m, "walksize     : %u\n", skcipher->walksize);
+	seq_printf(m, "statesize    : %u\n", skcipher->statesize);
 }
 
 static int __maybe_unused crypto_skcipher_report(
@@ -870,7 +938,9 @@ int skcipher_prepare_alg_common(struct skcipher_alg_common *alg)
 	struct crypto_istat_cipher *istat = skcipher_get_stat_common(alg);
 	struct crypto_alg *base = &alg->base;
 
-	if (alg->ivsize > PAGE_SIZE / 8 || alg->chunksize > PAGE_SIZE / 8)
+	if (alg->ivsize > PAGE_SIZE / 8 || alg->chunksize > PAGE_SIZE / 8 ||
+	    alg->statesize > PAGE_SIZE / 2 ||
+	    (alg->ivsize + alg->statesize) > PAGE_SIZE / 2)
 		return -EINVAL;
 
 	if (!alg->chunksize)
@@ -899,6 +969,12 @@ static int skcipher_prepare_alg(struct skcipher_alg *alg)
 	if (!alg->walksize)
 		alg->walksize = alg->chunksize;
 
+	if (!alg->statesize) {
+		alg->import = skcipher_noimport;
+		alg->export = skcipher_noexport;
+	} else if (!(alg->import && alg->export))
+		return -EINVAL;
+
 	base->cra_type = &crypto_skcipher_type;
 	base->cra_flags |= CRYPTO_ALG_TYPE_SKCIPHER;
 
diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
index 5302f8f33afc..f881740df194 100644
--- a/include/crypto/skcipher.h
+++ b/include/crypto/skcipher.h
@@ -746,6 +746,39 @@ int crypto_skcipher_encrypt(struct skcipher_request *req);
  */
 int crypto_skcipher_decrypt(struct skcipher_request *req);
 
+/**
+ * crypto_skcipher_export() - export partial state
+ * @req: reference to the skcipher_request handle that holds all information
+ *	 needed to perform the operation
+ * @out: output buffer of sufficient size that can hold the state
+ *
+ * Export partial state of the transformation. This function dumps the
+ * entire state of the ongoing transformation into a provided block of
+ * data so it can be @import 'ed back later on. This is useful in case
+ * you want to save partial result of the transformation after
+ * processing certain amount of data and reload this partial result
+ * multiple times later on for multiple re-use. No data processing
+ * happens at this point.
+ *
+ * Return: 0 if the cipher operation was successful; < 0 if an error occurred
+ */
+int crypto_skcipher_export(struct skcipher_request *req, void *out);
+
+/**
+ * crypto_skcipher_import() - import partial state
+ * @req: reference to the skcipher_request handle that holds all information
+ *	 needed to perform the operation
+ * @in: buffer holding the state
+ *
+ * Import partial state of the transformation. This function loads the
+ * entire state of the ongoing transformation from a provided block of
+ * data so the transformation can continue from this point onward. No
+ * data processing happens at this point.
+ *
+ * Return: 0 if the cipher operation was successful; < 0 if an error occurred
+ */
+int crypto_skcipher_import(struct skcipher_request *req, const void *in);
+
 /**
  * crypto_lskcipher_encrypt() - encrypt plaintext
  * @tfm: lskcipher handle

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [v3 PATCH 3/4] crypto: arc4 - Add internal state
  2023-12-02  3:49                       ` [v3 PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now) Herbert Xu
  2023-12-02  3:50                         ` [v3 PATCH 1/4] crypto: skcipher - Add internal state support Herbert Xu
  2023-12-02  3:50                         ` [v3 PATCH 2/4] crypto: skcipher - Make use of internal state Herbert Xu
@ 2023-12-02  3:50                         ` Herbert Xu
  2023-12-02  3:50                         ` [v3 PATCH 4/4] crypto: algif_skcipher - Fix stream cipher chaining Herbert Xu
  3 siblings, 0 replies; 50+ messages in thread
From: Herbert Xu @ 2023-12-02  3:50 UTC (permalink / raw)
  To: Eric Biggers, Linux Crypto Mailing List, Ard Biesheuvel,
	Vadim Fedorenko

The arc4 algorithm has always had internal state.  It's been buggy
from day one in that the state has been stored in the shared tfm
object.  That means two users sharing the same tfm will end up
affecting each other's output, or worse, they may end up with the
same output.

Fix this by declaring an internal state and storing the state there
instead of within the tfm context.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/arc4.c |    8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/crypto/arc4.c b/crypto/arc4.c
index 2150f94e7d03..e285bfcef667 100644
--- a/crypto/arc4.c
+++ b/crypto/arc4.c
@@ -23,10 +23,15 @@ static int crypto_arc4_setkey(struct crypto_lskcipher *tfm, const u8 *in_key,
 }
 
 static int crypto_arc4_crypt(struct crypto_lskcipher *tfm, const u8 *src,
-			     u8 *dst, unsigned nbytes, u8 *iv, u32 flags)
+			     u8 *dst, unsigned nbytes, u8 *siv, u32 flags)
 {
 	struct arc4_ctx *ctx = crypto_lskcipher_ctx(tfm);
 
+	if (!(flags & CRYPTO_LSKCIPHER_FLAG_CONT))
+		memcpy(siv, ctx, sizeof(*ctx));
+
+	ctx = (struct arc4_ctx *)siv;
+
 	arc4_crypt(ctx, dst, src, nbytes);
 	return 0;
 }
@@ -48,6 +53,7 @@ static struct lskcipher_alg arc4_alg = {
 	.co.base.cra_module		=	THIS_MODULE,
 	.co.min_keysize			=	ARC4_MIN_KEY_SIZE,
 	.co.max_keysize			=	ARC4_MAX_KEY_SIZE,
+	.co.statesize			=	sizeof(struct arc4_ctx),
 	.setkey				=	crypto_arc4_setkey,
 	.encrypt			=	crypto_arc4_crypt,
 	.decrypt			=	crypto_arc4_crypt,

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [v3 PATCH 4/4] crypto: algif_skcipher - Fix stream cipher chaining
  2023-12-02  3:49                       ` [v3 PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now) Herbert Xu
                                           ` (2 preceding siblings ...)
  2023-12-02  3:50                         ` [v3 PATCH 3/4] crypto: arc4 - Add " Herbert Xu
@ 2023-12-02  3:50                         ` Herbert Xu
  2023-12-10 13:53                           ` kernel test robot
  3 siblings, 1 reply; 50+ messages in thread
From: Herbert Xu @ 2023-12-02  3:50 UTC (permalink / raw)
  To: Eric Biggers, Linux Crypto Mailing List, Ard Biesheuvel,
	Vadim Fedorenko

Unlike algif_aead, which is always issued in one go (thus limiting
the maximum size of the request), algif_skcipher has always allowed
unlimited input data by cutting it up as necessary and feeding
the fragments to the underlying algorithm one at a time.

However, because of deficiencies in the API, this has been broken
for most stream ciphers such as arc4 or chacha.  This is because
they have an internal state in addition to the IV that must be
preserved in order to continue processing.

Fix this by using the new skcipher state API.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/algif_skcipher.c |   71 +++++++++++++++++++++++++++++++++++++++++++++---
 include/crypto/if_alg.h |    2 +
 2 files changed, 70 insertions(+), 3 deletions(-)

diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index 9ada9b741af8..59dcc6fc74a2 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -47,6 +47,52 @@ static int skcipher_sendmsg(struct socket *sock, struct msghdr *msg,
 	return af_alg_sendmsg(sock, msg, size, ivsize);
 }
 
+static int algif_skcipher_export(struct sock *sk, struct skcipher_request *req)
+{
+	struct alg_sock *ask = alg_sk(sk);
+	struct crypto_skcipher *tfm;
+	struct af_alg_ctx *ctx;
+	struct alg_sock *pask;
+	unsigned statesize;
+	struct sock *psk;
+	int err;
+
+	if (!(req->base.flags & CRYPTO_SKCIPHER_REQ_NOTFINAL))
+		return 0;
+
+	ctx = ask->private;
+	psk = ask->parent;
+	pask = alg_sk(psk);
+	tfm = pask->private;
+
+	statesize = crypto_skcipher_statesize(tfm);
+	ctx->state = sock_kmalloc(sk, statesize, GFP_ATOMIC);
+	if (!ctx->state)
+		return -ENOMEM;
+
+	err = crypto_skcipher_export(req, ctx->state);
+	if (err) {
+		sock_kzfree_s(sk, ctx->state, statesize);
+		ctx->state = NULL;
+	}
+
+	return err;
+}
+
+static void algif_skcipher_done(void *data, int err)
+{
+	struct af_alg_async_req *areq = data;
+	struct sock *sk = areq->sk;
+
+	if (err)
+		goto out;
+
+	err = algif_skcipher_export(sk, &areq->cra_u.skcipher_req);
+
+out:
+	af_alg_async_cb(data, err);
+}
+
 static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 			     size_t ignored, int flags)
 {
@@ -58,6 +104,7 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 	struct crypto_skcipher *tfm = pask->private;
 	unsigned int bs = crypto_skcipher_chunksize(tfm);
 	struct af_alg_async_req *areq;
+	unsigned cflags = 0;
 	int err = 0;
 	size_t len = 0;
 
@@ -82,8 +129,10 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 	 * If more buffers are to be expected to be processed, process only
 	 * full block size buffers.
 	 */
-	if (ctx->more || len < ctx->used)
+	if (ctx->more || len < ctx->used) {
 		len -= len % bs;
+		cflags |= CRYPTO_SKCIPHER_REQ_NOTFINAL;
+	}
 
 	/*
 	 * Create a per request TX SGL for this request which tracks the
@@ -107,6 +156,16 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 	skcipher_request_set_crypt(&areq->cra_u.skcipher_req, areq->tsgl,
 				   areq->first_rsgl.sgl.sgt.sgl, len, ctx->iv);
 
+	if (ctx->state) {
+		err = crypto_skcipher_import(&areq->cra_u.skcipher_req,
+					     ctx->state);
+		sock_kzfree_s(sk, ctx->state, crypto_skcipher_statesize(tfm));
+		ctx->state = NULL;
+		if (err)
+			goto free;
+		cflags |= CRYPTO_SKCIPHER_REQ_CONT;
+	}
+
 	if (msg->msg_iocb && !is_sync_kiocb(msg->msg_iocb)) {
 		/* AIO operation */
 		sock_hold(sk);
@@ -116,8 +175,9 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 		areq->outlen = len;
 
 		skcipher_request_set_callback(&areq->cra_u.skcipher_req,
+					      cflags |
 					      CRYPTO_TFM_REQ_MAY_SLEEP,
-					      af_alg_async_cb, areq);
+					      algif_skcipher_done, areq);
 		err = ctx->enc ?
 			crypto_skcipher_encrypt(&areq->cra_u.skcipher_req) :
 			crypto_skcipher_decrypt(&areq->cra_u.skcipher_req);
@@ -130,6 +190,7 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 	} else {
 		/* Synchronous operation */
 		skcipher_request_set_callback(&areq->cra_u.skcipher_req,
+					      cflags |
 					      CRYPTO_TFM_REQ_MAY_SLEEP |
 					      CRYPTO_TFM_REQ_MAY_BACKLOG,
 					      crypto_req_done, &ctx->wait);
@@ -137,8 +198,11 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 			crypto_skcipher_encrypt(&areq->cra_u.skcipher_req) :
 			crypto_skcipher_decrypt(&areq->cra_u.skcipher_req),
 						 &ctx->wait);
-	}
 
+		if (!err)
+			err = algif_skcipher_export(
+				sk, &areq->cra_u.skcipher_req);
+	}
 
 free:
 	af_alg_free_resources(areq);
@@ -301,6 +365,7 @@ static void skcipher_sock_destruct(struct sock *sk)
 
 	af_alg_pull_tsgl(sk, ctx->used, NULL, 0);
 	sock_kzfree_s(sk, ctx->iv, crypto_skcipher_ivsize(tfm));
+	sock_kzfree_s(sk, ctx->state, crypto_skcipher_statesize(tfm));
 	sock_kfree_s(sk, ctx, ctx->len);
 	af_alg_release_parent(sk);
 }
diff --git a/include/crypto/if_alg.h b/include/crypto/if_alg.h
index 08b803a4fcde..78ecaf5db04c 100644
--- a/include/crypto/if_alg.h
+++ b/include/crypto/if_alg.h
@@ -121,6 +121,7 @@ struct af_alg_async_req {
  *
  * @tsgl_list:		Link to TX SGL
  * @iv:			IV for cipher operation
+ * @state:		Existing state for continuing operation
  * @aead_assoclen:	Length of AAD for AEAD cipher operations
  * @completion:		Work queue for synchronous operation
  * @used:		TX bytes sent to kernel. This variable is used to
@@ -142,6 +143,7 @@ struct af_alg_ctx {
 	struct list_head tsgl_list;
 
 	void *iv;
+	void *state;
 	size_t aead_assoclen;
 
 	struct crypto_wait wait;

^ permalink raw reply related	[flat|nested] 50+ messages in thread

* Re: [PATCH 4/8] crypto: skcipher - Add lskcipher
  2023-09-22  3:10       ` Eric Biggers
  2023-11-17  5:19         ` Herbert Xu
@ 2023-12-05  8:41         ` Herbert Xu
  2023-12-05 20:17           ` Eric Biggers
  1 sibling, 1 reply; 50+ messages in thread
From: Herbert Xu @ 2023-12-05  8:41 UTC (permalink / raw)
  To: Eric Biggers; +Cc: Linux Crypto Mailing List, Ard Biesheuvel

On Thu, Sep 21, 2023 at 08:10:30PM -0700, Eric Biggers wrote:
> 
> Yes, wide-block modes such as Adiantum and HCTR2 require multiple passes over
> the data.  As do SIV modes such as AES-GCM-SIV (though AES-GCM-SIV isn't yet
> supported by the kernel, and it would be an "aead", not an "skcipher").

Right, AEAD algorithms have never supported incremental processing,
as CCM, one of the first algorithms, required two-pass processing.

We could support incremental processing if we really wanted to.  It
would require a model where the user passes the data to the API twice
(or more times if future algorithms require it).  However, I see no
pressing need for this, so I'm happy with just marking such algorithms
as unsupported by algif_skcipher for now.  There is also the
alternative of adding an AEAD-like mode to algif_skcipher for these
algorithms, but again I don't see the need to do this.

As such I'm going to add a field to indicate that adiantum and hctr2
cannot be used by algif_skcipher.

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 4/8] crypto: skcipher - Add lskcipher
  2023-12-05  8:41         ` [PATCH 4/8] crypto: skcipher - Add lskcipher Herbert Xu
@ 2023-12-05 20:17           ` Eric Biggers
  2023-12-06  1:44             ` Herbert Xu
  0 siblings, 1 reply; 50+ messages in thread
From: Eric Biggers @ 2023-12-05 20:17 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Linux Crypto Mailing List, Ard Biesheuvel

On Tue, Dec 05, 2023 at 04:41:12PM +0800, Herbert Xu wrote:
> On Thu, Sep 21, 2023 at 08:10:30PM -0700, Eric Biggers wrote:
> > 
> > Yes, wide-block modes such as Adiantum and HCTR2 require multiple passes over
> > the data.  As do SIV modes such as AES-GCM-SIV (though AES-GCM-SIV isn't yet
> > supported by the kernel, and it would be an "aead", not an "skcipher").
> 
> Right, AEAD algorithms have never supported incremental processing,
> as CCM, one of the first algorithms, required two-pass processing.
> 
> We could support incremental processing if we really wanted to.  It
> would require a model where the user passes the data to the API twice
> (or more times if future algorithms require it).  However, I see no
> pressing need for this, so I'm happy with just marking such algorithms
> as unsupported by algif_skcipher for now.  There is also the
> alternative of adding an AEAD-like mode to algif_skcipher for these
> algorithms, but again I don't see the need to do this.
> 
> As such I'm going to add a field to indicate that adiantum and hctr2
> cannot be used by algif_skcipher.
> 

Note that 'cryptsetup benchmark' uses AF_ALG, and there are recommendations
floating around the internet to use it to benchmark the various algorithms that
can be used with dm-crypt, including Adiantum.  Perhaps it's a bit late to take
away support for algorithms that are already supported?  AFAICS, algif_skcipher
only splits up operations if userspace does something like write(8192) followed
by read(4096), i.e. reading less than it wrote.  Why not just make
algif_skcipher return an error in that case if the algorithm doesn't support it?
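
A minimal userspace sketch of that pattern, purely illustrative: it
assumes an "arc4" skcipher is exposed through AF_ALG, uses an all-zero
16-byte key and omits all error handling.  Writing 8192 bytes and then
reading them back in two 4096-byte chunks is exactly the case where
algif_skcipher has to split the operation:

	#include <linux/if_alg.h>
	#include <sys/socket.h>
	#include <unistd.h>

	#ifndef SOL_ALG
	#define SOL_ALG 279
	#endif

	int main(void)
	{
		struct sockaddr_alg sa = {
			.salg_family = AF_ALG,
			.salg_type   = "skcipher",
			.salg_name   = "arc4",
		};
		char cbuf[CMSG_SPACE(sizeof(int))] = { 0 };
		char key[16] = { 0 }, buf[8192] = { 0 };
		struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
		struct msghdr msg = {
			.msg_control = cbuf, .msg_controllen = sizeof(cbuf),
			.msg_iov = &iov, .msg_iovlen = 1,
		};
		struct cmsghdr *cmsg;
		int tfmfd, opfd;

		tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
		bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa));
		setsockopt(tfmfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
		opfd = accept(tfmfd, NULL, 0);

		/* Request encryption and send the full 8192 bytes. */
		cmsg = CMSG_FIRSTHDR(&msg);
		cmsg->cmsg_level = SOL_ALG;
		cmsg->cmsg_type = ALG_SET_OP;
		cmsg->cmsg_len = CMSG_LEN(sizeof(int));
		*(int *)CMSG_DATA(cmsg) = ALG_OP_ENCRYPT;
		sendmsg(opfd, &msg, 0);

		/* Read back less than was written: the second read can
		 * only continue the stream correctly if the cipher state
		 * is carried across the two kernel-side requests. */
		read(opfd, buf, 4096);
		read(opfd, buf + 4096, 4096);

		close(opfd);
		close(tfmfd);
		return 0;
	}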

- Eric

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 4/8] crypto: skcipher - Add lskcipher
  2023-12-05 20:17           ` Eric Biggers
@ 2023-12-06  1:44             ` Herbert Xu
  0 siblings, 0 replies; 50+ messages in thread
From: Herbert Xu @ 2023-12-06  1:44 UTC (permalink / raw)
  To: Eric Biggers; +Cc: Linux Crypto Mailing List, Ard Biesheuvel

On Tue, Dec 05, 2023 at 12:17:57PM -0800, Eric Biggers wrote:
>
> Note that 'cryptsetup benchmark' uses AF_ALG, and there are recommendations
> floating around the internet to use it to benchmark the various algorithms that
> can be used with dm-crypt, including Adiantum.  Perhaps it's a bit late to take
> away support for algorithms that are already supported?  AFAICS, algif_skcipher
> only splits up operations if userspace does something like write(8192) followed
> by read(4096), i.e. reading less than it wrote.  Why not just make
> algif_skcipher return an error in that case if the algorithm doesn't support it?

Yes that should be possible to implement.

Also I've changed my mind on the two-pass strategy.  I think
I am going to try to implement it at least internally in the
layer between skcipher and lskcipher.  Let me see whether this
is worth pursuing or not for adiantum.

The reason is that after everything else switches over to
lskcipher, it'd be silly to have adiantum remain as skcipher
only.  But if adiantum moves over to lskcipher, then we'd need
to disable the skcipher version of it or linearise the input.

Both seem unpalatable and perhaps a two-pass approach won't
be that bad.

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [v3 PATCH 4/4] crypto: algif_skcipher - Fix stream cipher chaining
  2023-12-02  3:50                         ` [v3 PATCH 4/4] crypto: algif_skcipher - Fix stream cipher chaining Herbert Xu
@ 2023-12-10 13:53                           ` kernel test robot
  0 siblings, 0 replies; 50+ messages in thread
From: kernel test robot @ 2023-12-10 13:53 UTC (permalink / raw)
  To: Herbert Xu
  Cc: oe-lkp, lkp, linux-crypto, ltp, Eric Biggers, Ard Biesheuvel,
	Vadim Fedorenko, oliver.sang



Hello,

kernel test robot noticed "WARNING:at_net/core/sock.c:#sock_kzfree_s" on:

commit: 29531d406c4f2b0f07b1d9eb4e24f5ac6b44bc05 ("[v3 PATCH 4/4] crypto: algif_skcipher - Fix stream cipher chaining")
url: https://github.com/intel-lab-lkp/linux/commits/Herbert-Xu/crypto-skcipher-Add-internal-state-support/20231202-123508
base: https://git.kernel.org/cgit/linux/kernel/git/herbert/cryptodev-2.6.git master
patch link: https://lore.kernel.org/all/E1r9H1M-00612B-10@formenos.hmeau.com/
patch subject: [v3 PATCH 4/4] crypto: algif_skcipher - Fix stream cipher chaining

in testcase: ltp
version: ltp-x86_64-14c1f76-1_20230715
with following parameters:

	test: crypto



compiler: gcc-12
test machine: 8 threads 1 sockets Intel(R) Core(TM) i7-3770K CPU @ 3.50GHz (Ivy Bridge) with 16G memory

(please refer to attached dmesg/kmsg for entire log/backtrace)



If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <oliver.sang@intel.com>
| Closes: https://lore.kernel.org/oe-lkp/202312101716.7cbf38c4-oliver.sang@intel.com



kern  :warn  : [  242.028749] ------------[ cut here ]------------
kern  :warn  : [  242.029073] WARNING: CPU: 3 PID: 3733 at net/core/sock.c:2697 sock_kzfree_s+0x38/0x40
kern  :warn  : [  242.030906] Modules linked in: sm4_generic sm4 vmac poly1305_generic libpoly1305 poly1305_x86_64 chacha_generic chacha_x86_64 libchacha chacha20poly1305 sm3_generic sm3 netconsole btrfs blake2b_generic xor raid6_pq zstd_compress libcrc32c intel_rapl_msr intel_rapl_common x86_pkg_temp_thermal sd_mod intel_powerclamp t10_pi coretemp crc64_rocksoft_generic crc64_rocksoft crc64 kvm_intel sg ipmi_devintf ipmi_msghandler i915 kvm irqbypass crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel sha512_ssse3 rapl drm_buddy mxm_wmi intel_gtt intel_cstate ahci drm_display_helper firewire_ohci libahci ttm i2c_i801 firewire_core intel_uncore drm_kms_helper crc_itu_t libata lpc_ich video i2c_smbus wmi binfmt_misc drm fuse ip_tables
kern  :warn  : [  242.032427] CPU: 3 PID: 3733 Comm: af_alg05 Not tainted 6.7.0-rc1-00040-g29531d406c4f #1
kern  :warn  : [  242.033686] Hardware name:  /DZ77BH-55K, BIOS BHZ7710H.86A.0097.2012.1228.1346 12/28/2012
kern  :warn  : [  242.033949] RIP: 0010:sock_kzfree_s+0x38/0x40
kern  :warn  : [  242.034146] Code: 55 89 d5 53 48 89 fb 48 89 f7 e8 53 8b 82 fe 48 8d bb 48 01 00 00 be 04 00 00 00 e8 22 ad 97 fe f0 29 ab 48 01 00 00 5b 5d c3 <0f> 0b c3 0f 1f 44 00 00 f3 0f 1e fa 0f 1f 44 00 00 55 53 48 89 fb
kern  :warn  : [  242.034731] RSP: 0018:ffffc900011bfde8 EFLAGS: 00010246
kern  :warn  : [  242.034997] RAX: dffffc0000000000 RBX: ffff8881ad1d5000 RCX: 1ffff110377659a3
kern  :warn  : [  242.035308] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff8881ad1d5000
kern  :warn  : [  242.035614] RBP: ffff8881bbb2cd00 R08: 0000000000000001 R09: ffffed1035a3aa29
kern  :warn  : [  242.035913] R10: ffff8881ad1d514b R11: ffffffff83a0009f R12: ffff8881ad1d3048
kern  :warn  : [  242.036153] R13: ffff8881a7c089a0 R14: ffff8881ad1d3048 R15: ffff88840eb21900
kern  :warn  : [  242.036455] FS:  00007f207e42c740(0000) GS:ffff888348180000(0000) knlGS:0000000000000000
kern  :warn  : [  242.036732] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
kern  :warn  : [  242.036985] CR2: 00007f89c0a5c2f0 CR3: 0000000403ede005 CR4: 00000000001706f0
kern  :warn  : [  242.037288] Call Trace:
kern  :warn  : [  242.037496]  <TASK>
kern  :warn  : [  242.037651]  ? __warn+0xcd/0x260
kern  :warn  : [  242.037828]  ? sock_kzfree_s+0x38/0x40
kern  :warn  : [  242.038013]  ? report_bug+0x267/0x2d0
kern  :warn  : [  242.038199]  ? handle_bug+0x3c/0x70
kern  :warn  : [  242.038461]  ? exc_invalid_op+0x17/0x40
kern  :warn  : [  242.038644]  ? asm_exc_invalid_op+0x1a/0x20
kern  :warn  : [  242.038854]  ? entry_SYSCALL_64_after_hwframe+0x63/0x6b
kern  :warn  : [  242.039131]  ? sock_kzfree_s+0x38/0x40
kern  :warn  : [  242.039391]  skcipher_sock_destruct+0x1af/0x280
kern  :warn  : [  242.039657]  __sk_destruct+0x46/0x4e0
kern  :warn  : [  242.039862]  af_alg_release+0x90/0xc0
kern  :warn  : [  242.040074]  __sock_release+0xa0/0x250
kern  :warn  : [  242.040435]  sock_close+0x15/0x20
kern  :warn  : [  242.040650]  __fput+0x213/0xad0
kern  :warn  : [  242.040846]  __x64_sys_close+0x7d/0xd0
kern  :warn  : [  242.041044]  do_syscall_64+0x3f/0xe0
kern  :warn  : [  242.041260]  entry_SYSCALL_64_after_hwframe+0x63/0x6b
kern  :warn  : [  242.041496] RIP: 0033:0x7f207e527780
kern  :warn  : [  242.042582] Code: 0d 00 00 00 eb b2 e8 ef f6 01 00 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 80 3d 61 1e 0e 00 00 74 17 b8 03 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 48 c3 0f 1f 80 00 00 00 00 48 83 ec 18 89 7c
kern  :warn  : [  242.043051] RSP: 002b:00007ffef7aefff8 EFLAGS: 00000202 ORIG_RAX: 0000000000000003
kern  :warn  : [  242.043430] RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 00007f207e527780
kern  :warn  : [  242.043766] RDX: 000055dda9c55b00 RSI: 00007ffef7aefad0 RDI: 0000000000000005
kern  :warn  : [  242.044067] RBP: 00007ffef7af2ff0 R08: 0000000000000000 R09: 00007ffef7aeff20
kern  :warn  : [  242.044415] R10: 00007ffef7aefae6 R11: 0000000000000202 R12: 00007f207e42c6c0
kern  :warn  : [  242.044763] R13: 00007ffef7af0000 R14: 000055dda9c6b01e R15: 0000000000000000
kern  :warn  : [  242.045069]  </TASK>
kern  :warn  : [  242.045310] ---[ end trace 0000000000000000 ]---
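
The warning comes out of sock_kzfree_s(), reached from
skcipher_sock_destruct(), which the patch above changed to free
ctx->state unconditionally.  One plausible trigger, assuming no state
was ever exported on this socket, is ctx->state still being NULL at
teardown; a minimal guard under that assumption (untested) would be:

	/* Hypothetical guard in skcipher_sock_destruct(), only valid if
	 * a NULL ctx->state is indeed what trips the warning in
	 * sock_kzfree_s(). */
	if (ctx->state)
		sock_kzfree_s(sk, ctx->state,
			      crypto_skcipher_statesize(tfm));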



The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20231210/202312101716.7cbf38c4-oliver.sang@intel.com



-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


^ permalink raw reply	[flat|nested] 50+ messages in thread

end of thread, other threads:[~2023-12-10 13:53 UTC | newest]

Thread overview: 50+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-09-14  8:28 [PATCH 0/8] crypto: Add lskcipher API type Herbert Xu
2023-09-14  8:28 ` [PATCH 1/8] crypto: aead - Add crypto_has_aead Herbert Xu
2023-09-14  8:28 ` [PATCH 2/8] ipsec: Stop using crypto_has_alg Herbert Xu
2023-09-14  8:28 ` [PATCH 3/8] crypto: hash - Hide CRYPTO_ALG_TYPE_AHASH_MASK Herbert Xu
2023-09-14  8:28 ` [PATCH 4/8] crypto: skcipher - Add lskcipher Herbert Xu
2023-09-20  6:25   ` Eric Biggers
2023-09-21  4:32     ` Herbert Xu
2023-09-22  3:10       ` Eric Biggers
2023-11-17  5:19         ` Herbert Xu
2023-11-17  5:42           ` Eric Biggers
2023-11-17  9:07             ` Herbert Xu
2023-11-24 10:27               ` Herbert Xu
2023-11-27 22:28                 ` Eric Biggers
2023-11-29  6:24                   ` [PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now) Herbert Xu
2023-11-29  6:29                     ` [PATCH 1/4] crypto: skcipher - Add internal state support Herbert Xu
2023-11-29  6:29                     ` [PATCH 2/4] crypto: skcipher - Make use of internal state Herbert Xu
2023-11-29  6:29                     ` [PATCH 3/4] crypto: arc4 - Add " Herbert Xu
2023-11-29  6:29                     ` [PATCH 4/4] crypto: algif_skcipher - Fix stream cipher chaining Herbert Xu
2023-11-29 21:04                     ` [PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now) Eric Biggers
2023-11-30  2:17                       ` Herbert Xu
2023-11-30  9:55                     ` [v2 PATCH " Herbert Xu
2023-11-30  9:56                       ` [v2 PATCH 1/4] crypto: skcipher - Add internal state support Herbert Xu
2023-11-30  9:56                       ` [v2 PATCH 2/4] crypto: skcipher - Make use of internal state Herbert Xu
2023-11-30  9:56                       ` [v2 PATCH 3/4] crypto: arc4 - Add " Herbert Xu
2023-11-30  9:56                       ` [v2 PATCH 4/4] crypto: algif_skcipher - Fix stream cipher chaining Herbert Xu
2023-12-02  3:49                       ` [v3 PATCH 0/4] crypto: Fix chaining support for stream ciphers (arc4 only for now) Herbert Xu
2023-12-02  3:50                         ` [v3 PATCH 1/4] crypto: skcipher - Add internal state support Herbert Xu
2023-12-02  3:50                         ` [v3 PATCH 2/4] crypto: skcipher - Make use of internal state Herbert Xu
2023-12-02  3:50                         ` [v3 PATCH 3/4] crypto: arc4 - Add " Herbert Xu
2023-12-02  3:50                         ` [v3 PATCH 4/4] crypto: algif_skcipher - Fix stream cipher chaining Herbert Xu
2023-12-10 13:53                           ` kernel test robot
2023-12-05  8:41         ` [PATCH 4/8] crypto: skcipher - Add lskcipher Herbert Xu
2023-12-05 20:17           ` Eric Biggers
2023-12-06  1:44             ` Herbert Xu
2023-09-14  8:28 ` [PATCH 5/8] crypto: lskcipher - Add compatibility wrapper around ECB Herbert Xu
2023-09-14  8:28 ` [PATCH 6/8] crypto: testmgr - Add support for lskcipher algorithms Herbert Xu
2023-09-14  8:28 ` [PATCH 7/8] crypto: ecb - Convert from skcipher to lskcipher Herbert Xu
2023-09-14  8:28 ` [PATCH 8/8] crypto: cbc " Herbert Xu
2023-10-02 20:25   ` Nathan Chancellor
2023-10-03  3:31     ` [PATCH] crypto: skcipher - Add dependency on ecb Herbert Xu
2023-10-03 15:25       ` Nathan Chancellor
2023-09-14  8:51 ` [PATCH 0/8] crypto: Add lskcipher API type Ard Biesheuvel
2023-09-14  8:56   ` Herbert Xu
2023-09-14  9:18     ` Ard Biesheuvel
2023-09-14  9:29       ` Herbert Xu
2023-09-14  9:31         ` Ard Biesheuvel
2023-09-14  9:34           ` Herbert Xu
2023-09-17 16:24             ` Ard Biesheuvel
2023-09-19  4:03               ` Herbert Xu
2023-09-14  9:32       ` Herbert Xu

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).