linux-crypto.vger.kernel.org archive mirror
* [REPOST,RFC] of small geode cleanup + fallback
@ 2007-10-19 10:03 Sebastian Siewior
  2007-10-19 10:03 ` [RFC 1/5] [crypto] geode: use consistent IV copy Sebastian Siewior
                   ` (4 more replies)
  0 siblings, 5 replies; 16+ messages in thread
From: Sebastian Siewior @ 2007-10-19 10:03 UTC (permalink / raw)
  To: Herbert Xu; +Cc: linux-crypto

This is a repost of an earlier patch series. I built it on top of cryptodev.

Sebastian

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [RFC 1/5] [crypto] geode: use consistent IV copy
  2007-10-19 10:03 [REPOST,RFC] of small geode cleanup + fallback Sebastian Siewior
@ 2007-10-19 10:03 ` Sebastian Siewior
  2007-10-21  8:13   ` Herbert Xu
  2007-10-19 10:03 ` [RFC 2/5] [crypto] geode: relax in busy loop and care about return value Sebastian Siewior
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 16+ messages in thread
From: Sebastian Siewior @ 2007-10-19 10:03 UTC (permalink / raw)
  To: Herbert Xu; +Cc: linux-crypto, Sebastian Siewior

From: Sebastian Siewior <sebastian@breakpoint.cc>

It is enough if the IV is copied before and after the while loop.
With dm-crypt it seems not to be required to save the IV after encryption
because a new one is used for each request (dunno about other users).
It is not safe to load the IV within the while loop and not save it afterwards,
because we will end up with the wrong IV if the request consists
of more than one page.
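
For reference, the CBC encrypt path after this change has roughly the following
shape (a sketch assembled from the hunks below, with minor simplifications; the
decrypt path is analogous):

static int
geode_cbc_encrypt(struct blkcipher_desc *desc,
		  struct scatterlist *dst, struct scatterlist *src,
		  unsigned int nbytes)
{
	struct geode_aes_op *op = crypto_blkcipher_ctx(desc->tfm);
	struct blkcipher_walk walk;
	int err, ret;

	blkcipher_walk_init(&walk, dst, src, nbytes);
	err = blkcipher_walk_virt(desc, &walk);
	/* load the IV once, before walking the request */
	memcpy(op->iv, walk.iv, AES_IV_LENGTH);

	while ((nbytes = walk.nbytes)) {
		op->src = walk.src.virt.addr;
		op->dst = walk.dst.virt.addr;
		op->mode = AES_MODE_CBC;
		op->len = nbytes - (nbytes % AES_MIN_BLOCK_SIZE);
		op->dir = AES_DIR_ENCRYPT;

		/* the engine chains the IV in op->iv across iterations */
		ret = geode_aes_crypt(op);
		nbytes -= ret;
		err = blkcipher_walk_done(desc, &walk, nbytes);
	}

	/* store the IV once, after the whole request has been processed */
	memcpy(walk.iv, op->iv, AES_IV_LENGTH);
	return err;
}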

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
---
 drivers/crypto/geode-aes.c |   10 ++++------
 1 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/geode-aes.c b/drivers/crypto/geode-aes.c
index 7c6f13f..8bcd6d5 100644
--- a/drivers/crypto/geode-aes.c
+++ b/drivers/crypto/geode-aes.c
@@ -227,6 +227,7 @@ geode_cbc_decrypt(struct blkcipher_desc *desc,
 
 	blkcipher_walk_init(&walk, dst, src, nbytes);
 	err = blkcipher_walk_virt(desc, &walk);
+	memcpy(op->iv, walk.iv, AES_IV_LENGTH);
 
 	while((nbytes = walk.nbytes)) {
 		op->src = walk.src.virt.addr,
@@ -235,16 +236,13 @@ geode_cbc_decrypt(struct blkcipher_desc *desc,
 		op->len = nbytes - (nbytes % AES_MIN_BLOCK_SIZE);
 		op->dir = AES_DIR_DECRYPT;
 
-		memcpy(op->iv, walk.iv, AES_IV_LENGTH);
-
 		ret = geode_aes_crypt(op);
 
-		memcpy(walk.iv, op->iv, AES_IV_LENGTH);
 		nbytes -= ret;
-
 		err = blkcipher_walk_done(desc, &walk, nbytes);
 	}
 
+	memcpy(walk.iv, op->iv, AES_IV_LENGTH);
 	return err;
 }
 
@@ -259,6 +257,7 @@ geode_cbc_encrypt(struct blkcipher_desc *desc,
 
 	blkcipher_walk_init(&walk, dst, src, nbytes);
 	err = blkcipher_walk_virt(desc, &walk);
+	memcpy(op->iv, walk.iv, AES_IV_LENGTH);
 
 	while((nbytes = walk.nbytes)) {
 		op->src = walk.src.virt.addr,
@@ -267,13 +266,12 @@ geode_cbc_encrypt(struct blkcipher_desc *desc,
 		op->len = nbytes - (nbytes % AES_MIN_BLOCK_SIZE);
 		op->dir = AES_DIR_ENCRYPT;
 
-		memcpy(op->iv, walk.iv, AES_IV_LENGTH);
-
 		ret = geode_aes_crypt(op);
 		nbytes -= ret;
 		err = blkcipher_walk_done(desc, &walk, nbytes);
 	}
 
+	memcpy(walk.iv, op->iv, AES_IV_LENGTH);
 	return err;
 }
 
-- 
1.5.3.4

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC 2/5] [crypto] geode: relax in busy loop and care about return value
  2007-10-19 10:03 [REPOST,RFC] of small geode cleanup + fallback Sebastian Siewior
  2007-10-19 10:03 ` [RFC 1/5] [crypto] geode: use consistent IV copy Sebastian Siewior
@ 2007-10-19 10:03 ` Sebastian Siewior
  2007-10-21  8:14   ` Herbert Xu
  2007-10-19 10:03 ` [RFC 3/5] [crypto] geode: move defines into a headerfile Sebastian Siewior
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 16+ messages in thread
From: Sebastian Siewior @ 2007-10-19 10:03 UTC (permalink / raw)
  To: Herbert Xu; +Cc: linux-crypto, Sebastian Siewior

From: Sebastian Siewior <sebastian@breakpoint.cc>

The code waits in a busy loop until the hardware finishes the encryption
or decryption process. This wants a cpu_relax() :)
The busy loop finishes either when the encryption is done or when the counter
reaches zero. If the latter is true then the hardware failed. Since this
should not happen, leave with a BUG().
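
The patched wait loop then looks roughly like this (a sketch pieced together
from the hunk below; register setup is elided and the exact prototype may
differ slightly from the tree):

static unsigned int
do_crypt(void *src, void *dst, int len, u32 flags)
{
	u32 status;
	u32 counter = AES_OP_TIMEOUT;

	/* ... program source/destination/length registers as before ... */

	/* Start the operation */
	iowrite32(AES_CTRL_START | flags, _iobase + AES_CTRLA_REG);

	do {
		status = ioread32(_iobase + AES_INTR_REG);
		cpu_relax();	/* be polite while spinning on the engine */
	} while (!(status & AES_INTRA_PENDING) && --counter);

	/* Clear the event */
	iowrite32((status & 0xFF) | AES_INTRA_PENDING, _iobase + AES_INTR_REG);

	/* non-zero means the engine never signalled completion */
	return counter ? 0 : 1;
}

The caller then simply refuses to continue on a timeout:

	ret = do_crypt(op->src, op->dst, op->len, flags);
	BUG_ON(ret);	/* a hardware timeout should never happen */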

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
---
 drivers/crypto/geode-aes.c |    9 ++++++---
 1 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/geode-aes.c b/drivers/crypto/geode-aes.c
index 8bcd6d5..99ea594 100644
--- a/drivers/crypto/geode-aes.c
+++ b/drivers/crypto/geode-aes.c
@@ -88,9 +88,10 @@ do_crypt(void *src, void *dst, int len, u32 flags)
 	/* Start the operation */
 	iowrite32(AES_CTRL_START | flags, _iobase + AES_CTRLA_REG);
 
-	do
+	do {
 		status = ioread32(_iobase + AES_INTR_REG);
-	while(!(status & AES_INTRA_PENDING) && --counter);
+		cpu_relax();
+	} while(!(status & AES_INTRA_PENDING) && --counter);
 
 	/* Clear the event */
 	iowrite32((status & 0xFF) | AES_INTRA_PENDING, _iobase + AES_INTR_REG);
@@ -102,6 +103,7 @@ geode_aes_crypt(struct geode_aes_op *op)
 {
 	u32 flags = 0;
 	unsigned long iflags;
+	int ret;
 
 	if (op->len == 0)
 		return 0;
@@ -131,7 +133,8 @@ geode_aes_crypt(struct geode_aes_op *op)
 		_writefield(AES_WRITEKEY0_REG, op->key);
 	}
 
-	do_crypt(op->src, op->dst, op->len, flags);
+	ret = do_crypt(op->src, op->dst, op->len, flags);
+	BUG_ON(ret);
 
 	if (op->mode == AES_MODE_CBC)
 		_readfield(AES_WRITEIV0_REG, op->iv);
-- 
1.5.3.4

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC 3/5] [crypto] geode: move defines into a headerfile
  2007-10-19 10:03 [REPOST,RFC] of small geode cleanup + fallback Sebastian Siewior
  2007-10-19 10:03 ` [RFC 1/5] [crypto] geode: use consistent IV copy Sebastian Siewior
  2007-10-19 10:03 ` [RFC 2/5] [crypto] geode: relax in busy loop and care about return value Sebastian Siewior
@ 2007-10-19 10:03 ` Sebastian Siewior
  2007-10-21  8:22   ` Herbert Xu
  2007-10-19 10:03 ` [RFC 4/5] [crypto] geode: add fallback for unsupported modes Sebastian Siewior
  2007-10-19 10:03 ` [RFC 5/5] [crypto] geode: use proper defines Sebastian Siewior
  4 siblings, 1 reply; 16+ messages in thread
From: Sebastian Siewior @ 2007-10-19 10:03 UTC (permalink / raw)
  To: Herbert Xu; +Cc: linux-crypto, Sebastian Siewior

From: Sebastian Siewior <sebastian@breakpoint.cc>

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
---
 drivers/crypto/geode-aes.c |   32 --------------------------------
 drivers/crypto/geode-aes.h |   36 ++++++++++++++++++++++++++++++++++--
 2 files changed, 34 insertions(+), 34 deletions(-)

diff --git a/drivers/crypto/geode-aes.c b/drivers/crypto/geode-aes.c
index 99ea594..da6164a 100644
--- a/drivers/crypto/geode-aes.c
+++ b/drivers/crypto/geode-aes.c
@@ -20,38 +20,6 @@
 
 #include "geode-aes.h"
 
-/* Register definitions */
-
-#define AES_CTRLA_REG  0x0000
-
-#define AES_CTRL_START     0x01
-#define AES_CTRL_DECRYPT   0x00
-#define AES_CTRL_ENCRYPT   0x02
-#define AES_CTRL_WRKEY     0x04
-#define AES_CTRL_DCA       0x08
-#define AES_CTRL_SCA       0x10
-#define AES_CTRL_CBC       0x20
-
-#define AES_INTR_REG  0x0008
-
-#define AES_INTRA_PENDING (1 << 16)
-#define AES_INTRB_PENDING (1 << 17)
-
-#define AES_INTR_PENDING  (AES_INTRA_PENDING | AES_INTRB_PENDING)
-#define AES_INTR_MASK     0x07
-
-#define AES_SOURCEA_REG   0x0010
-#define AES_DSTA_REG      0x0014
-#define AES_LENA_REG      0x0018
-#define AES_WRITEKEY0_REG 0x0030
-#define AES_WRITEIV0_REG  0x0040
-
-/*  A very large counter that is used to gracefully bail out of an
- *  operation in case of trouble
- */
-
-#define AES_OP_TIMEOUT    0x50000
-
 /* Static structures */
 
 static void __iomem * _iobase;
diff --git a/drivers/crypto/geode-aes.h b/drivers/crypto/geode-aes.h
index f479686..2f1d559 100644
--- a/drivers/crypto/geode-aes.h
+++ b/drivers/crypto/geode-aes.h
@@ -9,9 +9,9 @@
 #ifndef _GEODE_AES_H_
 #define _GEODE_AES_H_
 
-#define AES_KEY_LENGTH 16
+/* driver logic flags */
 #define AES_IV_LENGTH  16
-
+#define AES_KEY_LENGTH 16
 #define AES_MIN_BLOCK_SIZE 16
 
 #define AES_MODE_ECB 0
@@ -22,6 +22,38 @@
 
 #define AES_FLAGS_HIDDENKEY (1 << 0)
 
+/* Register definitions */
+
+#define AES_CTRLA_REG  0x0000
+
+#define AES_CTRL_START     0x01
+#define AES_CTRL_DECRYPT   0x00
+#define AES_CTRL_ENCRYPT   0x02
+#define AES_CTRL_WRKEY     0x04
+#define AES_CTRL_DCA       0x08
+#define AES_CTRL_SCA       0x10
+#define AES_CTRL_CBC       0x20
+
+#define AES_INTR_REG  0x0008
+
+#define AES_INTRA_PENDING (1 << 16)
+#define AES_INTRB_PENDING (1 << 17)
+
+#define AES_INTR_PENDING  (AES_INTRA_PENDING | AES_INTRB_PENDING)
+#define AES_INTR_MASK     0x07
+
+#define AES_SOURCEA_REG   0x0010
+#define AES_DSTA_REG      0x0014
+#define AES_LENA_REG      0x0018
+#define AES_WRITEKEY0_REG 0x0030
+#define AES_WRITEIV0_REG  0x0040
+
+/*  A very large counter that is used to gracefully bail out of an
+ *  operation in case of trouble
+ */
+
+#define AES_OP_TIMEOUT    0x50000
+
 struct geode_aes_op {
 
 	void *src;
-- 
1.5.3.4

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC 4/5] [crypto] geode: add fallback for unsupported modes.
  2007-10-19 10:03 [REPOST,RFC] of small geode cleanup + fallback Sebastian Siewior
                   ` (2 preceding siblings ...)
  2007-10-19 10:03 ` [RFC 3/5] [crypto] geode: move defines into a headerfile Sebastian Siewior
@ 2007-10-19 10:03 ` Sebastian Siewior
  2007-10-21  8:31   ` Herbert Xu
  2007-10-19 10:03 ` [RFC 5/5] [crypto] geode: use proper defines Sebastian Siewior
  4 siblings, 1 reply; 16+ messages in thread
From: Sebastian Siewior @ 2007-10-19 10:03 UTC (permalink / raw)
  To: Herbert Xu; +Cc: linux-crypto, Sebastian Siewior

From: Sebastian Siewior <sebastian@breakpoint.cc>

The Geode AES crypto engine supports only a 128-bit key. This
patch adds a fallback for other key sizes, which are required by the
AES standard.
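
In rough terms the setkey path becomes the following (a sketch of the logic in
the diff below, blkcipher case only; flag propagation to and from the fallback
tfm is omitted here):

static int
geode_setkey(struct crypto_tfm *tfm, const u8 *key, unsigned int len)
{
	struct geode_aes_op *op = crypto_tfm_ctx(tfm);

	op->keylen = len;

	if (len == AES_KEY_LENGTH) {
		/* 128-bit key: handled directly by the hardware */
		memcpy(op->key, key, len);
		return 0;
	}

	if (len != 24 && len != 32) {
		/* not a valid AES key size at all */
		tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
		return -EINVAL;
	}

	/* 192/256-bit keys are handed to the software fallback */
	return crypto_blkcipher_setkey(op->fallback.blk, key, len);
}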

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
---
 drivers/crypto/geode-aes.c |  188 +++++++++++++++++++++++++++++++++++++++-----
 drivers/crypto/geode-aes.h |    6 ++
 2 files changed, 173 insertions(+), 21 deletions(-)

diff --git a/drivers/crypto/geode-aes.c b/drivers/crypto/geode-aes.c
index da6164a..761d600 100644
--- a/drivers/crypto/geode-aes.c
+++ b/drivers/crypto/geode-aes.c
@@ -118,14 +118,82 @@ static int
 geode_setkey(struct crypto_tfm *tfm, const u8 *key, unsigned int len)
 {
 	struct geode_aes_op *op = crypto_tfm_ctx(tfm);
+	unsigned int type;
+	unsigned int ret;
 
-	if (len != AES_KEY_LENGTH) {
+	op->keylen = len;
+
+	if (len == AES_KEY_LENGTH) {
+		memcpy(op->key, key, len);
+		return 0;
+	}
+
+	if (len != 24 && len != 32) {
+		/* not supported at all */
 		tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
 		return -EINVAL;
 	}
 
-	memcpy(op->key, key, len);
-	return 0;
+	/*
+	 * The requested key size is not supported by HW, do a fallback
+	 */
+
+	type = tfm->__crt_alg->cra_flags & CRYPTO_ALG_TYPE_MASK;
+
+	/* propagate requested flags to the fallback driver */
+	op->fallback.blk->base.crt_flags &= ~CRYPTO_TFM_REQ_MASK;
+	op->fallback.blk->base.crt_flags |= (tfm->crt_flags & CRYPTO_TFM_REQ_MASK);
+
+	if (type == CRYPTO_ALG_TYPE_BLKCIPHER) {
+		ret = crypto_blkcipher_setkey(op->fallback.blk, key, len);
+
+	} else if (type == CRYPTO_ALG_TYPE_CIPHER) {
+		ret = crypto_cipher_setkey(op->fallback.cip, key, len);
+
+	} else {
+		printk(KERN_ERR "Neither a cipher nor a block cipher. %x / %x\n",
+				type, tfm->crt_flags);
+		return -EINVAL;
+	}
+
+	if (ret) {
+		tfm->crt_flags &= ~CRYPTO_TFM_RES_MASK;
+		tfm->crt_flags |= (op->fallback.blk->base.crt_flags & CRYPTO_TFM_RES_MASK);
+	}
+	return ret;
+}
+
+static int fallback_blk_dec(struct blkcipher_desc *desc,
+		struct scatterlist *dst, struct scatterlist *src,
+		unsigned int nbytes)
+{
+	unsigned int ret;
+	struct crypto_blkcipher *tfm;
+	struct geode_aes_op *op = crypto_blkcipher_ctx(desc->tfm);
+
+	tfm = desc->tfm;
+	desc->tfm = op->fallback.blk;
+
+	ret = crypto_blkcipher_decrypt(desc, dst, src, nbytes);
+
+	desc->tfm = tfm;
+	return ret;
+}
+static int fallback_blk_enc(struct blkcipher_desc *desc,
+		struct scatterlist *dst, struct scatterlist *src,
+		unsigned int nbytes)
+{
+	unsigned int ret;
+	struct crypto_blkcipher *tfm;
+	struct geode_aes_op *op = crypto_blkcipher_ctx(desc->tfm);
+
+	tfm = desc->tfm;
+	desc->tfm = op->fallback.blk;
+
+	ret = crypto_blkcipher_encrypt(desc, dst, src, nbytes);
+
+	desc->tfm = tfm;
+	return ret;
 }
 
 static void
@@ -133,8 +201,10 @@ geode_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	struct geode_aes_op *op = crypto_tfm_ctx(tfm);
 
-	if ((out == NULL) || (in == NULL))
+	if (unlikely(op->keylen != 16)) {
+		crypto_cipher_encrypt_one(op->fallback.cip, out, in);
 		return;
+	}
 
 	op->src = (void *) in;
 	op->dst = (void *) out;
@@ -152,8 +222,10 @@ geode_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	struct geode_aes_op *op = crypto_tfm_ctx(tfm);
 
-	if ((out == NULL) || (in == NULL))
+	if (unlikely(op->keylen != 16)) {
+		crypto_cipher_decrypt_one(op->fallback.cip, out, in);
 		return;
+	}
 
 	op->src = (void *) in;
 	op->dst = (void *) out;
@@ -165,21 +237,76 @@ geode_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 	geode_aes_crypt(op);
 }
 
+static int fallback_init(struct crypto_tfm *tfm)
+{
+	const char *name = tfm->__crt_alg->cra_name;
+	unsigned int type;
+	struct geode_aes_op *op = crypto_tfm_ctx(tfm);
+
+	type = tfm->__crt_alg->cra_flags & CRYPTO_ALG_TYPE_MASK;
+
+	if (type == CRYPTO_ALG_TYPE_BLKCIPHER) {
+		op->fallback.blk = crypto_alloc_blkcipher(name, 0,
+				CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK);
+
+	} else if (type == CRYPTO_ALG_TYPE_CIPHER) {
+		op->fallback.cip = crypto_alloc_cipher(name, 0,
+				CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK);
+	} else {
+		printk(KERN_ERR "%s is neither a cipher nor a block cipher: %x\n",
+				name, type);
+		return -EINVAL;
+	}
+
+	/*
+	 * cipher / blkcipher, this is the same pointer for both. One check is
+	 * enough
+	 */
+	if (IS_ERR(op->fallback.blk)) {
+		printk(KERN_ERR "Error allocating fallback algo %s\n", name);
+		return PTR_ERR(op->fallback.blk);
+	}
+
+	return 0;
+}
+
+static void fallback_exit(struct crypto_tfm *tfm)
+{
+	unsigned int type;
+	struct geode_aes_op *op = crypto_tfm_ctx(tfm);
+
+	type = tfm->__crt_alg->cra_flags & CRYPTO_ALG_TYPE_MASK;
+	if (type == CRYPTO_ALG_TYPE_BLKCIPHER) {
+		crypto_free_blkcipher(op->fallback.blk);
+		op->fallback.blk = NULL;
+		return;
+
+	} else if (type == CRYPTO_ALG_TYPE_CIPHER) {
+		crypto_free_cipher(op->fallback.cip);
+		op->fallback.cip = NULL;
+		return;
+	}
+
+	WARN_ON(1);
+}
 
 static struct crypto_alg geode_alg = {
 	.cra_name               =       "aes",
-	.cra_driver_name	=       "geode-aes-128",
+	.cra_driver_name	=       "geode-aes",
 	.cra_priority           =       300,
 	.cra_alignmask          =       15,
-	.cra_flags		=	CRYPTO_ALG_TYPE_CIPHER,
+	.cra_flags		=	CRYPTO_ALG_TYPE_CIPHER |
+					CRYPTO_ALG_NEED_FALLBACK,
+	.cra_init               =       fallback_init,
+	.cra_exit               =       fallback_exit,
 	.cra_blocksize		=	AES_MIN_BLOCK_SIZE,
 	.cra_ctxsize		=	sizeof(struct geode_aes_op),
 	.cra_module		=	THIS_MODULE,
 	.cra_list		=	LIST_HEAD_INIT(geode_alg.cra_list),
 	.cra_u			=	{
 		.cipher = {
-			.cia_min_keysize	=  AES_KEY_LENGTH,
-			.cia_max_keysize	=  AES_KEY_LENGTH,
+			.cia_min_keysize	=  AES_MIN_KEY_SIZE,
+			.cia_max_keysize	=  AES_MAX_KEY_SIZE,
 			.cia_setkey		=  geode_setkey,
 			.cia_encrypt		=  geode_encrypt,
 			.cia_decrypt		=  geode_decrypt
@@ -196,6 +323,9 @@ geode_cbc_decrypt(struct blkcipher_desc *desc,
 	struct blkcipher_walk walk;
 	int err, ret;
 
+	if (unlikely(op->keylen != 16))
+		return fallback_blk_dec(desc, dst, src, nbytes);
+
 	blkcipher_walk_init(&walk, dst, src, nbytes);
 	err = blkcipher_walk_virt(desc, &walk);
 	memcpy(op->iv, walk.iv, AES_IV_LENGTH);
@@ -226,6 +356,9 @@ geode_cbc_encrypt(struct blkcipher_desc *desc,
 	struct blkcipher_walk walk;
 	int err, ret;
 
+	if (unlikely(op->keylen != 16))
+		return fallback_blk_enc(desc, dst, src, nbytes);
+
 	blkcipher_walk_init(&walk, dst, src, nbytes);
 	err = blkcipher_walk_virt(desc, &walk);
 	memcpy(op->iv, walk.iv, AES_IV_LENGTH);
@@ -248,9 +381,12 @@ geode_cbc_encrypt(struct blkcipher_desc *desc,
 
 static struct crypto_alg geode_cbc_alg = {
 	.cra_name		=	"cbc(aes)",
-	.cra_driver_name	=	"cbc-aes-geode-128",
-	.cra_priority		=	400,
-	.cra_flags		=	CRYPTO_ALG_TYPE_BLKCIPHER,
+	.cra_driver_name	=	"cbc-aes-geode",
+	.cra_priority		=	300,
+	.cra_flags		=	CRYPTO_ALG_TYPE_BLKCIPHER |
+					CRYPTO_ALG_NEED_FALLBACK,
+	.cra_init               =       fallback_init,
+	.cra_exit               =       fallback_exit,
 	.cra_blocksize		=	AES_MIN_BLOCK_SIZE,
 	.cra_ctxsize		=	sizeof(struct geode_aes_op),
 	.cra_alignmask		=	15,
@@ -259,8 +395,8 @@ static struct crypto_alg geode_cbc_alg = {
 	.cra_list		=	LIST_HEAD_INIT(geode_cbc_alg.cra_list),
 	.cra_u			=	{
 		.blkcipher = {
-			.min_keysize		=	AES_KEY_LENGTH,
-			.max_keysize		=	AES_KEY_LENGTH,
+			.min_keysize		=	AES_MIN_KEY_SIZE,
+			.max_keysize		=	AES_MAX_KEY_SIZE,
 			.setkey			=	geode_setkey,
 			.encrypt		=	geode_cbc_encrypt,
 			.decrypt		=	geode_cbc_decrypt,
@@ -278,6 +414,9 @@ geode_ecb_decrypt(struct blkcipher_desc *desc,
 	struct blkcipher_walk walk;
 	int err, ret;
 
+	if (unlikely(op->keylen != 16))
+		return fallback_blk_dec(desc, dst, src, nbytes);
+
 	blkcipher_walk_init(&walk, dst, src, nbytes);
 	err = blkcipher_walk_virt(desc, &walk);
 
@@ -305,6 +444,9 @@ geode_ecb_encrypt(struct blkcipher_desc *desc,
 	struct blkcipher_walk walk;
 	int err, ret;
 
+	if (unlikely(op->keylen != 16))
+		return fallback_blk_enc(desc, dst, src, nbytes);
+
 	blkcipher_walk_init(&walk, dst, src, nbytes);
 	err = blkcipher_walk_virt(desc, &walk);
 
@@ -325,9 +467,12 @@ geode_ecb_encrypt(struct blkcipher_desc *desc,
 
 static struct crypto_alg geode_ecb_alg = {
 	.cra_name		=	"ecb(aes)",
-	.cra_driver_name	=	"ecb-aes-geode-128",
-	.cra_priority		=	400,
-	.cra_flags		=	CRYPTO_ALG_TYPE_BLKCIPHER,
+	.cra_driver_name	=	"ecb-aes-geode",
+	.cra_priority		=	300,
+	.cra_flags		=	CRYPTO_ALG_TYPE_BLKCIPHER |
+					CRYPTO_ALG_NEED_FALLBACK,
+	.cra_init               =       fallback_init,
+	.cra_exit               =       fallback_exit,
 	.cra_blocksize		=	AES_MIN_BLOCK_SIZE,
 	.cra_ctxsize		=	sizeof(struct geode_aes_op),
 	.cra_alignmask		=	15,
@@ -336,8 +481,8 @@ static struct crypto_alg geode_ecb_alg = {
 	.cra_list		=	LIST_HEAD_INIT(geode_ecb_alg.cra_list),
 	.cra_u			=	{
 		.blkcipher = {
-			.min_keysize		=	AES_KEY_LENGTH,
-			.max_keysize		=	AES_KEY_LENGTH,
+			.min_keysize		=	AES_MIN_KEY_SIZE,
+			.max_keysize		=	AES_MAX_KEY_SIZE,
 			.setkey			=	geode_setkey,
 			.encrypt		=	geode_ecb_encrypt,
 			.decrypt		=	geode_ecb_decrypt,
@@ -368,7 +513,7 @@ geode_aes_probe(struct pci_dev *dev, const struct pci_device_id *id)
 	if ((ret = pci_enable_device(dev)))
 		return ret;
 
-	if ((ret = pci_request_regions(dev, "geode-aes-128")))
+	if ((ret = pci_request_regions(dev, "geode-aes")))
 		goto eenable;
 
 	_iobase = pci_iomap(dev, 0, 0);
@@ -392,7 +537,8 @@ geode_aes_probe(struct pci_dev *dev, const struct pci_device_id *id)
 	if ((ret = crypto_register_alg(&geode_cbc_alg)))
 		goto eecb;
 
-	printk(KERN_NOTICE "geode-aes: GEODE AES engine enabled.\n");
+	printk(KERN_NOTICE "geode-aes: GEODE AES engine enabled. "
+			"Only 128-bit keys are supported by the Hardware.\n");
 	return 0;
 
  eecb:
diff --git a/drivers/crypto/geode-aes.h b/drivers/crypto/geode-aes.h
index 2f1d559..14cc763 100644
--- a/drivers/crypto/geode-aes.h
+++ b/drivers/crypto/geode-aes.h
@@ -66,6 +66,12 @@ struct geode_aes_op {
 
 	u8 key[AES_KEY_LENGTH];
 	u8 iv[AES_IV_LENGTH];
+
+	union {
+		struct crypto_blkcipher *blk;
+		struct crypto_cipher *cip;
+	} fallback;
+	u32 keylen;
 };
 
 #endif
-- 
1.5.3.4

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [RFC 5/5] [crypto] geode: use proper defines
  2007-10-19 10:03 [REPOST,RFC] of small geode cleanup + fallback Sebastian Siewior
                   ` (3 preceding siblings ...)
  2007-10-19 10:03 ` [RFC 4/5] [crypto] geode: add fallback for unsupported modes Sebastian Siewior
@ 2007-10-19 10:03 ` Sebastian Siewior
  4 siblings, 0 replies; 16+ messages in thread
From: Sebastian Siewior @ 2007-10-19 10:03 UTC (permalink / raw)
  To: Herbert Xu; +Cc: linux-crypto, Sebastian Siewior

From: Sebastian Siewior <sebastian@breakpoint.cc>

Use proper defines for constants even if they point to the same
value:
- there is no min/max block size; there is one block size for AES.
- the size of the IV is the algorithm's block size in the CBC case.
- there is not one key size for AES but three different ones :)

This is a nitpicker patch, no added value :)
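
Concretely, the mapping to the generic constants from <crypto/aes.h> is roughly:

/* old driver-local define     ->  generic define                            */
/* AES_KEY_LENGTH     (16)     ->  AES_KEYSIZE_128 (plus AES_KEYSIZE_192 and
 *                                 AES_KEYSIZE_256 for the fallback checks)  */
/* AES_IV_LENGTH      (16)     ->  AES_BLOCK_SIZE (the CBC IV is one block)  */
/* AES_MIN_BLOCK_SIZE (16)     ->  AES_BLOCK_SIZE                            */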

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
---
 drivers/crypto/geode-aes.c |   44 ++++++++++++++++++++++----------------------
 drivers/crypto/geode-aes.h |    9 +++------
 2 files changed, 25 insertions(+), 28 deletions(-)

diff --git a/drivers/crypto/geode-aes.c b/drivers/crypto/geode-aes.c
index 761d600..5c7ed90 100644
--- a/drivers/crypto/geode-aes.c
+++ b/drivers/crypto/geode-aes.c
@@ -123,12 +123,12 @@ geode_setkey(struct crypto_tfm *tfm, const u8 *key, unsigned int len)
 
 	op->keylen = len;
 
-	if (len == AES_KEY_LENGTH) {
+	if (len == AES_KEYSIZE_128) {
 		memcpy(op->key, key, len);
 		return 0;
 	}
 
-	if (len != 24 && len != 32) {
+	if (len != AES_KEYSIZE_192 && len != AES_KEYSIZE_256) {
 		/* not supported at all */
 		tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
 		return -EINVAL;
@@ -201,7 +201,7 @@ geode_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	struct geode_aes_op *op = crypto_tfm_ctx(tfm);
 
-	if (unlikely(op->keylen != 16)) {
+	if (unlikely(op->keylen != AES_KEYSIZE_128)) {
 		crypto_cipher_encrypt_one(op->fallback.cip, out, in);
 		return;
 	}
@@ -210,7 +210,7 @@ geode_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 	op->dst = (void *) out;
 	op->mode = AES_MODE_ECB;
 	op->flags = 0;
-	op->len = AES_MIN_BLOCK_SIZE;
+	op->len = AES_BLOCK_SIZE;
 	op->dir = AES_DIR_ENCRYPT;
 
 	geode_aes_crypt(op);
@@ -222,7 +222,7 @@ geode_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	struct geode_aes_op *op = crypto_tfm_ctx(tfm);
 
-	if (unlikely(op->keylen != 16)) {
+	if (unlikely(op->keylen != AES_KEYSIZE_128)) {
 		crypto_cipher_decrypt_one(op->fallback.cip, out, in);
 		return;
 	}
@@ -231,7 +231,7 @@ geode_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 	op->dst = (void *) out;
 	op->mode = AES_MODE_ECB;
 	op->flags = 0;
-	op->len = AES_MIN_BLOCK_SIZE;
+	op->len = AES_BLOCK_SIZE;
 	op->dir = AES_DIR_DECRYPT;
 
 	geode_aes_crypt(op);
@@ -299,7 +299,7 @@ static struct crypto_alg geode_alg = {
 					CRYPTO_ALG_NEED_FALLBACK,
 	.cra_init               =       fallback_init,
 	.cra_exit               =       fallback_exit,
-	.cra_blocksize		=	AES_MIN_BLOCK_SIZE,
+	.cra_blocksize		=	AES_BLOCK_SIZE,
 	.cra_ctxsize		=	sizeof(struct geode_aes_op),
 	.cra_module		=	THIS_MODULE,
 	.cra_list		=	LIST_HEAD_INIT(geode_alg.cra_list),
@@ -323,18 +323,18 @@ geode_cbc_decrypt(struct blkcipher_desc *desc,
 	struct blkcipher_walk walk;
 	int err, ret;
 
-	if (unlikely(op->keylen != 16))
+	if (unlikely(op->keylen != AES_KEYSIZE_128))
 		return fallback_blk_dec(desc, dst, src, nbytes);
 
 	blkcipher_walk_init(&walk, dst, src, nbytes);
 	err = blkcipher_walk_virt(desc, &walk);
-	memcpy(op->iv, walk.iv, AES_IV_LENGTH);
+	memcpy(op->iv, walk.iv, AES_BLOCK_SIZE);
 
 	while((nbytes = walk.nbytes)) {
 		op->src = walk.src.virt.addr,
 		op->dst = walk.dst.virt.addr;
 		op->mode = AES_MODE_CBC;
-		op->len = nbytes - (nbytes % AES_MIN_BLOCK_SIZE);
+		op->len = nbytes - (nbytes % AES_BLOCK_SIZE);
 		op->dir = AES_DIR_DECRYPT;
 
 		ret = geode_aes_crypt(op);
@@ -343,7 +343,7 @@ geode_cbc_decrypt(struct blkcipher_desc *desc,
 		err = blkcipher_walk_done(desc, &walk, nbytes);
 	}
 
-	memcpy(walk.iv, op->iv, AES_IV_LENGTH);
+	memcpy(walk.iv, op->iv, AES_BLOCK_SIZE);
 	return err;
 }
 
@@ -356,18 +356,18 @@ geode_cbc_encrypt(struct blkcipher_desc *desc,
 	struct blkcipher_walk walk;
 	int err, ret;
 
-	if (unlikely(op->keylen != 16))
+	if (unlikely(op->keylen != AES_KEYSIZE_128))
 		return fallback_blk_enc(desc, dst, src, nbytes);
 
 	blkcipher_walk_init(&walk, dst, src, nbytes);
 	err = blkcipher_walk_virt(desc, &walk);
-	memcpy(op->iv, walk.iv, AES_IV_LENGTH);
+	memcpy(op->iv, walk.iv, AES_BLOCK_SIZE);
 
 	while((nbytes = walk.nbytes)) {
 		op->src = walk.src.virt.addr,
 		op->dst = walk.dst.virt.addr;
 		op->mode = AES_MODE_CBC;
-		op->len = nbytes - (nbytes % AES_MIN_BLOCK_SIZE);
+		op->len = nbytes - (nbytes % AES_BLOCK_SIZE);
 		op->dir = AES_DIR_ENCRYPT;
 
 		ret = geode_aes_crypt(op);
@@ -375,7 +375,7 @@ geode_cbc_encrypt(struct blkcipher_desc *desc,
 		err = blkcipher_walk_done(desc, &walk, nbytes);
 	}
 
-	memcpy(walk.iv, op->iv, AES_IV_LENGTH);
+	memcpy(walk.iv, op->iv, AES_BLOCK_SIZE);
 	return err;
 }
 
@@ -387,7 +387,7 @@ static struct crypto_alg geode_cbc_alg = {
 					CRYPTO_ALG_NEED_FALLBACK,
 	.cra_init               =       fallback_init,
 	.cra_exit               =       fallback_exit,
-	.cra_blocksize		=	AES_MIN_BLOCK_SIZE,
+	.cra_blocksize		=	AES_BLOCK_SIZE,
 	.cra_ctxsize		=	sizeof(struct geode_aes_op),
 	.cra_alignmask		=	15,
 	.cra_type		=	&crypto_blkcipher_type,
@@ -400,7 +400,7 @@ static struct crypto_alg geode_cbc_alg = {
 			.setkey			=	geode_setkey,
 			.encrypt		=	geode_cbc_encrypt,
 			.decrypt		=	geode_cbc_decrypt,
-			.ivsize			=	AES_IV_LENGTH,
+			.ivsize			=	AES_BLOCK_SIZE,
 		}
 	}
 };
@@ -414,7 +414,7 @@ geode_ecb_decrypt(struct blkcipher_desc *desc,
 	struct blkcipher_walk walk;
 	int err, ret;
 
-	if (unlikely(op->keylen != 16))
+	if (unlikely(op->keylen != AES_KEYSIZE_128))
 		return fallback_blk_dec(desc, dst, src, nbytes);
 
 	blkcipher_walk_init(&walk, dst, src, nbytes);
@@ -424,7 +424,7 @@ geode_ecb_decrypt(struct blkcipher_desc *desc,
 		op->src = walk.src.virt.addr,
 		op->dst = walk.dst.virt.addr;
 		op->mode = AES_MODE_ECB;
-		op->len = nbytes - (nbytes % AES_MIN_BLOCK_SIZE);
+		op->len = nbytes - (nbytes % AES_BLOCK_SIZE);
 		op->dir = AES_DIR_DECRYPT;
 
 		ret = geode_aes_crypt(op);
@@ -444,7 +444,7 @@ geode_ecb_encrypt(struct blkcipher_desc *desc,
 	struct blkcipher_walk walk;
 	int err, ret;
 
-	if (unlikely(op->keylen != 16))
+	if (unlikely(op->keylen != AES_KEYSIZE_128))
 		return fallback_blk_enc(desc, dst, src, nbytes);
 
 	blkcipher_walk_init(&walk, dst, src, nbytes);
@@ -454,7 +454,7 @@ geode_ecb_encrypt(struct blkcipher_desc *desc,
 		op->src = walk.src.virt.addr,
 		op->dst = walk.dst.virt.addr;
 		op->mode = AES_MODE_ECB;
-		op->len = nbytes - (nbytes % AES_MIN_BLOCK_SIZE);
+		op->len = nbytes - (nbytes % AES_BLOCK_SIZE);
 		op->dir = AES_DIR_ENCRYPT;
 
 		ret = geode_aes_crypt(op);
@@ -473,7 +473,7 @@ static struct crypto_alg geode_ecb_alg = {
 					CRYPTO_ALG_NEED_FALLBACK,
 	.cra_init               =       fallback_init,
 	.cra_exit               =       fallback_exit,
-	.cra_blocksize		=	AES_MIN_BLOCK_SIZE,
+	.cra_blocksize		=	AES_BLOCK_SIZE,
 	.cra_ctxsize		=	sizeof(struct geode_aes_op),
 	.cra_alignmask		=	15,
 	.cra_type		=	&crypto_blkcipher_type,
diff --git a/drivers/crypto/geode-aes.h b/drivers/crypto/geode-aes.h
index 14cc763..a395df8 100644
--- a/drivers/crypto/geode-aes.h
+++ b/drivers/crypto/geode-aes.h
@@ -8,12 +8,9 @@
 
 #ifndef _GEODE_AES_H_
 #define _GEODE_AES_H_
+#include <crypto/aes.h>
 
 /* driver logic flags */
-#define AES_IV_LENGTH  16
-#define AES_KEY_LENGTH 16
-#define AES_MIN_BLOCK_SIZE 16
-
 #define AES_MODE_ECB 0
 #define AES_MODE_CBC 1
 
@@ -64,8 +61,8 @@ struct geode_aes_op {
 	u32 flags;
 	int len;
 
-	u8 key[AES_KEY_LENGTH];
-	u8 iv[AES_IV_LENGTH];
+	u8 key[AES_KEYSIZE_128];
+	u8 iv[AES_BLOCK_SIZE];
 
 	union {
 		struct crypto_blkcipher *blk;
-- 
1.5.3.4

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [RFC 1/5] [crypto] geode: use consistent IV copy
  2007-10-19 10:03 ` [RFC 1/5] [crypto] geode: use consistent IV copy Sebastian Siewior
@ 2007-10-21  8:13   ` Herbert Xu
  0 siblings, 0 replies; 16+ messages in thread
From: Herbert Xu @ 2007-10-21  8:13 UTC (permalink / raw)
  To: Sebastian Siewior; +Cc: linux-crypto, Sebastian Siewior

On Fri, Oct 19, 2007 at 12:03:49PM +0200, Sebastian Siewior wrote:
> From: Sebastian Siewior <sebastian@breakpoint.cc>
> 
> It is enough if the IV is copied before and after the while loop.
> With dm-crypt it seems not to be required to save the IV after encryption
> because a new one is used for each request (dunno about other users).
> It is not safe to load the IV within the while loop and not save it afterwards,
> because we will end up with the wrong IV if the request consists
> of more than one page.
> 
> Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>

Nice cleanup.  Patch applied.

Thanks,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC 2/5] [crypto] geode: relax in busy loop and care about return value
  2007-10-19 10:03 ` [RFC 2/5] [crypto] geode: relax in busy loop and care about return value Sebastian Siewior
@ 2007-10-21  8:14   ` Herbert Xu
  2007-10-22 17:30     ` Sebastian Siewior
  0 siblings, 1 reply; 16+ messages in thread
From: Herbert Xu @ 2007-10-21  8:14 UTC (permalink / raw)
  To: Sebastian Siewior; +Cc: linux-crypto, Sebastian Siewior

On Fri, Oct 19, 2007 at 12:03:50PM +0200, Sebastian Siewior wrote:
> From: Sebastian Siewior <sebastian@breakpoint.cc>
> 
> The code waits in a busy loop until the hardware finishes the encryption
> or decryption process. This wants a cpu_relax() :)
> The busy loop finishes either when the encryption is done or when the counter
> reaches zero. If the latter is true then the hardware failed. Since this
> should not happen, leave with a BUG().
> 
> Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>

Applied.

I'd like to see the error propagated up though.  We'd need
to change the simple cipher interface to allow errors to be
returned.

Cheers,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC 3/5] [crypto] geode: move defines into a headerfile
  2007-10-19 10:03 ` [RFC 3/5] [crypto] geode: move defines into a headerfile Sebastian Siewior
@ 2007-10-21  8:22   ` Herbert Xu
  0 siblings, 0 replies; 16+ messages in thread
From: Herbert Xu @ 2007-10-21  8:22 UTC (permalink / raw)
  To: Sebastian Siewior; +Cc: linux-crypto, Sebastian Siewior

On Fri, Oct 19, 2007 at 12:03:51PM +0200, Sebastian Siewior wrote:
> From: Sebastian Siewior <sebastian@breakpoint.cc>
> 
> Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>

Applied.  Thanks,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC 4/5] [crypto] geode: add fallback for unsupported modes.
  2007-10-19 10:03 ` [RFC 4/5] [crypto] geode: add fallback for unsupported modes Sebastian Siewior
@ 2007-10-21  8:31   ` Herbert Xu
  2007-10-22 17:30     ` Sebastian Siewior
  2007-11-04 20:59     ` [PATCH] [crypto] geode: add fallback for unsupported modes, take 2 Sebastian Siewior
  0 siblings, 2 replies; 16+ messages in thread
From: Herbert Xu @ 2007-10-21  8:31 UTC (permalink / raw)
  To: Sebastian Siewior; +Cc: linux-crypto, Sebastian Siewior

On Fri, Oct 19, 2007 at 12:03:52PM +0200, Sebastian Siewior wrote:
> From: Sebastian Siewior <sebastian@breakpoint.cc>
> 
> The Geode AES crypto engine supports only a 128-bit key. This
> patch adds a fallback for other key sizes, which are required by the
> AES standard.
> 
> Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>

Thanks for working on this!

> +	if (type == CRYPTO_ALG_TYPE_BLKCIPHER) {
> +		op->fallback.blk = crypto_alloc_blkcipher(name, 0,
> +				CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK);
> +
> +	} else if (type == CRYPTO_ALG_TYPE_CIPHER) {
> +		op->fallback.cip = crypto_alloc_cipher(name, 0,
> +				CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK);
> +	} else {
> +		printk(KERN_ERR "%s is neither a cipher nor a block cipher: %x\n",
> +				name, type);
> +		return -EINVAL;
> +	}

This looks really icky though.  Couldn't we just have different
functions for each case (blkcipher vs. cipher)?

>  static struct crypto_alg geode_ecb_alg = {
>  	.cra_name		=	"ecb(aes)",
> -	.cra_driver_name	=	"ecb-aes-geode-128",
> -	.cra_priority		=	400,
> -	.cra_flags		=	CRYPTO_ALG_TYPE_BLKCIPHER,
> +	.cra_driver_name	=	"ecb-aes-geode",
> +	.cra_priority		=	300,

What's with the priority decrease?

Cheers,
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC 4/5] [crypto] geode: add fallback for unsupported modes.
  2007-10-21  8:31   ` Herbert Xu
@ 2007-10-22 17:30     ` Sebastian Siewior
  2007-11-04 20:59     ` [PATCH] [crypto] geode: add fallback for unsupported modes, take 2 Sebastian Siewior
  1 sibling, 0 replies; 16+ messages in thread
From: Sebastian Siewior @ 2007-10-22 17:30 UTC (permalink / raw)
  To: Herbert Xu; +Cc: linux-crypto

* Herbert Xu | 2007-10-21 16:31:58 [+0800]:

>> +	if (type == CRYPTO_ALG_TYPE_BLKCIPHER) {
>> +		op->fallback.blk = crypto_alloc_blkcipher(name, 0,
>> +				CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK);
>> +
>> +	} else if (type == CRYPTO_ALG_TYPE_CIPHER) {
>> +		op->fallback.cip = crypto_alloc_cipher(name, 0,
>> +				CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK);
>> +	} else {
>> +		printk(KERN_ERR "%s is neither a cipher nor a block cipher: %x\n",
>> +				name, type);
>> +		return -EINVAL;
>> +	}
>
>This looks really icky though.  Couldn't we just have different
>functions for each case (blkcipher vs. cipher)?
Will take a look.

>
>>  static struct crypto_alg geode_ecb_alg = {
>>  	.cra_name		=	"ecb(aes)",
>> -	.cra_driver_name	=	"ecb-aes-geode-128",
>> -	.cra_priority		=	400,
>> -	.cra_flags		=	CRYPTO_ALG_TYPE_BLKCIPHER,
>> +	.cra_driver_name	=	"ecb-aes-geode",
>> +	.cra_priority		=	300,
>
>What's with the priority decrease?
I don't know. I probably thought let's make it 300 like the other one :).
Will revert.

>Cheers,
Sebastian

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [RFC 2/5] [crypto] geode: relax in busy loop and care about return value
  2007-10-21  8:14   ` Herbert Xu
@ 2007-10-22 17:30     ` Sebastian Siewior
  0 siblings, 0 replies; 16+ messages in thread
From: Sebastian Siewior @ 2007-10-22 17:30 UTC (permalink / raw)
  To: Herbert Xu; +Cc: linux-crypto

* Herbert Xu | 2007-10-21 16:14:52 [+0800]:

>I'd like to see the error propagated up though.  We'd need
>to change the simple cipher interface to allow errors to be
>returned.
Okay.

>Cheers,
Sebastian

^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH] [crypto] geode: add fallback for unsupported modes, take 2
  2007-10-21  8:31   ` Herbert Xu
  2007-10-22 17:30     ` Sebastian Siewior
@ 2007-11-04 20:59     ` Sebastian Siewior
  2007-11-06 19:24       ` Jordan Crouse
  2007-11-10 11:31       ` [PATCH] [crypto] " Herbert Xu
  1 sibling, 2 replies; 16+ messages in thread
From: Sebastian Siewior @ 2007-11-04 20:59 UTC (permalink / raw)
  To: Herbert Xu; +Cc: linux-crypto, Jordan Crouse

The Geode AES crypto engine supports only a 128-bit key. This
patch adds a fallback for other key sizes, which are required by the
AES standard.

Cc: Jordan Crouse <jordan.crouse@amd.com>
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
---

Herbert, as you suggested, I split the cipher & blk code.
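
As a side note, the fallback allocation passes CRYPTO_ALG_ASYNC |
CRYPTO_ALG_NEED_FALLBACK as the mask (with a zero type), which should keep the
lookup away from async implementations and from drivers that themselves need a
fallback, so it cannot end up picking geode-aes again. Trimmed sketch of the
blkcipher variant (the full version is in the diff below):

static int fallback_init_blk(struct crypto_tfm *tfm)
{
	const char *name = tfm->__crt_alg->cra_name;	/* "cbc(aes)" or "ecb(aes)" */
	struct geode_aes_op *op = crypto_tfm_ctx(tfm);

	/* mask bits must be clear in the chosen algorithm: no async
	 * implementations, nothing that itself sets CRYPTO_ALG_NEED_FALLBACK */
	op->fallback.blk = crypto_alloc_blkcipher(name, 0,
			CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK);

	if (IS_ERR(op->fallback.blk))
		return PTR_ERR(op->fallback.blk);

	return 0;
}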

diff --git a/drivers/crypto/geode-aes.c b/drivers/crypto/geode-aes.c
index da6164a..f2d4fba 100644
--- a/drivers/crypto/geode-aes.c
+++ b/drivers/crypto/geode-aes.c
@@ -114,18 +114,103 @@ geode_aes_crypt(struct geode_aes_op *op)
 
 /* CRYPTO-API Functions */
 
-static int
-geode_setkey(struct crypto_tfm *tfm, const u8 *key, unsigned int len)
+static int geode_setkey_cip(struct crypto_tfm *tfm, const u8 *key,
+		unsigned int len)
 {
 	struct geode_aes_op *op = crypto_tfm_ctx(tfm);
+	unsigned int ret;
+
+	op->keylen = len;
+
+	if (len == AES_KEYSIZE_128) {
+		memcpy(op->key, key, len);
+		return 0;
+	}
 
-	if (len != AES_KEY_LENGTH) {
+	if (len != AES_KEYSIZE_192 && len != AES_KEYSIZE_256) {
+		/* not supported at all */
 		tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
 		return -EINVAL;
 	}
 
-	memcpy(op->key, key, len);
-	return 0;
+	/*
+	 * The requested key size is not supported by HW, do a fallback
+	 */
+	op->fallback.blk->base.crt_flags &= ~CRYPTO_TFM_REQ_MASK;
+	op->fallback.blk->base.crt_flags |= (tfm->crt_flags & CRYPTO_TFM_REQ_MASK);
+
+	ret = crypto_cipher_setkey(op->fallback.cip, key, len);
+	if (ret) {
+		tfm->crt_flags &= ~CRYPTO_TFM_RES_MASK;
+		tfm->crt_flags |= (op->fallback.blk->base.crt_flags & CRYPTO_TFM_RES_MASK);
+	}
+	return ret;
+}
+
+static int geode_setkey_blk(struct crypto_tfm *tfm, const u8 *key,
+		unsigned int len)
+{
+	struct geode_aes_op *op = crypto_tfm_ctx(tfm);
+	unsigned int ret;
+
+	op->keylen = len;
+
+	if (len == AES_KEYSIZE_128) {
+		memcpy(op->key, key, len);
+		return 0;
+	}
+
+	if (len != AES_KEYSIZE_192 && len != AES_KEYSIZE_256) {
+		/* not supported at all */
+		tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+		return -EINVAL;
+	}
+
+	/*
+	 * The requested key size is not supported by HW, do a fallback
+	 */
+	op->fallback.blk->base.crt_flags &= ~CRYPTO_TFM_REQ_MASK;
+	op->fallback.blk->base.crt_flags |= (tfm->crt_flags & CRYPTO_TFM_REQ_MASK);
+
+	ret = crypto_blkcipher_setkey(op->fallback.blk, key, len);
+	if (ret) {
+		tfm->crt_flags &= ~CRYPTO_TFM_RES_MASK;
+		tfm->crt_flags |= (op->fallback.blk->base.crt_flags & CRYPTO_TFM_RES_MASK);
+	}
+	return ret;
+}
+
+static int fallback_blk_dec(struct blkcipher_desc *desc,
+		struct scatterlist *dst, struct scatterlist *src,
+		unsigned int nbytes)
+{
+	unsigned int ret;
+	struct crypto_blkcipher *tfm;
+	struct geode_aes_op *op = crypto_blkcipher_ctx(desc->tfm);
+
+	tfm = desc->tfm;
+	desc->tfm = op->fallback.blk;
+
+	ret = crypto_blkcipher_decrypt(desc, dst, src, nbytes);
+
+	desc->tfm = tfm;
+	return ret;
+}
+static int fallback_blk_enc(struct blkcipher_desc *desc,
+		struct scatterlist *dst, struct scatterlist *src,
+		unsigned int nbytes)
+{
+	unsigned int ret;
+	struct crypto_blkcipher *tfm;
+	struct geode_aes_op *op = crypto_blkcipher_ctx(desc->tfm);
+
+	tfm = desc->tfm;
+	desc->tfm = op->fallback.blk;
+
+	ret = crypto_blkcipher_encrypt(desc, dst, src, nbytes);
+
+	desc->tfm = tfm;
+	return ret;
 }
 
 static void
@@ -133,8 +218,10 @@ geode_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	struct geode_aes_op *op = crypto_tfm_ctx(tfm);
 
-	if ((out == NULL) || (in == NULL))
+	if (unlikely(op->keylen != AES_KEYSIZE_128)) {
+		crypto_cipher_encrypt_one(op->fallback.cip, out, in);
 		return;
+	}
 
 	op->src = (void *) in;
 	op->dst = (void *) out;
@@ -152,8 +239,10 @@ geode_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	struct geode_aes_op *op = crypto_tfm_ctx(tfm);
 
-	if ((out == NULL) || (in == NULL))
+	if (unlikely(op->keylen != AES_KEYSIZE_128)) {
+		crypto_cipher_decrypt_one(op->fallback.cip, out, in);
 		return;
+	}
 
 	op->src = (void *) in;
 	op->dst = (void *) out;
@@ -165,24 +254,50 @@ geode_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 	geode_aes_crypt(op);
 }
 
+static int fallback_init_cip(struct crypto_tfm *tfm)
+{
+	const char *name = tfm->__crt_alg->cra_name;
+	struct geode_aes_op *op = crypto_tfm_ctx(tfm);
+
+	op->fallback.cip = crypto_alloc_cipher(name, 0,
+				CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK);
+
+	if (IS_ERR(op->fallback.cip)) {
+		printk(KERN_ERR "Error allocating fallback algo %s\n", name);
+		return PTR_ERR(op->fallback.blk);
+	}
+
+	return 0;
+}
+
+static void fallback_exit_cip(struct crypto_tfm *tfm)
+{
+	struct geode_aes_op *op = crypto_tfm_ctx(tfm);
+
+	crypto_free_cipher(op->fallback.cip);
+	op->fallback.cip = NULL;
+}
 
 static struct crypto_alg geode_alg = {
-	.cra_name               =       "aes",
-	.cra_driver_name	=       "geode-aes-128",
-	.cra_priority           =       300,
-	.cra_alignmask          =       15,
-	.cra_flags		=	CRYPTO_ALG_TYPE_CIPHER,
+	.cra_name			=	"aes",
+	.cra_driver_name	=	"geode-aes",
+	.cra_priority		=	300,
+	.cra_alignmask		=	15,
+	.cra_flags			=	CRYPTO_ALG_TYPE_CIPHER |
+							CRYPTO_ALG_NEED_FALLBACK,
+	.cra_init			=	fallback_init_cip,
+	.cra_exit			=	fallback_exit_cip,
 	.cra_blocksize		=	AES_MIN_BLOCK_SIZE,
 	.cra_ctxsize		=	sizeof(struct geode_aes_op),
-	.cra_module		=	THIS_MODULE,
-	.cra_list		=	LIST_HEAD_INIT(geode_alg.cra_list),
-	.cra_u			=	{
-		.cipher = {
-			.cia_min_keysize	=  AES_KEY_LENGTH,
-			.cia_max_keysize	=  AES_KEY_LENGTH,
-			.cia_setkey		=  geode_setkey,
-			.cia_encrypt		=  geode_encrypt,
-			.cia_decrypt		=  geode_decrypt
+	.cra_module			=	THIS_MODULE,
+	.cra_list			=	LIST_HEAD_INIT(geode_alg.cra_list),
+	.cra_u				=	{
+		.cipher	=	{
+			.cia_min_keysize	=	AES_MIN_KEY_SIZE,
+			.cia_max_keysize	=	AES_MAX_KEY_SIZE,
+			.cia_setkey			=	geode_setkey_cip,
+			.cia_encrypt		=	geode_encrypt,
+			.cia_decrypt		=	geode_decrypt
 		}
 	}
 };
@@ -196,6 +311,9 @@ geode_cbc_decrypt(struct blkcipher_desc *desc,
 	struct blkcipher_walk walk;
 	int err, ret;
 
+	if (unlikely(op->keylen != AES_KEYSIZE_128))
+		return fallback_blk_dec(desc, dst, src, nbytes);
+
 	blkcipher_walk_init(&walk, dst, src, nbytes);
 	err = blkcipher_walk_virt(desc, &walk);
 	memcpy(op->iv, walk.iv, AES_IV_LENGTH);
@@ -226,6 +344,9 @@ geode_cbc_encrypt(struct blkcipher_desc *desc,
 	struct blkcipher_walk walk;
 	int err, ret;
 
+	if (unlikely(op->keylen != AES_KEYSIZE_128))
+		return fallback_blk_enc(desc, dst, src, nbytes);
+
 	blkcipher_walk_init(&walk, dst, src, nbytes);
 	err = blkcipher_walk_virt(desc, &walk);
 	memcpy(op->iv, walk.iv, AES_IV_LENGTH);
@@ -246,22 +367,49 @@ geode_cbc_encrypt(struct blkcipher_desc *desc,
 	return err;
 }
 
+static int fallback_init_blk(struct crypto_tfm *tfm)
+{
+	const char *name = tfm->__crt_alg->cra_name;
+	struct geode_aes_op *op = crypto_tfm_ctx(tfm);
+
+	op->fallback.blk = crypto_alloc_blkcipher(name, 0,
+			CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK);
+
+	if (IS_ERR(op->fallback.blk)) {
+		printk(KERN_ERR "Error allocating fallback algo %s\n", name);
+		return PTR_ERR(op->fallback.blk);
+	}
+
+	return 0;
+}
+
+static void fallback_exit_blk(struct crypto_tfm *tfm)
+{
+	struct geode_aes_op *op = crypto_tfm_ctx(tfm);
+
+	crypto_free_blkcipher(op->fallback.blk);
+	op->fallback.blk = NULL;
+}
+
 static struct crypto_alg geode_cbc_alg = {
 	.cra_name		=	"cbc(aes)",
-	.cra_driver_name	=	"cbc-aes-geode-128",
+	.cra_driver_name	=	"cbc-aes-geode",
 	.cra_priority		=	400,
-	.cra_flags		=	CRYPTO_ALG_TYPE_BLKCIPHER,
+	.cra_flags			=	CRYPTO_ALG_TYPE_BLKCIPHER |
+							CRYPTO_ALG_NEED_FALLBACK,
+	.cra_init			=	fallback_init_blk,
+	.cra_exit			=	fallback_exit_blk,
 	.cra_blocksize		=	AES_MIN_BLOCK_SIZE,
 	.cra_ctxsize		=	sizeof(struct geode_aes_op),
 	.cra_alignmask		=	15,
-	.cra_type		=	&crypto_blkcipher_type,
-	.cra_module		=	THIS_MODULE,
-	.cra_list		=	LIST_HEAD_INIT(geode_cbc_alg.cra_list),
-	.cra_u			=	{
-		.blkcipher = {
-			.min_keysize		=	AES_KEY_LENGTH,
-			.max_keysize		=	AES_KEY_LENGTH,
-			.setkey			=	geode_setkey,
+	.cra_type			=	&crypto_blkcipher_type,
+	.cra_module			=	THIS_MODULE,
+	.cra_list			=	LIST_HEAD_INIT(geode_cbc_alg.cra_list),
+	.cra_u				=	{
+		.blkcipher	=	{
+			.min_keysize	=	AES_MIN_KEY_SIZE,
+			.max_keysize	=	AES_MAX_KEY_SIZE,
+			.setkey			=	geode_setkey_blk,
 			.encrypt		=	geode_cbc_encrypt,
 			.decrypt		=	geode_cbc_decrypt,
 			.ivsize			=	AES_IV_LENGTH,
@@ -278,6 +426,9 @@ geode_ecb_decrypt(struct blkcipher_desc *desc,
 	struct blkcipher_walk walk;
 	int err, ret;
 
+	if (unlikely(op->keylen != AES_KEYSIZE_128))
+		return fallback_blk_dec(desc, dst, src, nbytes);
+
 	blkcipher_walk_init(&walk, dst, src, nbytes);
 	err = blkcipher_walk_virt(desc, &walk);
 
@@ -305,6 +456,9 @@ geode_ecb_encrypt(struct blkcipher_desc *desc,
 	struct blkcipher_walk walk;
 	int err, ret;
 
+	if (unlikely(op->keylen != AES_KEYSIZE_128))
+		return fallback_blk_enc(desc, dst, src, nbytes);
+
 	blkcipher_walk_init(&walk, dst, src, nbytes);
 	err = blkcipher_walk_virt(desc, &walk);
 
@@ -324,21 +478,24 @@ geode_ecb_encrypt(struct blkcipher_desc *desc,
 }
 
 static struct crypto_alg geode_ecb_alg = {
-	.cra_name		=	"ecb(aes)",
-	.cra_driver_name	=	"ecb-aes-geode-128",
+	.cra_name			=	"ecb(aes)",
+	.cra_driver_name	=	"ecb-aes-geode",
 	.cra_priority		=	400,
-	.cra_flags		=	CRYPTO_ALG_TYPE_BLKCIPHER,
+	.cra_flags			=	CRYPTO_ALG_TYPE_BLKCIPHER |
+							CRYPTO_ALG_NEED_FALLBACK,
+	.cra_init			=	fallback_init_blk,
+	.cra_exit			=	fallback_exit_blk,
 	.cra_blocksize		=	AES_MIN_BLOCK_SIZE,
 	.cra_ctxsize		=	sizeof(struct geode_aes_op),
 	.cra_alignmask		=	15,
-	.cra_type		=	&crypto_blkcipher_type,
-	.cra_module		=	THIS_MODULE,
-	.cra_list		=	LIST_HEAD_INIT(geode_ecb_alg.cra_list),
-	.cra_u			=	{
-		.blkcipher = {
-			.min_keysize		=	AES_KEY_LENGTH,
-			.max_keysize		=	AES_KEY_LENGTH,
-			.setkey			=	geode_setkey,
+	.cra_type			=	&crypto_blkcipher_type,
+	.cra_module			=	THIS_MODULE,
+	.cra_list			=	LIST_HEAD_INIT(geode_ecb_alg.cra_list),
+	.cra_u				=	{
+		.blkcipher	=	{
+			.min_keysize	=	AES_MIN_KEY_SIZE,
+			.max_keysize	=	AES_MAX_KEY_SIZE,
+			.setkey			=	geode_setkey_blk,
 			.encrypt		=	geode_ecb_encrypt,
 			.decrypt		=	geode_ecb_decrypt,
 		}
@@ -368,7 +525,7 @@ geode_aes_probe(struct pci_dev *dev, const struct pci_device_id *id)
 	if ((ret = pci_enable_device(dev)))
 		return ret;
 
-	if ((ret = pci_request_regions(dev, "geode-aes-128")))
+	if ((ret = pci_request_regions(dev, "geode-aes")))
 		goto eenable;
 
 	_iobase = pci_iomap(dev, 0, 0);
diff --git a/drivers/crypto/geode-aes.h b/drivers/crypto/geode-aes.h
index 2f1d559..14cc763 100644
--- a/drivers/crypto/geode-aes.h
+++ b/drivers/crypto/geode-aes.h
@@ -66,6 +66,12 @@ struct geode_aes_op {
 
 	u8 key[AES_KEY_LENGTH];
 	u8 iv[AES_IV_LENGTH];
+
+	union {
+		struct crypto_blkcipher *blk;
+		struct crypto_cipher *cip;
+	} fallback;
+	u32 keylen;
 };
 
 #endif
-- 
1.5.3.4

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: geode: add fallback for unsupported modes, take 2
  2007-11-04 20:59     ` [PATCH] [crypto] geode: add fallback for unsupported modes, take 2 Sebastian Siewior
@ 2007-11-06 19:24       ` Jordan Crouse
  2007-11-10 11:31       ` [PATCH] [crypto] " Herbert Xu
  1 sibling, 0 replies; 16+ messages in thread
From: Jordan Crouse @ 2007-11-06 19:24 UTC (permalink / raw)
  To: Herbert Xu, linux-crypto

On 04/11/07 21:59 +0100, Sebastian Siewior wrote:
> The Geode AES crypto engine supports only a 128-bit key. This
> patch adds a fallback for other key sizes, which are required by the
> AES standard.
> 
> Cc: Jordan Crouse <jordan.crouse@amd.com>
> Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>

Acked-by: Jordan Crouse <jordan.crouse@amd.com>

Thanks!

-- 
Jordan Crouse
Systems Software Development Engineer 
Advanced Micro Devices, Inc.

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH] [crypto] geode: add fallback for unsupported modes, take 2
  2007-11-04 20:59     ` [PATCH] [crypto] geode: add fallback for unsupported modes, take 2 Sebastian Siewior
  2007-11-06 19:24       ` Jordan Crouse
@ 2007-11-10 11:31       ` Herbert Xu
  2007-11-10 21:28         ` Sebastian Siewior
  1 sibling, 1 reply; 16+ messages in thread
From: Herbert Xu @ 2007-11-10 11:31 UTC (permalink / raw)
  To: linux-crypto, Jordan Crouse

On Sun, Nov 04, 2007 at 09:59:23PM +0100, Sebastian Siewior wrote:
> The Geode AES crypto engine supports only a 128-bit key. This
> patch adds a fallback for other key sizes, which are required by the
> AES standard.
> 
> Cc: Jordan Crouse <jordan.crouse@amd.com>
> Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>

Patch applied.  Thanks for fixing this.
-- 
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH] [crypto] geode: add fallback for unsupported modes, take 2
  2007-11-10 11:31       ` [PATCH] [crypto] " Herbert Xu
@ 2007-11-10 21:28         ` Sebastian Siewior
  0 siblings, 0 replies; 16+ messages in thread
From: Sebastian Siewior @ 2007-11-10 21:28 UTC (permalink / raw)
  To: Herbert Xu; +Cc: linux-crypto, Jan Glauber

* Herbert Xu | 2007-11-10 19:31:30 [+0800]:

>On Sun, Nov 04, 2007 at 09:59:23PM +0100, Sebastian Siewior wrote:
>> The Geode AES crypto engine supports only a 128-bit key. This
>> patch adds a fallback for other key sizes, which are required by the
>> AES standard.
>> 
>> Cc: Jordan Crouse <jordan.crouse@amd.com>
>> Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
>
>Patch applied.  Thanks for fixing this.

Since you liked that one, I'll make one for the s390 people as well.

Sebastian

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2007-11-10 21:28 UTC | newest]

Thread overview: 16+ messages
-- links below jump to the message on this page --
2007-10-19 10:03 [REPOST,RFC] of small geode cleanup + fallback Sebastian Siewior
2007-10-19 10:03 ` [RFC 1/5] [crypto] geode: use consistent IV copy Sebastian Siewior
2007-10-21  8:13   ` Herbert Xu
2007-10-19 10:03 ` [RFC 2/5] [crypto] geode: relax in busy loop and care about return value Sebastian Siewior
2007-10-21  8:14   ` Herbert Xu
2007-10-22 17:30     ` Sebastian Siewior
2007-10-19 10:03 ` [RFC 3/5] [crypto] geode: move defines into a headerfile Sebastian Siewior
2007-10-21  8:22   ` Herbert Xu
2007-10-19 10:03 ` [RFC 4/5] [crypto] geode: add fallback for unsupported modes Sebastian Siewior
2007-10-21  8:31   ` Herbert Xu
2007-10-22 17:30     ` Sebastian Siewior
2007-11-04 20:59     ` [PATCH] [crypto] geode: add fallback for unsupported modes, take 2 Sebastian Siewior
2007-11-06 19:24       ` Jordan Crouse
2007-11-10 11:31       ` [PATCH] [crypto] " Herbert Xu
2007-11-10 21:28         ` Sebastian Siewior
2007-10-19 10:03 ` [RFC 5/5] [crypto] geode: use proper defines Sebastian Siewior
