* [RFC][PATCH 0/7] crypto: Adding Hash-Encrypt-Hash (HEH)
@ 2016-11-14 21:01 Alex Cope
2016-11-14 21:01 ` [RFC][PATCH 1/7] crypto: skcipher adding skcipher_walk_virt_init Alex Cope
` (5 more replies)
0 siblings, 6 replies; 7+ messages in thread
From: Alex Cope @ 2016-11-14 21:01 UTC (permalink / raw)
To: linux-crypto; +Cc: mhalcrow, edknapp, Alex Cope
This patchset implements HEH, which is currently specified by the
following Internet Draft:
https://tools.ietf.org/html/draft-cope-heh-00
This patchset is a request for comments, and should not be merged at
this time. We would like to wait for further comments on the Internet
Draft before merging this patchset.
Thanks
* [RFC][PATCH 1/7] crypto: skcipher adding skcipher_walk_virt_init
2016-11-14 21:01 [RFC][PATCH 0/7] crypto: Adding Hash-Encrypt-Hash (HEH) Alex Cope
@ 2016-11-14 21:01 ` Alex Cope
2016-11-14 21:01 ` [RFC][PATCH 2/7] crypto: gf128mul - Refactor gf128 overflow macros Alex Cope
` (4 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Alex Cope @ 2016-11-14 21:01 UTC (permalink / raw)
To: linux-crypto; +Cc: mhalcrow, edknapp, Alex Cope, Eric Biggers
Add skcipher_walk_virt_init to allow an skcipher_walk to be
initialized with an explicit length and input/output scatterlists.
This provides functionality similar to blkcipher_walk_init.
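For illustration, a mode implementation might walk a sub-range of
custom scatterlists like this (a minimal sketch modeled on the walk
loop used later in this series; example_crypt is a hypothetical
caller, not part of this patch):

	static int example_crypt(struct skcipher_request *req,
				 struct scatterlist *src,
				 struct scatterlist *dst,
				 unsigned int len)
	{
		struct skcipher_walk walk;
		unsigned int nbytes;
		int err;

		err = skcipher_walk_virt_init(&walk, req, false, src,
					      dst, len);
		while ((nbytes = walk.nbytes)) {
			/* transform nbytes from walk.src.virt.addr
			 * into walk.dst.virt.addr here */
			err = skcipher_walk_done(&walk, 0);
		}
		return err;
	}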
Signed-off-by: Alex Cope <alexcope@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
crypto/skcipher.c | 32 +++++++++++++++++++++++---------
include/crypto/internal/skcipher.h | 4 ++++
2 files changed, 27 insertions(+), 9 deletions(-)
diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index e1633e6..df4b2de 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -447,16 +447,19 @@ static int skcipher_walk_first(struct skcipher_walk *walk)
}
static int skcipher_walk_skcipher(struct skcipher_walk *walk,
- struct skcipher_request *req)
+ struct skcipher_request *req,
+ struct scatterlist *src,
+ struct scatterlist *dst,
+ unsigned int len)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
- scatterwalk_start(&walk->in, req->src);
- scatterwalk_start(&walk->out, req->dst);
+ scatterwalk_start(&walk->in, src);
+ scatterwalk_start(&walk->out, dst);
- walk->in.sg = req->src;
- walk->out.sg = req->dst;
- walk->total = req->cryptlen;
+ walk->in.sg = src;
+ walk->out.sg = dst;
+ walk->total = len;
walk->iv = req->iv;
walk->oiv = req->iv;
@@ -474,17 +477,27 @@ static int skcipher_walk_skcipher(struct skcipher_walk *walk,
int skcipher_walk_virt(struct skcipher_walk *walk,
struct skcipher_request *req, bool atomic)
{
+ return skcipher_walk_virt_init(walk, req, atomic, req->src, req->dst,
+ req->cryptlen);
+}
+EXPORT_SYMBOL_GPL(skcipher_walk_virt);
+
+int skcipher_walk_virt_init(struct skcipher_walk *walk,
+ struct skcipher_request *req, bool atomic,
+ struct scatterlist *src, struct scatterlist *dst,
+ unsigned int len)
+{
int err;
walk->flags &= ~SKCIPHER_WALK_PHYS;
- err = skcipher_walk_skcipher(walk, req);
+ err = skcipher_walk_skcipher(walk, req, src, dst, len);
walk->flags &= atomic ? ~SKCIPHER_WALK_SLEEP : ~0;
return err;
}
-EXPORT_SYMBOL_GPL(skcipher_walk_virt);
+EXPORT_SYMBOL_GPL(skcipher_walk_virt_init);
void skcipher_walk_atomise(struct skcipher_walk *walk)
{
@@ -499,7 +512,8 @@ int skcipher_walk_async(struct skcipher_walk *walk,
INIT_LIST_HEAD(&walk->buffers);
- return skcipher_walk_skcipher(walk, req);
+ return skcipher_walk_skcipher(walk, req, req->src, req->dst,
+ req->cryptlen);
}
EXPORT_SYMBOL_GPL(skcipher_walk_async);
diff --git a/include/crypto/internal/skcipher.h b/include/crypto/internal/skcipher.h
index 26934a6..1173701 100644
--- a/include/crypto/internal/skcipher.h
+++ b/include/crypto/internal/skcipher.h
@@ -144,6 +144,10 @@ int skcipher_walk_done(struct skcipher_walk *walk, int err);
int skcipher_walk_virt(struct skcipher_walk *walk,
struct skcipher_request *req,
bool atomic);
+int skcipher_walk_virt_init(struct skcipher_walk *walk,
+ struct skcipher_request *req,
+ bool atomic, struct scatterlist *src,
+ struct scatterlist *dst, unsigned int len);
void skcipher_walk_atomise(struct skcipher_walk *walk);
int skcipher_walk_async(struct skcipher_walk *walk,
struct skcipher_request *req);
--
2.8.0.rc3.226.g39d4020
* [RFC][PATCH 2/7] crypto: gf128mul - Refactor gf128 overflow macros
2016-11-14 21:01 [RFC][PATCH 0/7] crypto: Adding Hash-Encrypt-Hash (HEH) Alex Cope
2016-11-14 21:01 ` [RFC][PATCH 1/7] crypto: skcipher adding skcipher_walk_virt_init Alex Cope
@ 2016-11-14 21:01 ` Alex Cope
2016-11-14 21:01 ` [RFC][PATCH 3/7] crypto: gf128mul - Add ble multiplication functions Alex Cope
` (3 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Alex Cope @ 2016-11-14 21:01 UTC (permalink / raw)
To: linux-crypto; +Cc: mhalcrow, edknapp, Alex Cope, Eric Biggers
Rename and clean up the overflow macros. Their usage is more general
than their names suggested.
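As a worked example of what these tables encode: in the "be"
convention an overflow byte of 0x01 stands for x^128, which reduces to
x^7 + x^2 + x + 1, i.e. the 16-bit value 0x0087, and an overflow of
0x02 (x^129) reduces to twice that, 0x010e. A hypothetical userspace
check of the macro (assuming xda_be() from this patch is copied in):

	#include <assert.h>

	int main(void)
	{
		assert(xda_be(0x01) == 0x0087); /* x^128 mod p(x) */
		assert(xda_be(0x02) == 0x010e); /* x^129 mod p(x) */
		return 0;
	}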
Signed-off-by: Alex Cope <alexcope@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
crypto/gf128mul.c | 68 +++++++++++++++++++++++++++++++++----------------------
1 file changed, 41 insertions(+), 27 deletions(-)
diff --git a/crypto/gf128mul.c b/crypto/gf128mul.c
index 0594dd6..8b65b1e 100644
--- a/crypto/gf128mul.c
+++ b/crypto/gf128mul.c
@@ -88,33 +88,47 @@
q(0xf8), q(0xf9), q(0xfa), q(0xfb), q(0xfc), q(0xfd), q(0xfe), q(0xff) \
}
-/* Given the value i in 0..255 as the byte overflow when a field element
- in GHASH is multiplied by x^8, this function will return the values that
- are generated in the lo 16-bit word of the field value by applying the
- modular polynomial. The values lo_byte and hi_byte are returned via the
- macro xp_fun(lo_byte, hi_byte) so that the values can be assembled into
- memory as required by a suitable definition of this macro operating on
- the table above
-*/
-
-#define xx(p, q) 0x##p##q
+/*
+ * Given a value i in 0..255 as the byte overflow when a field element
+ * in GF(2^128) is multiplied by x^8, the following macro returns the
+ * 16-bit value that must be XOR-ed into the low-degree end of the
+ * product to reduce it modulo the irreducible polynomial x^128 + x^7 +
+ * x^2 + x + 1.
+ *
+ * There are two versions of the macro, and hence two tables: one for
+ * the "be" convention where the highest-order bit is the coefficient of
+ * the highest-degree polynomial term, and one for the "le" convention
+ * where the highest-order bit is the coefficient of the lowest-degree
+ * polynomial term. In both cases the values are stored in CPU byte
+ * endianness such that the coefficients are ordered consistently across
+ * bytes, i.e. in the "be" table bits 15..0 of the stored value
+ * correspond to the coefficients of x^15..x^0, and in the "le" table
+ * bits 15..0 correspond to the coefficients of x^0..x^15.
+ *
+ * Therefore, provided that the appropriate byte endianness conversions
+ * are done by the multiplication functions (and these must be in place
+ * anyway to support both little endian and big endian CPUs), the "be"
+ * table can be used for multiplications of both "bbe" and "ble"
+ * elements, and the "le" table can be used for multiplications of both
+ * "lle" and "lbe" elements.
+ */
-#define xda_bbe(i) ( \
- (i & 0x80 ? xx(43, 80) : 0) ^ (i & 0x40 ? xx(21, c0) : 0) ^ \
- (i & 0x20 ? xx(10, e0) : 0) ^ (i & 0x10 ? xx(08, 70) : 0) ^ \
- (i & 0x08 ? xx(04, 38) : 0) ^ (i & 0x04 ? xx(02, 1c) : 0) ^ \
- (i & 0x02 ? xx(01, 0e) : 0) ^ (i & 0x01 ? xx(00, 87) : 0) \
+#define xda_be(i) ( \
+ (i & 0x80 ? 0x4380 : 0) ^ (i & 0x40 ? 0x21c0 : 0) ^ \
+ (i & 0x20 ? 0x10e0 : 0) ^ (i & 0x10 ? 0x0870 : 0) ^ \
+ (i & 0x08 ? 0x0438 : 0) ^ (i & 0x04 ? 0x021c : 0) ^ \
+ (i & 0x02 ? 0x010e : 0) ^ (i & 0x01 ? 0x0087 : 0) \
)
-#define xda_lle(i) ( \
- (i & 0x80 ? xx(e1, 00) : 0) ^ (i & 0x40 ? xx(70, 80) : 0) ^ \
- (i & 0x20 ? xx(38, 40) : 0) ^ (i & 0x10 ? xx(1c, 20) : 0) ^ \
- (i & 0x08 ? xx(0e, 10) : 0) ^ (i & 0x04 ? xx(07, 08) : 0) ^ \
- (i & 0x02 ? xx(03, 84) : 0) ^ (i & 0x01 ? xx(01, c2) : 0) \
+#define xda_le(i) ( \
+ (i & 0x80 ? 0xe100 : 0) ^ (i & 0x40 ? 0x7080 : 0) ^ \
+ (i & 0x20 ? 0x3840 : 0) ^ (i & 0x10 ? 0x1c20 : 0) ^ \
+ (i & 0x08 ? 0x0e10 : 0) ^ (i & 0x04 ? 0x0708 : 0) ^ \
+ (i & 0x02 ? 0x0384 : 0) ^ (i & 0x01 ? 0x01c2 : 0) \
)
-static const u16 gf128mul_table_lle[256] = gf128mul_dat(xda_lle);
-static const u16 gf128mul_table_bbe[256] = gf128mul_dat(xda_bbe);
+static const u16 gf128mul_table_le[256] = gf128mul_dat(xda_le);
+static const u16 gf128mul_table_be[256] = gf128mul_dat(xda_be);
/* These functions multiply a field element by x, by x^4 and by x^8
* in the polynomial field representation. It uses 32-bit word operations
@@ -126,7 +140,7 @@ static void gf128mul_x_lle(be128 *r, const be128 *x)
{
u64 a = be64_to_cpu(x->a);
u64 b = be64_to_cpu(x->b);
- u64 _tt = gf128mul_table_lle[(b << 7) & 0xff];
+ u64 _tt = gf128mul_table_le[(b << 7) & 0xff];
r->b = cpu_to_be64((b >> 1) | (a << 63));
r->a = cpu_to_be64((a >> 1) ^ (_tt << 48));
@@ -136,7 +150,7 @@ static void gf128mul_x_bbe(be128 *r, const be128 *x)
{
u64 a = be64_to_cpu(x->a);
u64 b = be64_to_cpu(x->b);
- u64 _tt = gf128mul_table_bbe[a >> 63];
+ u64 _tt = gf128mul_table_be[a >> 63];
r->a = cpu_to_be64((a << 1) | (b >> 63));
r->b = cpu_to_be64((b << 1) ^ _tt);
@@ -146,7 +160,7 @@ void gf128mul_x_ble(be128 *r, const be128 *x)
{
u64 a = le64_to_cpu(x->a);
u64 b = le64_to_cpu(x->b);
- u64 _tt = gf128mul_table_bbe[b >> 63];
+ u64 _tt = gf128mul_table_be[b >> 63];
r->a = cpu_to_le64((a << 1) ^ _tt);
r->b = cpu_to_le64((b << 1) | (a >> 63));
@@ -157,7 +171,7 @@ static void gf128mul_x8_lle(be128 *x)
{
u64 a = be64_to_cpu(x->a);
u64 b = be64_to_cpu(x->b);
- u64 _tt = gf128mul_table_lle[b & 0xff];
+ u64 _tt = gf128mul_table_le[b & 0xff];
x->b = cpu_to_be64((b >> 8) | (a << 56));
x->a = cpu_to_be64((a >> 8) ^ (_tt << 48));
@@ -167,7 +181,7 @@ static void gf128mul_x8_bbe(be128 *x)
{
u64 a = be64_to_cpu(x->a);
u64 b = be64_to_cpu(x->b);
- u64 _tt = gf128mul_table_bbe[a >> 56];
+ u64 _tt = gf128mul_table_be[a >> 56];
x->a = cpu_to_be64((a << 8) | (b >> 56));
x->b = cpu_to_be64((b << 8) ^ _tt);
--
2.8.0.rc3.226.g39d4020
* [RFC][PATCH 3/7] crypto: gf128mul - Add ble multiplication functions
2016-11-14 21:01 [RFC][PATCH 0/7] crypto: Adding Hash-Encrypt-Hash (HEH) Alex Cope
2016-11-14 21:01 ` [RFC][PATCH 1/7] crypto: skcipher adding skcipher_walk_virt_init Alex Cope
2016-11-14 21:01 ` [RFC][PATCH 2/7] crypto: gf128mul - Refactor gf128 overflow macros Alex Cope
@ 2016-11-14 21:01 ` Alex Cope
2016-11-14 21:01 ` [RFC][PATCH 4/7] crypto: shash - Add crypto_grab_shash() and crypto_spawn_shash_alg() Alex Cope
` (2 subsequent siblings)
5 siblings, 0 replies; 7+ messages in thread
From: Alex Cope @ 2016-11-14 21:01 UTC (permalink / raw)
To: linux-crypto; +Cc: mhalcrow, edknapp, Alex Cope, Eric Biggers
Add ble multiplication to gf128mul, and fix up the comments.
The ble multiplication functions multiply GF(2^128) elements in the
ble format. This format is preferable because the bits within each
byte map to polynomial coefficients in the natural order (lowest-order
bit = coefficient of the lowest-degree polynomial term), and the bytes
are stored in little endian order, which matches the endianness of
most modern CPUs.
These new functions will be used by the HEH algorithm.
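To make the bit ordering concrete, here is a standalone model of what
multiplying a ble element by x does (x_ble_model is illustrative only;
it mirrors the existing gf128mul_x_ble()):

	/* Shift the 128-bit little endian value left by one bit; if
	 * the x^127 coefficient falls off the top, fold the reduction
	 * polynomial x^7 + x^2 + x + 1 (0x87) into the low byte. */
	static void x_ble_model(u8 v[16])
	{
		int carry = v[15] >> 7;
		int i;

		for (i = 15; i > 0; i--)
			v[i] = (v[i] << 1) | (v[i - 1] >> 7);
		v[0] <<= 1;
		if (carry)
			v[0] ^= 0x87;
	}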
Signed-off-by: Alex Cope <alexcope@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
crypto/gf128mul.c | 99 ++++++++++++++++++++++++++++++++++++++++++++---
include/crypto/gf128mul.h | 45 +++++++++++----------
2 files changed, 117 insertions(+), 27 deletions(-)
diff --git a/crypto/gf128mul.c b/crypto/gf128mul.c
index 8b65b1e..f3d9f6d 100644
--- a/crypto/gf128mul.c
+++ b/crypto/gf128mul.c
@@ -44,7 +44,7 @@
---------------------------------------------------------------------------
Issue 31/01/2006
- This file provides fast multiplication in GF(128) as required by several
+ This file provides fast multiplication in GF(2^128) as required by several
cryptographic authentication modes
*/
@@ -130,9 +130,10 @@
static const u16 gf128mul_table_le[256] = gf128mul_dat(xda_le);
static const u16 gf128mul_table_be[256] = gf128mul_dat(xda_be);
-/* These functions multiply a field element by x, by x^4 and by x^8
- * in the polynomial field representation. It uses 32-bit word operations
- * to gain speed but compensates for machine endianess and hence works
+/*
+ * The following functions multiply a field element by x or by x^8 in
+ * the polynomial field representation. They use 64-bit word operations
+ * to gain speed but compensate for machine endianness and hence work
* correctly on both styles of machine.
*/
@@ -187,6 +188,16 @@ static void gf128mul_x8_bbe(be128 *x)
x->b = cpu_to_be64((b << 8) ^ _tt);
}
+static void gf128mul_x8_ble(be128 *x)
+{
+ u64 a = le64_to_cpu(x->b);
+ u64 b = le64_to_cpu(x->a);
+ u64 _tt = gf128mul_table_be[a >> 56];
+
+ x->b = cpu_to_le64((a << 8) | (b >> 56));
+ x->a = cpu_to_le64((b << 8) ^ _tt);
+}
+
void gf128mul_lle(be128 *r, const be128 *b)
{
be128 p[8];
@@ -263,9 +274,48 @@ void gf128mul_bbe(be128 *r, const be128 *b)
}
EXPORT_SYMBOL(gf128mul_bbe);
+void gf128mul_ble(be128 *r, const be128 *b)
+{
+ be128 p[8];
+ int i;
+
+ p[0] = *r;
+ for (i = 0; i < 7; ++i)
+ gf128mul_x_ble((be128 *)&p[i + 1], (be128 *)&p[i]);
+
+ memset(r, 0, sizeof(*r));
+ for (i = 0;;) {
+ u8 ch = ((u8 *)b)[15 - i];
+
+ if (ch & 0x80)
+ be128_xor(r, r, &p[7]);
+ if (ch & 0x40)
+ be128_xor(r, r, &p[6]);
+ if (ch & 0x20)
+ be128_xor(r, r, &p[5]);
+ if (ch & 0x10)
+ be128_xor(r, r, &p[4]);
+ if (ch & 0x08)
+ be128_xor(r, r, &p[3]);
+ if (ch & 0x04)
+ be128_xor(r, r, &p[2]);
+ if (ch & 0x02)
+ be128_xor(r, r, &p[1]);
+ if (ch & 0x01)
+ be128_xor(r, r, &p[0]);
+
+ if (++i >= 16)
+ break;
+
+ gf128mul_x8_ble(r);
+ }
+}
+EXPORT_SYMBOL(gf128mul_ble);
+
/* This version uses 64k bytes of table space.
A 16 byte buffer has to be multiplied by a 16 byte key
- value in GF(128). If we consider a GF(128) value in
+ value in GF(2^128). If we consider a GF(2^128) value in
the buffer's lowest byte, we can construct a table of
the 256 16 byte values that result from the 256 values
of this byte. This requires 4096 bytes. But we also
@@ -399,7 +449,7 @@ EXPORT_SYMBOL(gf128mul_64k_bbe);
/* This version uses 4k bytes of table space.
A 16 byte buffer has to be multiplied by a 16 byte key
- value in GF(128). If we consider a GF(128) value in a
+ value in GF(2^128). If we consider a GF(2^128) value in a
single byte, we can construct a table of the 256 16 byte
values that result from the 256 values of this byte.
This requires 4096 bytes. If we take the highest byte in
@@ -457,6 +507,28 @@ struct gf128mul_4k *gf128mul_init_4k_bbe(const be128 *g)
}
EXPORT_SYMBOL(gf128mul_init_4k_bbe);
+struct gf128mul_4k *gf128mul_init_4k_ble(const be128 *g)
+{
+ struct gf128mul_4k *t;
+ int j, k;
+
+ t = kzalloc(sizeof(*t), GFP_KERNEL);
+ if (!t)
+ goto out;
+
+ t->t[1] = *g;
+ for (j = 1; j <= 64; j <<= 1)
+ gf128mul_x_ble(&t->t[j + j], &t->t[j]);
+
+ for (j = 2; j < 256; j += j)
+ for (k = 1; k < j; ++k)
+ be128_xor(&t->t[j + k], &t->t[j], &t->t[k]);
+
+out:
+ return t;
+}
+EXPORT_SYMBOL(gf128mul_init_4k_ble);
+
void gf128mul_4k_lle(be128 *a, struct gf128mul_4k *t)
{
u8 *ap = (u8 *)a;
@@ -487,5 +559,20 @@ void gf128mul_4k_bbe(be128 *a, struct gf128mul_4k *t)
}
EXPORT_SYMBOL(gf128mul_4k_bbe);
+void gf128mul_4k_ble(be128 *a, struct gf128mul_4k *t)
+{
+ u8 *ap = (u8 *)a;
+ be128 r[1];
+ int i = 15;
+
+ *r = t->t[ap[15]];
+ while (i--) {
+ gf128mul_x8_ble(r);
+ be128_xor(r, r, &t->t[ap[i]]);
+ }
+ *a = *r;
+}
+EXPORT_SYMBOL(gf128mul_4k_ble);
+
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Functions for multiplying elements of GF(2^128)");
diff --git a/include/crypto/gf128mul.h b/include/crypto/gf128mul.h
index 7217fe6..230760a 100644
--- a/include/crypto/gf128mul.h
+++ b/include/crypto/gf128mul.h
@@ -43,7 +43,7 @@
---------------------------------------------------------------------------
Issue Date: 31/01/2006
- An implementation of field multiplication in Galois Field GF(128)
+ An implementation of field multiplication in Galois Field GF(2^128)
*/
#ifndef _CRYPTO_GF128MUL_H
@@ -65,7 +65,7 @@
* are left and the lsb's are right. char b[16] is an array and b[0] is
* the first octet.
*
- * 80000000 00000000 00000000 00000000 .... 00000000 00000000 00000000
+ * 10000000 00000000 00000000 00000000 .... 00000000 00000000 00000000
* b[0] b[1] b[2] b[3] b[13] b[14] b[15]
*
* Every bit is a coefficient of some power of X. We can store the bits
@@ -99,21 +99,21 @@
*
* bbe on a little endian machine u32 x[4]:
*
- * MS x[0] LS MS x[1] LS
+ * MS x[0] LS MS x[1] LS
* ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
* 103..96 111.104 119.112 127.120 71...64 79...72 87...80 95...88
*
- * MS x[2] LS MS x[3] LS
+ * MS x[2] LS MS x[3] LS
* ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
* 39...32 47...40 55...48 63...56 07...00 15...08 23...16 31...24
*
* ble on a little endian machine
*
- * MS x[0] LS MS x[1] LS
+ * MS x[0] LS MS x[1] LS
* ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
* 31...24 23...16 15...08 07...00 63...56 55...48 47...40 39...32
*
- * MS x[2] LS MS x[3] LS
+ * MS x[2] LS MS x[3] LS
* ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
* 95...88 87...80 79...72 71...64 127.120 119.112 111.104 103..96
*
@@ -127,7 +127,7 @@
* machines this will automatically be aligned to wordsize and on a 64-bit
* machine also.
*/
-/* Multiply a GF128 field element by x. Field elements are held in arrays
+/* Multiply a GF128 field element by x. Field elements are held in arrays
of bytes in which field bits 8n..8n + 7 are held in byte[n], with lower
indexed bits placed in the more numerically significant bit positions
within bytes.
@@ -135,45 +135,47 @@
On little endian machines the bit indexes translate into the bit
positions within four 32-bit words in the following way
- MS x[0] LS MS x[1] LS
+ MS x[0] LS MS x[1] LS
ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
24...31 16...23 08...15 00...07 56...63 48...55 40...47 32...39
- MS x[2] LS MS x[3] LS
+ MS x[2] LS MS x[3] LS
ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
88...95 80...87 72...79 64...71 120.127 112.119 104.111 96..103
On big endian machines the bit indexes translate into the bit
positions within four 32-bit words in the following way
- MS x[0] LS MS x[1] LS
+ MS x[0] LS MS x[1] LS
ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
00...07 08...15 16...23 24...31 32...39 40...47 48...55 56...63
- MS x[2] LS MS x[3] LS
+ MS x[2] LS MS x[3] LS
ms ls ms ls ms ls ms ls ms ls ms ls ms ls ms ls
64...71 72...79 80...87 88...95 96..103 104.111 112.119 120.127
*/
-/* A slow generic version of gf_mul, implemented for lle and bbe
- * It multiplies a and b and puts the result in a */
+/* A slow generic version of gf_mul, implemented for lle, bbe, and ble.
+ * It multiplies a and b and puts the result in a
+ */
void gf128mul_lle(be128 *a, const be128 *b);
-
void gf128mul_bbe(be128 *a, const be128 *b);
+void gf128mul_ble(be128 *a, const be128 *b);
-/* multiply by x in ble format, needed by XTS */
+/* multiply by x in ble format, needed by XTS and HEH */
void gf128mul_x_ble(be128 *a, const be128 *b);
/* 4k table optimization */
-
struct gf128mul_4k {
be128 t[256];
};
struct gf128mul_4k *gf128mul_init_4k_lle(const be128 *g);
struct gf128mul_4k *gf128mul_init_4k_bbe(const be128 *g);
+struct gf128mul_4k *gf128mul_init_4k_ble(const be128 *g);
void gf128mul_4k_lle(be128 *a, struct gf128mul_4k *t);
void gf128mul_4k_bbe(be128 *a, struct gf128mul_4k *t);
+void gf128mul_4k_ble(be128 *a, struct gf128mul_4k *t);
static inline void gf128mul_free_4k(struct gf128mul_4k *t)
{
@@ -181,16 +183,17 @@ static inline void gf128mul_free_4k(struct gf128mul_4k *t)
}
-/* 64k table optimization, implemented for lle and bbe */
+/* 64k table optimization, implemented for lle, ble, and bbe */
struct gf128mul_64k {
struct gf128mul_4k *t[16];
};
-/* first initialize with the constant factor with which you
- * want to multiply and then call gf128_64k_lle with the other
- * factor in the first argument, the table in the second and a
- * scratch register in the third. Afterwards *a = *r. */
+/* First initialize with the constant factor with which you
+ * want to multiply and then call gf128mul_64k_bbe with the other
+ * factor in the first argument, and the table in the second.
+ * Afterwards, the result is stored in *a.
+ */
struct gf128mul_64k *gf128mul_init_64k_lle(const be128 *g);
struct gf128mul_64k *gf128mul_init_64k_bbe(const be128 *g);
void gf128mul_free_64k(struct gf128mul_64k *t);
--
2.8.0.rc3.226.g39d4020
* [RFC][PATCH 4/7] crypto: shash - Add crypto_grab_shash() and crypto_spawn_shash_alg()
2016-11-14 21:01 [RFC][PATCH 0/7] crypto: Adding Hash-Encrypt-Hash (HEH) Alex Cope
` (2 preceding siblings ...)
2016-11-14 21:01 ` [RFC][PATCH 3/7] crypto: gf128mul - Add ble multiplication functions Alex Cope
@ 2016-11-14 21:01 ` Alex Cope
2016-11-14 21:01 ` [RFC][PATCH 5/7] crypto: heh - Add Hash Encrypt Hash (HEH) algorithm Alex Cope
2016-11-14 21:01 ` [RFC][PATCH 6/7] crypto: testmgr - Add test vectors for HEH Alex Cope
5 siblings, 0 replies; 7+ messages in thread
From: Alex Cope @ 2016-11-14 21:01 UTC (permalink / raw)
To: linux-crypto; +Cc: mhalcrow, edknapp, Alex Cope, Eric Biggers
Analogous to crypto_grab_skcipher() and crypto_spawn_skcipher_alg(),
these are useful for algorithms that need to use a shash sub-algorithm,
possibly in addition to other sub-algorithms.
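A sketch of the intended usage inside a template's ->create()
function, modeled on the HEH instantiation later in this series (the
error labels and surrounding context are illustrative):

	err = crypto_grab_shash(&ctx->cmac, cmac_name, 0,
				CRYPTO_ALG_ASYNC);
	if (err)
		goto err_free_inst;
	cmac = crypto_spawn_shash_alg(&ctx->cmac);
	/* ... validate cmac->digestsize etc.; call crypto_drop_shash()
	 * on the error path ... */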
Signed-off-by: Alex Cope <alexcope@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
crypto/shash.c | 8 ++++++++
include/crypto/internal/hash.h | 8 ++++++++
2 files changed, 16 insertions(+)
diff --git a/crypto/shash.c b/crypto/shash.c
index a051541..55a5535 100644
--- a/crypto/shash.c
+++ b/crypto/shash.c
@@ -536,6 +536,14 @@ void shash_free_instance(struct crypto_instance *inst)
}
EXPORT_SYMBOL_GPL(shash_free_instance);
+int crypto_grab_shash(struct crypto_shash_spawn *spawn,
+ const char *name, u32 type, u32 mask)
+{
+ spawn->base.frontend = &crypto_shash_type;
+ return crypto_grab_spawn(&spawn->base, name, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_grab_shash);
+
int crypto_init_shash_spawn(struct crypto_shash_spawn *spawn,
struct shash_alg *alg,
struct crypto_instance *inst)
diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h
index 1d4f365..54e4425 100644
--- a/include/crypto/internal/hash.h
+++ b/include/crypto/internal/hash.h
@@ -99,6 +99,8 @@ int shash_register_instance(struct crypto_template *tmpl,
struct shash_instance *inst);
void shash_free_instance(struct crypto_instance *inst);
+int crypto_grab_shash(struct crypto_shash_spawn *spawn,
+ const char *name, u32 type, u32 mask);
int crypto_init_shash_spawn(struct crypto_shash_spawn *spawn,
struct shash_alg *alg,
struct crypto_instance *inst);
@@ -108,6 +110,12 @@ static inline void crypto_drop_shash(struct crypto_shash_spawn *spawn)
crypto_drop_spawn(&spawn->base);
}
+static inline struct shash_alg *crypto_spawn_shash_alg(
+ struct crypto_shash_spawn *spawn)
+{
+ return container_of(spawn->base.alg, struct shash_alg, base);
+}
+
struct shash_alg *shash_attr_alg(struct rtattr *rta, u32 type, u32 mask);
int shash_ahash_update(struct ahash_request *req, struct shash_desc *desc);
--
2.8.0.rc3.226.g39d4020
* [RFC][PATCH 5/7] crypto: heh - Add Hash Encrypt Hash (HEH) algorithm
2016-11-14 21:01 [RFC][PATCH 0/7] crypto: Adding Hash-Encrypt-Hash (HEH) Alex Cope
` (3 preceding siblings ...)
2016-11-14 21:01 ` [RFC][PATCH 4/7] crypto: shash - Add crypto_grab_shash() and crypto_spawn_shash_alg() Alex Cope
@ 2016-11-14 21:01 ` Alex Cope
2016-11-14 21:01 ` [RFC][PATCH 6/7] crypto: testmgr - Add test vectors for HEH Alex Cope
5 siblings, 0 replies; 7+ messages in thread
From: Alex Cope @ 2016-11-14 21:01 UTC (permalink / raw)
To: linux-crypto; +Cc: mhalcrow, edknapp, Alex Cope, Eric Biggers
Hash Encrypt Hash (HEH) is a proposed block cipher mode of operation
which extends the strong pseudo-random permutation property of block
ciphers (e.g. AES) to arbitrary-length input strings. This provides a
stronger notion of security than existing block cipher modes of
operation (e.g. CBC, CTR, XTS), though it is less performant. It uses
two keyed invertible hash functions with a layer of ECB encryption
applied in between.
This patch adds HEH as a skcipher. Support for HEH as an AEAD is not
yet implemented.
HEH will use an existing accelerated ecb(block_cipher) implementation
for the encrypt step, if one is available. Accelerated versions of the
hash step are planned but not yet implemented.
HEH will be used for filename encryption in ext4 and f2fs.
The algorithm is currently specified by the following Internet Draft:
https://tools.ietf.org/html/draft-cope-heh-00
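At a high level, heh_crypt() below chains the three phases (an outline
of the code, not literal kernel API calls):

	generate_betas(req, &beta1, &beta2); /* CMAC of nonce, lengths */
	heh_hash(req, &beta1);               /* invertible hash layer */
	heh_ecb(req, decrypt);               /* ECB layer, chaining to */
	heh_hash_inv(req, &beta2);           /* the inverse hash layer */

Decryption runs the same pipeline with beta1 and beta2 swapped.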
Signed-off-by: Alex Cope <alexcope@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
crypto/Kconfig | 17 ++
crypto/Makefile | 1 +
crypto/heh.c | 814 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 832 insertions(+)
create mode 100644 crypto/heh.c
diff --git a/crypto/Kconfig b/crypto/Kconfig
index 1db2a19..78b0e93 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -316,6 +316,23 @@ config CRYPTO_CBC
CBC: Cipher Block Chaining mode
This block cipher algorithm is required for IPSec.
+config CRYPTO_HEH
+ tristate "HEH support"
+ select CRYPTO_CMAC
+ select CRYPTO_ECB
+ select CRYPTO_GF128MUL
+ select CRYPTO_MANAGER
+ help
+ HEH: Hash Encrypt Hash mode
+ HEH is a proposed block cipher mode of operation which extends the
+ strong pseudo-random permutation (SPRP) property of block ciphers to
+ arbitrary-length input strings. This provides a stronger notion of
+ security than existing block cipher modes of operation (e.g. CBC, CTR,
+ XTS), though it is less performant. Applications include disk
+ encryption and encryption of file names and contents. Currently, this
+ implementation only provides a symmetric cipher interface, so it can't
+ yet be used as an AEAD.
+
config CRYPTO_CTR
tristate "CTR support"
select CRYPTO_BLKCIPHER
diff --git a/crypto/Makefile b/crypto/Makefile
index 82ffeee..1458d3f 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -78,6 +78,7 @@ obj-$(CONFIG_CRYPTO_TGR192) += tgr192.o
obj-$(CONFIG_CRYPTO_GF128MUL) += gf128mul.o
obj-$(CONFIG_CRYPTO_ECB) += ecb.o
obj-$(CONFIG_CRYPTO_CBC) += cbc.o
+obj-$(CONFIG_CRYPTO_HEH) += heh.o
obj-$(CONFIG_CRYPTO_PCBC) += pcbc.o
obj-$(CONFIG_CRYPTO_CTS) += cts.o
obj-$(CONFIG_CRYPTO_LRW) += lrw.o
diff --git a/crypto/heh.c b/crypto/heh.c
new file mode 100644
index 0000000..efd49cc
--- /dev/null
+++ b/crypto/heh.c
@@ -0,0 +1,814 @@
+/*
+ * HEH: Hash Encrypt Hash mode
+ *
+ * Copyright (c) 2016 Google Inc.
+ *
+ * Authors:
+ * Alex Cope <alexcope@google.com>
+ * Eric Biggers <ebiggers@google.com>
+ */
+
+/*
+ * Hash Encrypt Hash (HEH) is a proposed block cipher mode of operation which
+ * extends the strong pseudo-random permutation (SPRP) property of block ciphers
+ * (e.g. AES) to arbitrary length input strings. It uses two keyed invertible
+ * hash functions with a layer of ECB encryption applied in-between. The
+ * algorithm is specified by the following Internet Draft:
+ *
+ * https://tools.ietf.org/html/draft-cope-heh-00
+ *
+ * Although HEH can be used as either a regular symmetric cipher or as an AEAD,
+ * currently this module only provides it as a symmetric cipher (skcipher).
+ * Additionally, only 48-byte keys and 16-byte nonces are supported.
+ */
+
+#include <crypto/gf128mul.h>
+#include <crypto/internal/hash.h>
+#include <crypto/internal/skcipher.h>
+#include <crypto/scatterwalk.h>
+#include <crypto/skcipher.h>
+#include "internal.h"
+
+/*
+ * The block size is the size of GF(2^128) elements and also the required block
+ * size of the underlying block cipher.
+ */
+#define HEH_BLOCK_SIZE 16
+
+/* Required key size in bytes */
+#define HEH_KEY_SIZE 48
+#define HEH_PRF_KEY_OFFSET 16
+#define HEH_BLK_KEY_OFFSET 32
+
+/*
+ * Macro to get the offset in bytes to the last full block
+ * (or equivalently the length of all full blocks excluding the last)
+ */
+#define HEH_TAIL_OFFSET(len) (((len) - HEH_BLOCK_SIZE) & ~(HEH_BLOCK_SIZE - 1))
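+/* Example: for len == 63, HEH_TAIL_OFFSET(len) == 32, so bytes 32..47 are
+ * the last full block and bytes 48..62 form the 15-byte partial block. */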
+
+struct heh_instance_ctx {
+ struct crypto_shash_spawn cmac;
+ struct crypto_skcipher_spawn ecb;
+};
+
+struct heh_tfm_ctx {
+ struct crypto_shash *cmac;
+ struct crypto_skcipher *ecb;
+ struct gf128mul_4k *tau_key;
+};
+
+struct heh_cmac_data {
+ u8 nonce[HEH_BLOCK_SIZE];
+ __le32 nonce_length;
+ __le32 aad_length;
+ __le32 message_length;
+ __le32 padding;
+};
+
+struct heh_req_ctx { /* aligned to alignmask */
+ be128 beta1_key;
+ be128 beta2_key;
+ union {
+ struct {
+ struct heh_cmac_data data;
+ struct shash_desc desc;
+ /* + crypto_shash_descsize(cmac) */
+ } cmac;
+ struct {
+ u8 tail[2 * HEH_BLOCK_SIZE];
+ int (*crypt)(struct skcipher_request *);
+ struct scatterlist tmp_sgl[2];
+ struct skcipher_request req;
+ /* + crypto_skcipher_reqsize(ecb) */
+ } ecb;
+ } u;
+};
+
+static inline struct heh_req_ctx *heh_req_ctx(struct skcipher_request *req)
+{
+ unsigned int alignmask = crypto_skcipher_alignmask(
+ crypto_skcipher_reqtfm(req));
+
+ return (void *)PTR_ALIGN((u8 *)skcipher_request_ctx(req),
+ alignmask + 1);
+}
+
+static inline void async_done(struct crypto_async_request *areq, int err,
+ int (*next_step)(struct skcipher_request *, u32))
+{
+ struct skcipher_request *req = areq->data;
+
+ if (err)
+ goto out;
+
+ err = next_step(req, req->base.flags & ~CRYPTO_TFM_REQ_MAY_SLEEP);
+ if (err == -EINPROGRESS ||
+ (err == -EBUSY && (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG)))
+ return;
+out:
+ skcipher_request_complete(req, err);
+}
+
+/*
+ * Generate the per-message "beta" keys used by the hashing layers of HEH. The
+ * first beta key is the CMAC of the nonce, the additional authenticated data
+ * (AAD), and the lengths in bytes of the nonce, AAD, and message. The nonce
+ * and AAD are each zero-padded to the next 16-byte block boundary, and the
+ * lengths are serialized as 4-byte little endian integers and zero-padded to
+ * the next 16-byte block boundary. The second beta key is the first one
+ * interpreted as an element in GF(2^128) and multiplied by x.
+ *
+ * Note that because the nonce and AAD may, in general, be variable-length, the
+ * key generation must be done by a pseudo-random function (PRF) on
+ * variable-length inputs. CBC-MAC does not satisfy this, as it is only a PRF
+ * on fixed-length inputs. CMAC remedies this flaw. Including the lengths of
+ * the nonce, AAD, and message is also critical to avoid collisions.
+ *
+ * That being said, this implementation does not yet operate as an AEAD and
+ * therefore there is never any AAD, nor are variable-length nonces supported.
+ */
+static int generate_betas(struct skcipher_request *req,
+ be128 *beta1_key, be128 *beta2_key)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct heh_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct heh_req_ctx *rctx = heh_req_ctx(req);
+ struct heh_cmac_data *data = &rctx->u.cmac.data;
+ struct shash_desc *desc = &rctx->u.cmac.desc;
+ int err;
+
+ BUILD_BUG_ON(sizeof(*data) != HEH_BLOCK_SIZE + 16);
+ memcpy(data->nonce, req->iv, HEH_BLOCK_SIZE);
+ data->nonce_length = cpu_to_le32(HEH_BLOCK_SIZE);
+ data->aad_length = cpu_to_le32(0);
+ data->message_length = cpu_to_le32(req->cryptlen);
+ data->padding = cpu_to_le32(0);
+
+ desc->tfm = ctx->cmac;
+ desc->flags = req->base.flags;
+
+ err = crypto_shash_digest(desc, (const u8 *)data, sizeof(*data),
+ (u8 *)beta1_key);
+ if (err)
+ return err;
+
+ gf128mul_x_ble(beta2_key, beta1_key);
+ return 0;
+}
+
+/*
+ * Evaluation of a polynomial over GF(2^128) using Horner's rule. The
+ * polynomial is evaluated at 'point'. The polynomial's coefficients are taken
+ * from 'coeffs_sgl' and are for terms with consecutive descending degree ending
+ * at degree 1. 'bytes_of_coeffs' is 16 times the number of terms.
+ */
+static be128 evaluate_polynomial(struct gf128mul_4k *point,
+ struct scatterlist *coeffs_sgl,
+ unsigned int bytes_of_coeffs)
+{
+ be128 value = {0};
+ struct sg_mapping_iter miter;
+ unsigned int remaining = bytes_of_coeffs;
+ unsigned int needed = 0;
+
+ sg_miter_start(&miter, coeffs_sgl, sg_nents(coeffs_sgl),
+ SG_MITER_FROM_SG | SG_MITER_ATOMIC);
+ while (remaining) {
+ be128 coeff;
+ const u8 *src;
+ unsigned int srclen;
+ u8 *dst = (u8 *)&value;
+
+ /*
+ * Note: scatterlist elements are not necessarily evenly
+ * divisible into blocks, nor are they necessarily aligned to
+ * __alignof__(be128).
+ */
+ sg_miter_next(&miter);
+
+ src = miter.addr;
+ srclen = min_t(unsigned int, miter.length, remaining);
+ remaining -= srclen;
+
+ if (needed) {
+ unsigned int n = min(srclen, needed);
+ u8 *pos = dst + (HEH_BLOCK_SIZE - needed);
+
+ needed -= n;
+ srclen -= n;
+
+ while (n--)
+ *pos++ ^= *src++;
+
+ if (!needed)
+ gf128mul_4k_ble(&value, point);
+ }
+
+ while (srclen >= HEH_BLOCK_SIZE) {
+ memcpy(&coeff, src, HEH_BLOCK_SIZE);
+ be128_xor(&value, &value, &coeff);
+ gf128mul_4k_ble(&value, point);
+ src += HEH_BLOCK_SIZE;
+ srclen -= HEH_BLOCK_SIZE;
+ }
+
+ if (srclen) {
+ needed = HEH_BLOCK_SIZE - srclen;
+ do {
+ *dst++ ^= *src++;
+ } while (--srclen);
+ }
+ }
+ sg_miter_stop(&miter);
+ return value;
+}
+
+/*
+ * Split the message into 16 byte blocks, padding out the last block, and use
+ * the blocks as coefficients in the evaluation of a polynomial over GF(2^128)
+ * at the secret point 'tau_key'. For ease of implementing the higher-level
+ * heh_hash_inv() function, the constant and degree-1 coefficients are swapped.
+ *
+ * Mathematically, compute:
+ * t^N * m_0 + ... + t^2 * m_{N-2} + t * m_N + m_{N-1}
+ *
+ * where:
+ * t is tau_key
+ * N is the number of full blocks in the message
+ * m_i is the i-th full block in the message for i = 0 to N-1 inclusive
+ * m_N is the (possibly empty) partial block of the message padded up to 16
+ * bytes with a 0x01 byte followed by 0x00 bytes
+ *
+ * Note that when the message length is a multiple of 16, m_N is composed
+ * entirely of padding, i.e. 0x0100...00.
+ */
+static be128 poly_hash(struct crypto_skcipher *tfm, struct scatterlist *sgl,
+ unsigned int len)
+{
+ struct heh_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+ unsigned int tail_offset = HEH_TAIL_OFFSET(len);
+ unsigned int tail_len = len - tail_offset;
+ be128 hash;
+ be128 tail[2];
+
+ /* Handle all full blocks except the last */
+ hash = evaluate_polynomial(ctx->tau_key, sgl, tail_offset);
+
+ /* Handle the last full block and the partial block */
+
+ scatterwalk_map_and_copy(tail, sgl, tail_offset, tail_len, 0);
+ *((u8 *)tail + tail_len) = 0x01;
+ memset((u8 *)tail + tail_len + 1, 0, sizeof(tail) - 1 - tail_len);
+
+ be128_xor(&hash, &hash, &tail[1]);
+ gf128mul_4k_ble(&hash, ctx->tau_key);
+ be128_xor(&hash, &hash, &tail[0]);
+ return hash;
+}
+
+/*
+ * Transform all full blocks except the last.
+ * This is used by both the hash and inverse hash phases.
+ */
+static int heh_tfm_blocks(struct skcipher_request *req,
+ struct scatterlist *src_sgl,
+ struct scatterlist *dst_sgl, unsigned int len,
+ const be128 *hash, const be128 *beta_key)
+{
+ struct skcipher_walk walk;
+ be128 e = *beta_key;
+ int err;
+ unsigned int nbytes;
+
+ err = skcipher_walk_virt_init(&walk, req, false, src_sgl, dst_sgl, len);
+ while ((nbytes = walk.nbytes)) {
+ const be128 *src = (be128 *)walk.src.virt.addr;
+ be128 *dst = (be128 *)walk.dst.virt.addr;
+
+ do {
+ gf128mul_x_ble(&e, &e);
+ be128_xor(dst, src, hash);
+ be128_xor(dst, dst, &e);
+ src++;
+ dst++;
+ } while ((nbytes -= HEH_BLOCK_SIZE) >= HEH_BLOCK_SIZE);
+ err = skcipher_walk_done(&walk, nbytes);
+ }
+ return err;
+}
+
+/*
+ * The hash phase of HEH. Given a message, compute:
+ *
+ * (m_0 + H, ..., m_{N-2} + H, H, m_N) + (xb, x^2b, ..., x^{N-1}b, b, 0)
+ *
+ * where:
+ * N is the number of full blocks in the message
+ * m_i is the i-th full block in the message for i = 0 to N-1 inclusive
+ * m_N is the unpadded partial block, possibly empty
+ * H is the poly_hash() of the message, keyed by tau_key
+ * b is beta_key
+ * x is the element x in our representation of GF(2^128)
+ *
+ * Note that the partial block remains unchanged, but it does affect the result
+ * of poly_hash() and therefore the transformation of all the full blocks.
+ */
+static int heh_hash(struct skcipher_request *req, const be128 *beta_key)
+{
+ be128 hash;
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ unsigned int tail_offset = HEH_TAIL_OFFSET(req->cryptlen);
+ unsigned int partial_len = req->cryptlen % HEH_BLOCK_SIZE;
+ int err;
+
+ /* poly_hash() the full message including the partial block */
+ hash = poly_hash(tfm, req->src, req->cryptlen);
+
+ /* Transform all full blocks except the last */
+ err = heh_tfm_blocks(req, req->src, req->dst, tail_offset, &hash,
+ beta_key);
+ if (err)
+ return err;
+
+ /* Set the last full block to hash XOR beta_key */
+ be128_xor(&hash, &hash, beta_key);
+ scatterwalk_map_and_copy(&hash, req->dst, tail_offset, HEH_BLOCK_SIZE,
+ 1);
+
+ /* Copy the partial block if needed */
+ if (partial_len != 0 && req->src != req->dst) {
+ unsigned int offs = tail_offset + HEH_BLOCK_SIZE;
+
+ scatterwalk_map_and_copy(&hash, req->src, offs, partial_len, 0);
+ scatterwalk_map_and_copy(&hash, req->dst, offs, partial_len, 1);
+ }
+ return 0;
+}
+
+/*
+ * The inverse hash phase of HEH. This undoes the result of heh_hash().
+ */
+static int heh_hash_inv(struct skcipher_request *req, const be128 *beta_key)
+{
+ be128 hash;
+ be128 tmp;
+ struct scatterlist tmp_sgl[2];
+ struct scatterlist *tail_sgl;
+ unsigned int len = req->cryptlen;
+ unsigned int tail_offset = HEH_TAIL_OFFSET(len);
+ struct scatterlist *sgl = req->dst;
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ int err;
+
+ /*
+ * The last full block was computed as hash XOR beta_key, so XOR it with
+ * beta_key to recover hash.
+ */
+ tail_sgl = scatterwalk_ffwd(tmp_sgl, sgl, tail_offset);
+ scatterwalk_map_and_copy(&hash, tail_sgl, 0, HEH_BLOCK_SIZE, 0);
+ be128_xor(&hash, &hash, beta_key);
+
+ /* Transform all full blocks except the last */
+ err = heh_tfm_blocks(req, sgl, sgl, tail_offset, &hash, beta_key);
+ if (err)
+ return err;
+
+ /*
+ * Recover the last full block. We know 'hash', i.e. the poly_hash() of
+ * the original message. The last full block was the constant term
+ * of the polynomial. To recover the last full block, temporarily zero
+ * it, compute the poly_hash(), and take the difference from 'hash'.
+ */
+ memset(&tmp, 0, sizeof(tmp));
+ scatterwalk_map_and_copy(&tmp, tail_sgl, 0, HEH_BLOCK_SIZE, 1);
+ tmp = poly_hash(tfm, sgl, len);
+ be128_xor(&tmp, &tmp, &hash);
+ scatterwalk_map_and_copy(&tmp, tail_sgl, 0, HEH_BLOCK_SIZE, 1);
+ return 0;
+}
+
+static int heh_hash_inv_step(struct skcipher_request *req, u32 flags)
+{
+ struct heh_req_ctx *rctx = heh_req_ctx(req);
+
+ return heh_hash_inv(req, &rctx->beta2_key);
+}
+
+static void heh_ecb_tail_done(struct crypto_async_request *areq, int err)
+{
+ return async_done(areq, err, heh_hash_inv_step);
+}
+
+static int heh_ecb_tail(struct skcipher_request *req, u32 flags)
+{
+ struct heh_req_ctx *rctx = heh_req_ctx(req);
+ unsigned int partial_len = req->cryptlen % HEH_BLOCK_SIZE;
+ struct scatterlist *tail_sgl;
+ int err;
+
+ if (partial_len == 0) /* no partial block? */
+ goto next_step;
+
+ /*
+ * Extract the already encrypted/decrypted last full block and the not
+ * yet encrypted/decrypted partial block. The former will be used as a
+ * pad to encrypt/decrypt the partial block.
+ */
+ tail_sgl = scatterwalk_ffwd(rctx->u.ecb.tmp_sgl, req->dst,
+ HEH_TAIL_OFFSET(req->cryptlen));
+ scatterwalk_map_and_copy(rctx->u.ecb.tail, tail_sgl, 0,
+ HEH_BLOCK_SIZE + partial_len, 0);
+
+ /* Encrypt/decrypt the partial block using the pad */
+ crypto_xor(&rctx->u.ecb.tail[HEH_BLOCK_SIZE], rctx->u.ecb.tail,
+ partial_len);
+ scatterwalk_map_and_copy(&rctx->u.ecb.tail[HEH_BLOCK_SIZE], tail_sgl,
+ HEH_BLOCK_SIZE, partial_len, 1);
+
+ /* Encrypt/decrypt the last full block again */
+ skcipher_request_set_callback(&rctx->u.ecb.req, flags,
+ heh_ecb_tail_done, req);
+ skcipher_request_set_crypt(&rctx->u.ecb.req, tail_sgl, tail_sgl,
+ HEH_BLOCK_SIZE, NULL);
+ err = rctx->u.ecb.crypt(&rctx->u.ecb.req);
+ if (err)
+ return err;
+next_step:
+ return heh_hash_inv_step(req, flags);
+}
+
+static void heh_ecb_full_done(struct crypto_async_request *areq, int err)
+{
+ return async_done(areq, err, heh_ecb_tail);
+}
+
+/*
+ * The encrypt phase of HEH. This uses ECB encryption, with special handling
+ * for the partial block at the end if any. The source data is already in
+ * req->dst, so the encryption happens in-place.
+ *
+ * After the encrypt phase we continue on to the inverse hash phase. The
+ * function calls are chained to support asynchronous ECB algorithms.
+ */
+static int heh_ecb(struct skcipher_request *req, bool decrypt)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct heh_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct heh_req_ctx *rctx = heh_req_ctx(req);
+ struct skcipher_request *ecb_req = &rctx->u.ecb.req;
+ unsigned int full_len = HEH_TAIL_OFFSET(req->cryptlen) + HEH_BLOCK_SIZE;
+
+ rctx->u.ecb.crypt = decrypt ? crypto_skcipher_decrypt :
+ crypto_skcipher_encrypt;
+
+ /* Encrypt/decrypt all full blocks */
+ skcipher_request_set_tfm(ecb_req, ctx->ecb);
+ skcipher_request_set_callback(ecb_req, req->base.flags,
+ heh_ecb_full_done, req);
+ skcipher_request_set_crypt(ecb_req, req->dst, req->dst, full_len, NULL);
+ return rctx->u.ecb.crypt(ecb_req) ?: heh_ecb_tail(req, req->base.flags);
+}
+
+static int heh_crypt(struct skcipher_request *req, bool decrypt)
+{
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+ struct heh_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct heh_req_ctx *rctx = heh_req_ctx(req);
+ int err;
+
+ /* Inputs must be at least one full block */
+ if (req->cryptlen < HEH_BLOCK_SIZE)
+ return -EINVAL;
+
+ /* Key must have been set */
+ if (!ctx->tau_key)
+ return -ENOKEY;
+
+ err = generate_betas(req, &rctx->beta1_key, &rctx->beta2_key);
+ if (err)
+ return err;
+
+ if (decrypt)
+ swap(rctx->beta1_key, rctx->beta2_key);
+
+ err = heh_hash(req, &rctx->beta1_key);
+ if (err)
+ return err;
+
+ return heh_ecb(req, decrypt);
+}
+
+static int heh_encrypt(struct skcipher_request *req)
+{
+ return heh_crypt(req, false);
+}
+
+static int heh_decrypt(struct skcipher_request *req)
+{
+ return heh_crypt(req, true);
+}
+
+static int heh_setkey(struct crypto_skcipher *parent, const u8 *key,
+ unsigned int keylen)
+{
+ struct heh_tfm_ctx *ctx = crypto_skcipher_ctx(parent);
+ struct crypto_shash *cmac = ctx->cmac;
+ struct crypto_skcipher *ecb = ctx->ecb;
+ const u8 *prf_key, *blk_key;
+ int err;
+
+ if (keylen != HEH_KEY_SIZE) {
+ crypto_skcipher_set_flags(parent, CRYPTO_TFM_RES_BAD_KEY_LEN);
+ return -EINVAL;
+ }
+
+ prf_key = key + HEH_PRF_KEY_OFFSET;
+ blk_key = key + HEH_BLK_KEY_OFFSET;
+
+ /* tau_key */
+ if (ctx->tau_key)
+ gf128mul_free_4k(ctx->tau_key);
+ ctx->tau_key = gf128mul_init_4k_ble((const be128 *)key);
+ if (!ctx->tau_key)
+ return -ENOMEM;
+
+ /* prf_key */
+ crypto_shash_clear_flags(cmac, CRYPTO_TFM_REQ_MASK);
+ crypto_shash_set_flags(cmac, crypto_skcipher_get_flags(parent) &
+ CRYPTO_TFM_REQ_MASK);
+ err = crypto_shash_setkey(cmac, prf_key, 16);
+ crypto_skcipher_set_flags(parent, crypto_shash_get_flags(cmac) &
+ CRYPTO_TFM_RES_MASK);
+ if (err)
+ return err;
+
+ /* blk_key */
+ crypto_skcipher_clear_flags(ecb, CRYPTO_TFM_REQ_MASK);
+ crypto_skcipher_set_flags(ecb, crypto_skcipher_get_flags(parent) &
+ CRYPTO_TFM_REQ_MASK);
+ err = crypto_skcipher_setkey(ecb, blk_key, 16);
+ crypto_skcipher_set_flags(parent, crypto_skcipher_get_flags(ecb) &
+ CRYPTO_TFM_RES_MASK);
+ return err;
+}
+
+static int heh_init_tfm(struct crypto_skcipher *tfm)
+{
+ struct skcipher_instance *inst = skcipher_alg_instance(tfm);
+ struct heh_instance_ctx *ictx = skcipher_instance_ctx(inst);
+ struct heh_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct crypto_shash *cmac;
+ struct crypto_skcipher *ecb;
+ unsigned int reqsize;
+ int err;
+
+ cmac = crypto_spawn_shash(&ictx->cmac);
+ if (IS_ERR(cmac))
+ return PTR_ERR(cmac);
+
+ ecb = crypto_spawn_skcipher(&ictx->ecb);
+ err = PTR_ERR(ecb);
+ if (IS_ERR(ecb))
+ goto err_free_cmac;
+
+ ctx->cmac = cmac;
+ ctx->ecb = ecb;
+
+ reqsize = crypto_skcipher_alignmask(tfm) &
+ ~(crypto_tfm_ctx_alignment() - 1);
+ reqsize += max(offsetof(struct heh_req_ctx, u.cmac.desc) +
+ sizeof(struct shash_desc) +
+ crypto_shash_descsize(cmac),
+ offsetof(struct heh_req_ctx, u.ecb.req) +
+ sizeof(struct skcipher_request) +
+ crypto_skcipher_reqsize(ecb));
+ crypto_skcipher_set_reqsize(tfm, reqsize);
+ return 0;
+
+err_free_cmac:
+ crypto_free_shash(cmac);
+ return err;
+}
+
+static void heh_exit_tfm(struct crypto_skcipher *tfm)
+{
+ struct heh_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+ gf128mul_free_4k(ctx->tau_key);
+ crypto_free_shash(ctx->cmac);
+ crypto_free_skcipher(ctx->ecb);
+}
+
+static void heh_free_instance(struct skcipher_instance *inst)
+{
+ struct heh_instance_ctx *ctx = skcipher_instance_ctx(inst);
+
+ crypto_drop_shash(&ctx->cmac);
+ crypto_drop_skcipher(&ctx->ecb);
+ kfree(inst);
+}
+
+/*
+ * Create an instance of HEH as a skcipher.
+ *
+ * This relies on underlying CMAC and ECB algorithms, usually cmac(aes) and
+ * ecb(aes). For performance reasons we support asynchronous ECB algorithms.
+ * However, we do not yet support asynchronous CMAC algorithms because CMAC is
+ * only used on a small fixed amount of data per request, independent of the
+ * request length. This would change if AEAD or variable-length nonce support
+ * were to be exposed.
+ */
+static int heh_create_common(struct crypto_template *tmpl, struct rtattr **tb,
+ const char *full_name, const char *cmac_name,
+ const char *ecb_name)
+{
+ struct crypto_attr_type *algt;
+ struct skcipher_instance *inst;
+ struct heh_instance_ctx *ctx;
+ struct shash_alg *cmac;
+ struct skcipher_alg *ecb;
+ int err;
+
+ algt = crypto_get_attr_type(tb);
+ if (IS_ERR(algt))
+ return PTR_ERR(algt);
+
+ /* User must be asking for something compatible with skcipher */
+ if ((algt->type ^ CRYPTO_ALG_TYPE_SKCIPHER) & algt->mask)
+ return -EINVAL;
+
+ /* Allocate the skcipher instance */
+ inst = kzalloc(sizeof(*inst) + sizeof(*ctx), GFP_KERNEL);
+ if (!inst)
+ return -ENOMEM;
+
+ ctx = skcipher_instance_ctx(inst);
+
+ /* Set up the cmac and ecb spawns */
+
+ ctx->cmac.base.inst = skcipher_crypto_instance(inst);
+ err = crypto_grab_shash(&ctx->cmac, cmac_name, 0, CRYPTO_ALG_ASYNC);
+ if (err)
+ goto err_free_inst;
+ cmac = crypto_spawn_shash_alg(&ctx->cmac);
+
+ ctx->ecb.base.inst = skcipher_crypto_instance(inst);
+ err = crypto_grab_skcipher(&ctx->ecb, ecb_name, 0,
+ crypto_requires_sync(algt->type,
+ algt->mask));
+ if (err)
+ goto err_drop_cmac;
+ ecb = crypto_spawn_skcipher_alg(&ctx->ecb);
+
+ /* HEH only supports block ciphers with 16 byte block size */
+ err = -EINVAL;
+ if (ecb->base.cra_blocksize != HEH_BLOCK_SIZE)
+ goto err_drop_ecb;
+
+ /* The underlying "ECB" algorithm must not require an IV */
+ err = -EINVAL;
+ if (crypto_skcipher_alg_ivsize(ecb) != 0)
+ goto err_drop_ecb;
+
+ /* Set the instance names */
+
+ err = -ENAMETOOLONG;
+ if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
+ "heh_base(%s,%s)", cmac->base.cra_driver_name,
+ ecb->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
+ goto err_drop_ecb;
+
+ strcpy(inst->alg.base.cra_name, full_name); /* guaranteed to fit */
+
+ /* Finish initializing the instance */
+
+ inst->alg.base.cra_flags = (cmac->base.cra_flags |
+ ecb->base.cra_flags) & CRYPTO_ALG_ASYNC;
+ inst->alg.base.cra_blocksize = HEH_BLOCK_SIZE;
+ inst->alg.base.cra_ctxsize = sizeof(struct heh_tfm_ctx);
+ inst->alg.base.cra_alignmask = ecb->base.cra_alignmask |
+ (__alignof__(be128) - 1);
+ inst->alg.base.cra_priority = ecb->base.cra_priority;
+
+ inst->alg.ivsize = HEH_BLOCK_SIZE;
+ inst->alg.min_keysize = HEH_KEY_SIZE;
+ inst->alg.max_keysize = HEH_KEY_SIZE;
+
+ inst->alg.init = heh_init_tfm;
+ inst->alg.exit = heh_exit_tfm;
+ inst->alg.setkey = heh_setkey;
+ inst->alg.encrypt = heh_encrypt;
+ inst->alg.decrypt = heh_decrypt;
+ inst->free = heh_free_instance;
+
+ /* Register the instance */
+ err = skcipher_register_instance(tmpl, inst);
+ if (err)
+ goto err_drop_ecb;
+ return 0;
+
+err_drop_ecb:
+ crypto_drop_skcipher(&ctx->ecb);
+err_drop_cmac:
+ crypto_drop_shash(&ctx->cmac);
+err_free_inst:
+ kfree(inst);
+ return err;
+}
+
+static int heh_create(struct crypto_template *tmpl, struct rtattr **tb)
+{
+ const char *cipher_name;
+ char full_name[CRYPTO_MAX_ALG_NAME];
+ char cmac_name[CRYPTO_MAX_ALG_NAME];
+ char ecb_name[CRYPTO_MAX_ALG_NAME];
+
+ /* Get the name of the requested block cipher (e.g. aes) */
+ cipher_name = crypto_attr_alg_name(tb[1]);
+ if (IS_ERR(cipher_name))
+ return PTR_ERR(cipher_name);
+
+ if (snprintf(full_name, CRYPTO_MAX_ALG_NAME, "heh(%s)", cipher_name) >=
+ CRYPTO_MAX_ALG_NAME)
+ return -ENAMETOOLONG;
+
+ if (snprintf(cmac_name, CRYPTO_MAX_ALG_NAME, "cmac(%s)", cipher_name) >=
+ CRYPTO_MAX_ALG_NAME)
+ return -ENAMETOOLONG;
+
+ if (snprintf(ecb_name, CRYPTO_MAX_ALG_NAME, "ecb(%s)", cipher_name) >=
+ CRYPTO_MAX_ALG_NAME)
+ return -ENAMETOOLONG;
+
+ return heh_create_common(tmpl, tb, full_name, cmac_name, ecb_name);
+}
+
+static struct crypto_template heh_tmpl = {
+ .name = "heh",
+ .create = heh_create,
+ .module = THIS_MODULE,
+};
+
+static int heh_base_create(struct crypto_template *tmpl, struct rtattr **tb)
+{
+ char full_name[CRYPTO_MAX_ALG_NAME];
+ const char *cmac_name;
+ const char *ecb_name;
+
+ cmac_name = crypto_attr_alg_name(tb[1]);
+ if (IS_ERR(cmac_name))
+ return PTR_ERR(cmac_name);
+
+ ecb_name = crypto_attr_alg_name(tb[2]);
+ if (IS_ERR(ecb_name))
+ return PTR_ERR(ecb_name);
+
+ if (snprintf(full_name, CRYPTO_MAX_ALG_NAME, "heh_base(%s,%s)",
+ cmac_name, ecb_name) >= CRYPTO_MAX_ALG_NAME)
+ return -ENAMETOOLONG;
+
+ return heh_create_common(tmpl, tb, full_name, cmac_name, ecb_name);
+}
+
+/*
+ * If HEH is instantiated as "heh_base" instead of "heh", then specific
+ * implementations of cmac and ecb can be specified instead of just the cipher
+ */
+static struct crypto_template heh_base_tmpl = {
+ .name = "heh_base",
+ .create = heh_base_create,
+ .module = THIS_MODULE,
+};
+
+static int __init heh_module_init(void)
+{
+ int err;
+
+ err = crypto_register_template(&heh_tmpl);
+ if (err)
+ return err;
+
+ err = crypto_register_template(&heh_base_tmpl);
+ if (err)
+ goto out_undo_heh;
+
+ return 0;
+
+out_undo_heh:
+ crypto_unregister_template(&heh_tmpl);
+ return err;
+}
+
+static void __exit heh_module_exit(void)
+{
+ crypto_unregister_template(&heh_tmpl);
+ crypto_unregister_template(&heh_base_tmpl);
+}
+
+module_init(heh_module_init);
+module_exit(heh_module_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Hash-Encrypt-Hash block cipher mode");
+MODULE_ALIAS_CRYPTO("heh");
+MODULE_ALIAS_CRYPTO("heh_base");
--
2.8.0.rc3.226.g39d4020
* [RFC][PATCH 6/7] crypto: testmgr - Add test vectors for HEH
2016-11-14 21:01 [RFC][PATCH 0/7] crypto: Adding Hash-Encrypt-Hash (HEH) Alex Cope
` (4 preceding siblings ...)
2016-11-14 21:01 ` [RFC][PATCH 5/7] crypto: heh - Add Hash Encrypt Hash (HEH) algorithm Alex Cope
@ 2016-11-14 21:01 ` Alex Cope
5 siblings, 0 replies; 7+ messages in thread
From: Alex Cope @ 2016-11-14 21:01 UTC (permalink / raw)
To: linux-crypto; +Cc: mhalcrow, edknapp, Alex Cope, Eric Biggers
Add test vectors from
https://tools.ietf.org/html/draft-cope-heh-00
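The vectors are run by the generic skcipher tester. For a quick
manual check, simply instantiating the algorithm should trigger the
self-test (a sketch, error handling elided):

	struct crypto_skcipher *tfm;

	tfm = crypto_alloc_skcipher("heh(aes)", 0, 0);
	/* tfm is an error pointer if "heh(aes)" failed its self-test */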
Signed-off-by: Alex Cope <alexcope@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
crypto/testmgr.c | 15 ++++
crypto/testmgr.h | 226 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 241 insertions(+)
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index ded50b6..bab027b 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -3481,6 +3481,21 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}
}, {
+ .alg = "heh(aes)",
+ .test = alg_test_skcipher,
+ .suite = {
+ .cipher = {
+ .enc = {
+ .vecs = aes_heh_enc_tv_template,
+ .count = AES_HEH_ENC_TEST_VECTORS
+ },
+ .dec = {
+ .vecs = aes_heh_dec_tv_template,
+ .count = AES_HEH_DEC_TEST_VECTORS
+ }
+ }
+ }
+ }, {
.alg = "hmac(crc32)",
.test = alg_test_hash,
.suite = {
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index e64a4ef..b2daad3 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -15172,6 +15172,8 @@ static struct cipher_testvec cast6_xts_dec_tv_template[] = {
#define AES_DEC_TEST_VECTORS 4
#define AES_CBC_ENC_TEST_VECTORS 5
#define AES_CBC_DEC_TEST_VECTORS 5
+#define AES_HEH_ENC_TEST_VECTORS 4
+#define AES_HEH_DEC_TEST_VECTORS 4
#define HMAC_MD5_ECB_CIPHER_NULL_ENC_TEST_VECTORS 2
#define HMAC_MD5_ECB_CIPHER_NULL_DEC_TEST_VECTORS 2
#define HMAC_SHA1_ECB_CIPHER_NULL_ENC_TEST_VEC 2
@@ -15544,6 +15546,230 @@ static struct cipher_testvec aes_dec_tv_template[] = {
},
};
+static struct cipher_testvec aes_heh_enc_tv_template[] = {
+ {
+ .key = "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F"
+ "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F"
+ "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F",
+ .klen = 48,
+ .iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .input = "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F",
+ .ilen = 16,
+ .result = "\x61\x76\x38\xa5\x12\x0b\x6d\x89"
+ "\x92\x68\x30\x7e\x0d\x6e\x81\xe3",
+ .rlen = 16,
+ .also_non_np = 1,
+ .np = 2,
+ .tap = { 8, 8 },
+ }, {
+ .key = "\x68\xf8\x27\x87\xdc\x30\x33\xfd"
+ "\x65\x5b\x8e\x51\x2e\x02\xff\x9d"
+ "\x21\x28\x1e\x64\xcd\x9c\x33\x88"
+ "\xf6\x2c\x43\x8f\xf5\x6f\xf5\x8f"
+ "\xa8\xda\x24\x9b\x5e\xfa\x13\xc2"
+ "\xc1\x94\xbf\x32\xba\x38\xa3\x77",
+ .klen = 48,
+ .iv = "\x4d\x47\x61\x37\x2b\x47\x86\xf0"
+ "\xd6\x47\xb5\xc2\xe8\xcf\x85\x27",
+ .input = "\xb8\xee\x29\xe4\xa5\xd1\xe7\x55"
+ "\xd0\xfd\xe7\x22\x63\x76\x36\xe2"
+ "\xf8\x0c\xf8\xfe\x65\x76\xe7\xca"
+ "\xc1\x42\xf5\xca\x5a\xa8\xac\x2a",
+ .ilen = 32,
+ .result = "\x1f\x4c\x6a\x1e\x1d\x20\x0d\x99"
+ "\xdf\xbb\x13\xd8\x35\xdc\x1d\xbe"
+ "\xed\x50\x0a\x9f\xfd\xd6\x94\x85"
+ "\xd0\x8b\xf7\xb4\x49\x7f\x70\x6d",
+ .rlen = 32,
+ .also_non_np = 1,
+ .np = 3,
+ .tap = { 16, 13, 3 },
+ }, {
+ .key = "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F"
+ "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F"
+ "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F",
+ .klen = 48,
+ .iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .input = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00",
+ .ilen = 63,
+ .result = "\xfb\x30\x90\x47\xc5\x4e\xcc\xfd"
+ "\xc4\x90\xa2\x9f\x7c\x03\x63\xc3"
+ "\xcb\xaf\x2e\xee\x62\x18\xeb\x20"
+ "\x62\x97\xe4\x9b\xf2\x8b\xf3\x3f"
+ "\x76\x3b\xaa\xab\xf0\x19\x54\xdb"
+ "\xb4\xaf\x2e\xd9\xa7\xe0\x92\x04"
+ "\x5a\xe4\x81\xfc\x58\xf2\xda\xbf"
+ "\x5d\xc9\xb1\x47\xd5\x08\xb1",
+ .rlen = 63,
+ .also_non_np = 1,
+ .np = 8,
+ .tap = { 20, 20, 10, 8, 2, 1, 1, 1 },
+ }, {
+ .key = "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F"
+ "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F"
+ "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F",
+ .klen = 48,
+ .iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .input = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x01"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00",
+ .ilen = 63,
+ .result = "\x9c\xdf\xa5\x50\x83\xe0\xa3\xb5"
+ "\x0d\x35\x83\x34\x6e\x6e\x40\xd6"
+ "\x0f\x81\xc8\x1a\x9c\x40\x81\xfb"
+ "\xb3\x6e\xb4\xbf\xfc\xca\xc9\x50"
+ "\xcd\x33\xfd\xb3\x43\x11\xe6\x32"
+ "\x02\x3d\x3e\xc6\x49\x6e\xcf\x58"
+ "\x3e\x14\x15\x6d\x39\x2a\x58\x99"
+ "\x83\xaf\xdd\x22\x3e\x7f\x6c",
+ .rlen = 63,
+ .also_non_np = 1,
+ .np = 8,
+ .tap = { 20, 20, 10, 8, 2, 1, 1, 1 },
+ }
+};
+
+static struct cipher_testvec aes_heh_dec_tv_template[] = {
+ {
+ .key = "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F"
+ "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F"
+ "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F",
+ .klen = 48,
+ .iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .input = "\x61\x76\x38\xa5\x12\x0b\x6d\x89"
+ "\x92\x68\x30\x7e\x0d\x6e\x81\xe3",
+ .ilen = 16,
+ .result = "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F",
+ .rlen = 16,
+ .also_non_np = 1,
+ .np = 2,
+ .tap = { 8, 8 },
+ }, {
+ .key = "\x68\xf8\x27\x87\xdc\x30\x33\xfd"
+ "\x65\x5b\x8e\x51\x2e\x02\xff\x9d"
+ "\x21\x28\x1e\x64\xcd\x9c\x33\x88"
+ "\xf6\x2c\x43\x8f\xf5\x6f\xf5\x8f"
+ "\xa8\xda\x24\x9b\x5e\xfa\x13\xc2"
+ "\xc1\x94\xbf\x32\xba\x38\xa3\x77",
+ .klen = 48,
+ .iv = "\x4d\x47\x61\x37\x2b\x47\x86\xf0"
+ "\xd6\x47\xb5\xc2\xe8\xcf\x85\x27",
+ .input = "\x1f\x4c\x6a\x1e\x1d\x20\x0d\x99"
+ "\xdf\xbb\x13\xd8\x35\xdc\x1d\xbe"
+ "\xed\x50\x0a\x9f\xfd\xd6\x94\x85"
+ "\xd0\x8b\xf7\xb4\x49\x7f\x70\x6d",
+ .ilen = 32,
+ .result = "\xb8\xee\x29\xe4\xa5\xd1\xe7\x55"
+ "\xd0\xfd\xe7\x22\x63\x76\x36\xe2"
+ "\xf8\x0c\xf8\xfe\x65\x76\xe7\xca"
+ "\xc1\x42\xf5\xca\x5a\xa8\xac\x2a",
+ .rlen = 32,
+ .also_non_np = 1,
+ .np = 3,
+ .tap = { 16, 13, 3 },
+ }, {
+ .key = "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F"
+ "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F"
+ "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F",
+ .klen = 48,
+ .iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .input = "\xfb\x30\x90\x47\xc5\x4e\xcc\xfd"
+ "\xc4\x90\xa2\x9f\x7c\x03\x63\xc3"
+ "\xcb\xaf\x2e\xee\x62\x18\xeb\x20"
+ "\x62\x97\xe4\x9b\xf2\x8b\xf3\x3f"
+ "\x76\x3b\xaa\xab\xf0\x19\x54\xdb"
+ "\xb4\xaf\x2e\xd9\xa7\xe0\x92\x04"
+ "\x5a\xe4\x81\xfc\x58\xf2\xda\xbf"
+ "\x5d\xc9\xb1\x47\xd5\x08\xb1",
+ .ilen = 63,
+ .result = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00",
+ .rlen = 63,
+ .also_non_np = 1,
+ .np = 8,
+ .tap = { 20, 20, 10, 8, 2, 1, 1, 1 },
+ }, {
+ .key = "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F"
+ "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F"
+ "\x00\x01\x02\x03\x04\x05\x06\x07"
+ "\x08\x09\x0A\x0B\x0C\x0D\x0E\x0F",
+ .klen = 48,
+ .iv = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00",
+ .input = "\x9c\xdf\xa5\x50\x83\xe0\xa3\xb5"
+ "\x0d\x35\x83\x34\x6e\x6e\x40\xd6"
+ "\x0f\x81\xc8\x1a\x9c\x40\x81\xfb"
+ "\xb3\x6e\xb4\xbf\xfc\xca\xc9\x50"
+ "\xcd\x33\xfd\xb3\x43\x11\xe6\x32"
+ "\x02\x3d\x3e\xc6\x49\x6e\xcf\x58"
+ "\x3e\x14\x15\x6d\x39\x2a\x58\x99"
+ "\x83\xaf\xdd\x22\x3e\x7f\x6c",
+ .ilen = 63,
+ .result = "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00\x01"
+ "\x00\x00\x00\x00\x00\x00\x00\x00"
+ "\x00\x00\x00\x00\x00\x00\x00",
+ .rlen = 63,
+ .also_non_np = 1,
+ .np = 8,
+ .tap = { 20, 20, 10, 8, 2, 1, 1, 1 },
+ }
+};
+
static struct cipher_testvec aes_cbc_enc_tv_template[] = {
{ /* From RFC 3602 */
.key = "\x06\xa9\x21\x40\x36\xb8\xa1\x5b"
--
2.8.0.rc3.226.g39d4020
Thread overview: 7+ messages (newest: 2016-11-14 21:01 UTC)
2016-11-14 21:01 [RFC][PATCH 0/7] crypto: Adding Hash-Encrypt-Hash (HEH) Alex Cope
2016-11-14 21:01 ` [RFC][PATCH 1/7] crypto: skcipher adding skcipher_walk_virt_init Alex Cope
2016-11-14 21:01 ` [RFC][PATCH 2/7] crypto: gf128mul - Refactor gf128 overflow macros Alex Cope
2016-11-14 21:01 ` [RFC][PATCH 3/7] crypto: gf128mul - Add ble multiplication functions Alex Cope
2016-11-14 21:01 ` [RFC][PATCH 4/7] crypto: shash - Add crypto_grab_shash() and crypto_spawn_shash_alg() Alex Cope
2016-11-14 21:01 ` [RFC][PATCH 5/7] crypto: heh - Add Hash Encrypt Hash (HEH) algorithm Alex Cope
2016-11-14 21:01 ` [RFC][PATCH 6/7] crypto: testmgr - Add test vectors for HEH Alex Cope