linux-crypto.vger.kernel.org archive mirror
* [PATCH 0/5] crypto: chacha20 - Alignment fixes
From: Eric Biggers @ 2017-11-22 19:51 UTC
  To: linux-crypto, Herbert Xu
  Cc: Theodore Ts'o, Jason A. Donenfeld, Martin Willi,
	Ard Biesheuvel, linux-kernel, Eric Biggers

From: Eric Biggers <ebiggers@google.com>

This series fixes potentially unaligned memory accesses when loading the
initial state, key, and IV for ChaCha20, and when outputting each
keystream block.

It also removes the cra_alignmask from the generic and x86 ChaCha20
implementations, now that it is no longer needed.
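
To illustrate the underlying hazard, here is a minimal userspace
sketch (illustrative only, not kernel code) of the unsafe load these
patches remove and the safe pattern they introduce:

#include <stdint.h>

/* Unsafe: undefined behaviour when p is not 4-byte aligned, and a
 * trapping access on some architectures (e.g. older ARM cores).
 */
uint32_t load_le32_unsafe(const uint8_t *p)
{
	return *(const uint32_t *)p;
}

/* Safe: byte-wise assembly is well-defined at any alignment and is
 * little-endian regardless of host byte order.
 */
uint32_t load_le32_safe(const uint8_t *p)
{
	return (uint32_t)p[0] | (uint32_t)p[1] << 8 |
	       (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
}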

Eric Biggers (5):
  crypto: chacha20 - Fix unaligned access when loading constants
  crypto: chacha20 - Use unaligned access macros when loading key and IV
  crypto: chacha20 - Remove cra_alignmask
  crypto: x86/chacha20 - Remove cra_alignmask
  crypto: chacha20 - Fix keystream alignment for chacha20_block()

 arch/x86/crypto/chacha20_glue.c |  1 -
 crypto/chacha20_generic.c       | 33 +++++++++++++--------------------
 drivers/char/random.c           | 24 ++++++++++++------------
 include/crypto/chacha20.h       |  3 ++-
 lib/chacha20.c                  |  2 +-
 5 files changed, 28 insertions(+), 35 deletions(-)

-- 
2.15.0.448.gf294e3d99a-goog


* [PATCH 1/5] crypto: chacha20 - Fix unaligned access when loading constants
From: Eric Biggers @ 2017-11-22 19:51 UTC
  To: linux-crypto, Herbert Xu
  Cc: Theodore Ts'o, Jason A. Donenfeld, Martin Willi,
	Ard Biesheuvel, linux-kernel, Eric Biggers

From: Eric Biggers <ebiggers@google.com>

The four 32-bit constants for the initial state of ChaCha20 were loaded
from a char array which is not guaranteed to have the needed alignment.

Fix it by just assigning the constants directly instead.
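
For reference, each constant is just the little-endian reading of four
bytes of the string "expand 32-byte k".  A minimal userspace check
(illustrative only, not part of the patch):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	const unsigned char c[16] = "expand 32-byte k";
	int i;

	/* Reassemble each 32-bit word in little-endian order; this
	 * prints 0x61707865, 0x3320646e, 0x79622d32, 0x6b206574. */
	for (i = 0; i < 4; i++) {
		uint32_t w = (uint32_t)c[4 * i] |
			     (uint32_t)c[4 * i + 1] << 8 |
			     (uint32_t)c[4 * i + 2] << 16 |
			     (uint32_t)c[4 * i + 3] << 24;
		printf("0x%08x\n", (unsigned int)w);
	}
	return 0;
}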

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/chacha20_generic.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/crypto/chacha20_generic.c b/crypto/chacha20_generic.c
index 4a45fa4890c0..ec84e7837aac 100644
--- a/crypto/chacha20_generic.c
+++ b/crypto/chacha20_generic.c
@@ -41,12 +41,10 @@ static void chacha20_docrypt(u32 *state, u8 *dst, const u8 *src,
 
 void crypto_chacha20_init(u32 *state, struct chacha20_ctx *ctx, u8 *iv)
 {
-	static const char constant[16] = "expand 32-byte k";
-
-	state[0]  = le32_to_cpuvp(constant +  0);
-	state[1]  = le32_to_cpuvp(constant +  4);
-	state[2]  = le32_to_cpuvp(constant +  8);
-	state[3]  = le32_to_cpuvp(constant + 12);
+	state[0]  = 0x61707865; /* "expa" */
+	state[1]  = 0x3320646e; /* "nd 3" */
+	state[2]  = 0x79622d32; /* "2-by" */
+	state[3]  = 0x6b206574; /* "te k" */
 	state[4]  = ctx->key[0];
 	state[5]  = ctx->key[1];
 	state[6]  = ctx->key[2];
-- 
2.15.0.448.gf294e3d99a-goog


* [PATCH 2/5] crypto: chacha20 - Use unaligned access macros when loading key and IV
From: Eric Biggers @ 2017-11-22 19:51 UTC
  To: linux-crypto, Herbert Xu
  Cc: Theodore Ts'o, Jason A. Donenfeld, Martin Willi,
	Ard Biesheuvel, linux-kernel, Eric Biggers

From: Eric Biggers <ebiggers@google.com>

The generic ChaCha20 implementation has a cra_alignmask of 3, which
ensures that the key passed into crypto_chacha20_setkey() and the IV
passed into crypto_chacha20_init() are 4-byte aligned.  However, these
functions are also called from the ARM and ARM64 implementations of
ChaCha20, which intentionally do not have a cra_alignmask set.  This is
broken because 32-bit words are being loaded from potentially-unaligned
buffers without the unaligned access macros.

Fix it by using the unaligned access macros when loading the key and IV.
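
For readers unfamiliar with the macro, a rough userspace stand-in for
get_unaligned_le32() looks like this (a sketch that assumes a
little-endian host; the real kernel macro also byte-swaps on
big-endian systems):

#include <stdint.h>
#include <string.h>

/* memcpy() lets the compiler pick a load sequence that is legal at
 * any alignment: a single 32-bit load on x86, byte loads on
 * strict-alignment targets.
 */
static inline uint32_t get_unaligned_le32_sketch(const void *p)
{
	uint32_t v;

	memcpy(&v, p, sizeof(v));
	return v;
}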

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/chacha20_generic.c | 16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/crypto/chacha20_generic.c b/crypto/chacha20_generic.c
index ec84e7837aac..b5a10ebf1b82 100644
--- a/crypto/chacha20_generic.c
+++ b/crypto/chacha20_generic.c
@@ -9,16 +9,12 @@
  * (at your option) any later version.
  */
 
+#include <asm/unaligned.h>
 #include <crypto/algapi.h>
 #include <crypto/chacha20.h>
 #include <crypto/internal/skcipher.h>
 #include <linux/module.h>
 
-static inline u32 le32_to_cpuvp(const void *p)
-{
-	return le32_to_cpup(p);
-}
-
 static void chacha20_docrypt(u32 *state, u8 *dst, const u8 *src,
 			     unsigned int bytes)
 {
@@ -53,10 +49,10 @@ void crypto_chacha20_init(u32 *state, struct chacha20_ctx *ctx, u8 *iv)
 	state[9]  = ctx->key[5];
 	state[10] = ctx->key[6];
 	state[11] = ctx->key[7];
-	state[12] = le32_to_cpuvp(iv +  0);
-	state[13] = le32_to_cpuvp(iv +  4);
-	state[14] = le32_to_cpuvp(iv +  8);
-	state[15] = le32_to_cpuvp(iv + 12);
+	state[12] = get_unaligned_le32(iv +  0);
+	state[13] = get_unaligned_le32(iv +  4);
+	state[14] = get_unaligned_le32(iv +  8);
+	state[15] = get_unaligned_le32(iv + 12);
 }
 EXPORT_SYMBOL_GPL(crypto_chacha20_init);
 
@@ -70,7 +66,7 @@ int crypto_chacha20_setkey(struct crypto_skcipher *tfm, const u8 *key,
 		return -EINVAL;
 
 	for (i = 0; i < ARRAY_SIZE(ctx->key); i++)
-		ctx->key[i] = le32_to_cpuvp(key + i * sizeof(u32));
+		ctx->key[i] = get_unaligned_le32(key + i * sizeof(u32));
 
 	return 0;
 }
-- 
2.15.0.448.gf294e3d99a-goog


* [PATCH 3/5] crypto: chacha20 - Remove cra_alignmask
From: Eric Biggers @ 2017-11-22 19:51 UTC
  To: linux-crypto, Herbert Xu
  Cc: Theodore Ts'o, Jason A. Donenfeld, Martin Willi,
	Ard Biesheuvel, linux-kernel, Eric Biggers

From: Eric Biggers <ebiggers@google.com>

Now that crypto_chacha20_setkey() and crypto_chacha20_init() use the
unaligned access macros and crypto_xor() also accepts unaligned buffers,
there is no need to have a cra_alignmask set for chacha20-generic.
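
For context, the strategy that lets an XOR helper tolerate unaligned
buffers looks roughly like this (a hedged sketch of the idea, not the
exact crypto_xor() source):

static void xor_sketch(u8 *dst, const u8 *src, unsigned int len)
{
	/* Take the fast word-at-a-time path only when both pointers
	 * are suitably aligned; otherwise fall back to byte ops. */
	if ((((unsigned long)dst | (unsigned long)src) &
	     (sizeof(u32) - 1)) == 0) {
		u32 *d = (u32 *)dst;
		const u32 *s = (const u32 *)src;

		for (; len >= sizeof(u32); len -= sizeof(u32))
			*d++ ^= *s++;
		dst = (u8 *)d;
		src = (const u8 *)s;
	}
	while (len--)
		*dst++ ^= *src++;
}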

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/chacha20_generic.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/crypto/chacha20_generic.c b/crypto/chacha20_generic.c
index b5a10ebf1b82..bb4affbd591c 100644
--- a/crypto/chacha20_generic.c
+++ b/crypto/chacha20_generic.c
@@ -105,7 +105,6 @@ static struct skcipher_alg alg = {
 	.base.cra_priority	= 100,
 	.base.cra_blocksize	= 1,
 	.base.cra_ctxsize	= sizeof(struct chacha20_ctx),
-	.base.cra_alignmask	= sizeof(u32) - 1,
 	.base.cra_module	= THIS_MODULE,
 
 	.min_keysize		= CHACHA20_KEY_SIZE,
-- 
2.15.0.448.gf294e3d99a-goog


* [PATCH 4/5] crypto: x86/chacha20 - Remove cra_alignmask
From: Eric Biggers @ 2017-11-22 19:51 UTC
  To: linux-crypto, Herbert Xu
  Cc: Theodore Ts'o, Jason A. Donenfeld, Martin Willi,
	Ard Biesheuvel, linux-kernel, Eric Biggers

From: Eric Biggers <ebiggers@google.com>

Now that the generic ChaCha20 implementation no longer needs a
cra_alignmask, the x86 one doesn't either -- given that the x86
implementation doesn't need the alignment itself.

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 arch/x86/crypto/chacha20_glue.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/x86/crypto/chacha20_glue.c b/arch/x86/crypto/chacha20_glue.c
index 1e6af1b35f7b..dce7c5d39c2f 100644
--- a/arch/x86/crypto/chacha20_glue.c
+++ b/arch/x86/crypto/chacha20_glue.c
@@ -107,7 +107,6 @@ static struct skcipher_alg alg = {
 	.base.cra_priority	= 300,
 	.base.cra_blocksize	= 1,
 	.base.cra_ctxsize	= sizeof(struct chacha20_ctx),
-	.base.cra_alignmask	= sizeof(u32) - 1,
 	.base.cra_module	= THIS_MODULE,
 
 	.min_keysize		= CHACHA20_KEY_SIZE,
-- 
2.15.0.448.gf294e3d99a-goog


* [PATCH 5/5] crypto: chacha20 - Fix keystream alignment for chacha20_block()
From: Eric Biggers @ 2017-11-22 19:51 UTC
  To: linux-crypto, Herbert Xu
  Cc: Theodore Ts'o, Jason A. Donenfeld, Martin Willi,
	Ard Biesheuvel, linux-kernel, Eric Biggers

From: Eric Biggers <ebiggers@google.com>

When chacha20_block() outputs the keystream block, it uses 'u32' stores
directly.  However, the callers (crypto/chacha20_generic.c and
drivers/char/random.c) declare the keystream buffer as a 'u8' array,
which is not guaranteed to have the needed alignment.

Fix it by having both callers declare the keystream as a 'u32' array.
For now this is preferable to switching over to the unaligned access
macros because chacha20_block() is only being used in cases where we can
easily control the alignment (stack buffers).
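
The alignment difference between the two declarations, in a minimal
userspace illustration (hypothetical, not part of the patch):

#include <stdalign.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* A u8 stack array only guarantees 1-byte alignment, so u32
	 * stores into it are not well-defined; a u32 array carries
	 * the alignment those stores need by construction. */
	printf("u8 element alignment:  %zu\n", alignof(uint8_t));
	printf("u32 element alignment: %zu\n", alignof(uint32_t));
	return 0;
}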

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/chacha20_generic.c |  6 +++---
 drivers/char/random.c     | 24 ++++++++++++------------
 include/crypto/chacha20.h |  3 ++-
 lib/chacha20.c            |  2 +-
 4 files changed, 18 insertions(+), 17 deletions(-)

diff --git a/crypto/chacha20_generic.c b/crypto/chacha20_generic.c
index bb4affbd591c..e451c3cb6a56 100644
--- a/crypto/chacha20_generic.c
+++ b/crypto/chacha20_generic.c
@@ -18,20 +18,20 @@
 static void chacha20_docrypt(u32 *state, u8 *dst, const u8 *src,
 			     unsigned int bytes)
 {
-	u8 stream[CHACHA20_BLOCK_SIZE];
+	u32 stream[CHACHA20_BLOCK_WORDS];
 
 	if (dst != src)
 		memcpy(dst, src, bytes);
 
 	while (bytes >= CHACHA20_BLOCK_SIZE) {
 		chacha20_block(state, stream);
-		crypto_xor(dst, stream, CHACHA20_BLOCK_SIZE);
+		crypto_xor(dst, (const u8 *)stream, CHACHA20_BLOCK_SIZE);
 		bytes -= CHACHA20_BLOCK_SIZE;
 		dst += CHACHA20_BLOCK_SIZE;
 	}
 	if (bytes) {
 		chacha20_block(state, stream);
-		crypto_xor(dst, stream, bytes);
+		crypto_xor(dst, (const u8 *)stream, bytes);
 	}
 }
 
diff --git a/drivers/char/random.c b/drivers/char/random.c
index ec42c8bb9b0d..11304bbc78cc 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -431,9 +431,9 @@ static int crng_init = 0;
 static int crng_init_cnt = 0;
 #define CRNG_INIT_CNT_THRESH (2*CHACHA20_KEY_SIZE)
 static void _extract_crng(struct crng_state *crng,
-			  __u8 out[CHACHA20_BLOCK_SIZE]);
+			  __u32 out[CHACHA20_BLOCK_WORDS]);
 static void _crng_backtrack_protect(struct crng_state *crng,
-				    __u8 tmp[CHACHA20_BLOCK_SIZE], int used);
+				    __u32 tmp[CHACHA20_BLOCK_WORDS], int used);
 static void process_random_ready_list(void);
 static void _get_random_bytes(void *buf, int nbytes);
 
@@ -817,7 +817,7 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
 	unsigned long	flags;
 	int		i, num;
 	union {
-		__u8	block[CHACHA20_BLOCK_SIZE];
+		__u32	block[CHACHA20_BLOCK_WORDS];
 		__u32	key[8];
 	} buf;
 
@@ -851,7 +851,7 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
 }
 
 static void _extract_crng(struct crng_state *crng,
-			  __u8 out[CHACHA20_BLOCK_SIZE])
+			  __u32 out[CHACHA20_BLOCK_WORDS])
 {
 	unsigned long v, flags;
 
@@ -867,7 +867,7 @@ static void _extract_crng(struct crng_state *crng,
 	spin_unlock_irqrestore(&crng->lock, flags);
 }
 
-static void extract_crng(__u8 out[CHACHA20_BLOCK_SIZE])
+static void extract_crng(__u32 out[CHACHA20_BLOCK_WORDS])
 {
 	struct crng_state *crng = NULL;
 
@@ -885,7 +885,7 @@ static void extract_crng(__u8 out[CHACHA20_BLOCK_SIZE])
  * enough) to mutate the CRNG key to provide backtracking protection.
  */
 static void _crng_backtrack_protect(struct crng_state *crng,
-				    __u8 tmp[CHACHA20_BLOCK_SIZE], int used)
+				    __u32 tmp[CHACHA20_BLOCK_WORDS], int used)
 {
 	unsigned long	flags;
 	__u32		*s, *d;
@@ -897,14 +897,14 @@ static void _crng_backtrack_protect(struct crng_state *crng,
 		used = 0;
 	}
 	spin_lock_irqsave(&crng->lock, flags);
-	s = (__u32 *) &tmp[used];
+	s = &tmp[used / sizeof(__u32)];
 	d = &crng->state[4];
 	for (i=0; i < 8; i++)
 		*d++ ^= *s++;
 	spin_unlock_irqrestore(&crng->lock, flags);
 }
 
-static void crng_backtrack_protect(__u8 tmp[CHACHA20_BLOCK_SIZE], int used)
+static void crng_backtrack_protect(__u32 tmp[CHACHA20_BLOCK_WORDS], int used)
 {
 	struct crng_state *crng = NULL;
 
@@ -920,7 +920,7 @@ static void crng_backtrack_protect(__u8 tmp[CHACHA20_BLOCK_SIZE], int used)
 static ssize_t extract_crng_user(void __user *buf, size_t nbytes)
 {
 	ssize_t ret = 0, i = CHACHA20_BLOCK_SIZE;
-	__u8 tmp[CHACHA20_BLOCK_SIZE];
+	__u32 tmp[CHACHA20_BLOCK_WORDS];
 	int large_request = (nbytes > 256);
 
 	while (nbytes) {
@@ -1507,7 +1507,7 @@ static void _warn_unseeded_randomness(const char *func_name, void *caller,
  */
 static void _get_random_bytes(void *buf, int nbytes)
 {
-	__u8 tmp[CHACHA20_BLOCK_SIZE];
+	__u32 tmp[CHACHA20_BLOCK_WORDS];
 
 	trace_get_random_bytes(nbytes, _RET_IP_);
 
@@ -2114,7 +2114,7 @@ u64 get_random_u64(void)
 	if (use_lock)
 		read_lock_irqsave(&batched_entropy_reset_lock, flags);
 	if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0) {
-		extract_crng((u8 *)batch->entropy_u64);
+		extract_crng((__u32 *)batch->entropy_u64);
 		batch->position = 0;
 	}
 	ret = batch->entropy_u64[batch->position++];
@@ -2144,7 +2144,7 @@ u32 get_random_u32(void)
 	if (use_lock)
 		read_lock_irqsave(&batched_entropy_reset_lock, flags);
 	if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0) {
-		extract_crng((u8 *)batch->entropy_u32);
+		extract_crng(batch->entropy_u32);
 		batch->position = 0;
 	}
 	ret = batch->entropy_u32[batch->position++];
diff --git a/include/crypto/chacha20.h b/include/crypto/chacha20.h
index caaa470389e0..b83d66073db0 100644
--- a/include/crypto/chacha20.h
+++ b/include/crypto/chacha20.h
@@ -13,12 +13,13 @@
 #define CHACHA20_IV_SIZE	16
 #define CHACHA20_KEY_SIZE	32
 #define CHACHA20_BLOCK_SIZE	64
+#define CHACHA20_BLOCK_WORDS	(CHACHA20_BLOCK_SIZE / sizeof(u32))
 
 struct chacha20_ctx {
 	u32 key[8];
 };
 
-void chacha20_block(u32 *state, void *stream);
+void chacha20_block(u32 *state, u32 *stream);
 void crypto_chacha20_init(u32 *state, struct chacha20_ctx *ctx, u8 *iv);
 int crypto_chacha20_setkey(struct crypto_skcipher *tfm, const u8 *key,
 			   unsigned int keysize);
diff --git a/lib/chacha20.c b/lib/chacha20.c
index 250ceed9ec9a..29d3801dee24 100644
--- a/lib/chacha20.c
+++ b/lib/chacha20.c
@@ -21,7 +21,7 @@ static inline u32 rotl32(u32 v, u8 n)
 	return (v << n) | (v >> (sizeof(v) * 8 - n));
 }
 
-extern void chacha20_block(u32 *state, void *stream)
+void chacha20_block(u32 *state, u32 *stream)
 {
 	u32 x[16], *out = stream;
 	int i;
-- 
2.15.0.448.gf294e3d99a-goog


* Re: [PATCH 1/5] crypto: chacha20 - Fix unaligned access when loading constants
From: Ard Biesheuvel @ 2017-11-22 20:26 UTC
  To: Eric Biggers
  Cc: linux-crypto@vger.kernel.org, Herbert Xu, Theodore Ts'o,
	Jason A. Donenfeld, Martin Willi, linux-kernel@vger.kernel.org,
	Eric Biggers

On 22 November 2017 at 19:51, Eric Biggers <ebiggers3@gmail.com> wrote:
> From: Eric Biggers <ebiggers@google.com>
>
> The four 32-bit constants for the initial state of ChaCha20 were loaded
> from a char array which is not guaranteed to have the needed alignment.
>
> Fix it by just assigning the constants directly instead.
>
> Signed-off-by: Eric Biggers <ebiggers@google.com>

I'm not thrilled about the open-coded hex numbers but I don't care
enough to object.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>



* Re: [PATCH 2/5] crypto: chacha20 - Use unaligned access macros when loading key and IV
From: Ard Biesheuvel @ 2017-11-22 20:27 UTC
  To: Eric Biggers
  Cc: linux-crypto@vger.kernel.org, Herbert Xu, Theodore Ts'o,
	Jason A. Donenfeld, Martin Willi, linux-kernel@vger.kernel.org,
	Eric Biggers

On 22 November 2017 at 19:51, Eric Biggers <ebiggers3@gmail.com> wrote:
> From: Eric Biggers <ebiggers@google.com>
>
> The generic ChaCha20 implementation has a cra_alignmask of 3, which
> ensures that the key passed into crypto_chacha20_setkey() and the IV
> passed into crypto_chacha20_init() are 4-byte aligned.  However, these
> functions are also called from the ARM and ARM64 implementations of
> ChaCha20, which intentionally do not have a cra_alignmask set.  This is
> broken because 32-bit words are being loaded from potentially-unaligned
> buffers without the unaligned access macros.
>
> Fix it by using the unaligned access macros when loading the key and IV.
>
> Signed-off-by: Eric Biggers <ebiggers@google.com>

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>



* Re: [PATCH 3/5] crypto: chacha20 - Remove cra_alignmask
From: Ard Biesheuvel @ 2017-11-22 20:30 UTC
  To: Eric Biggers
  Cc: linux-crypto@vger.kernel.org, Herbert Xu, Theodore Ts'o,
	Jason A. Donenfeld, Martin Willi, linux-kernel@vger.kernel.org,
	Eric Biggers

On 22 November 2017 at 19:51, Eric Biggers <ebiggers3@gmail.com> wrote:
> From: Eric Biggers <ebiggers@google.com>
>
> Now that crypto_chacha20_setkey() and crypto_chacha20_init() use the
> unaligned access macros and crypto_xor() also accepts unaligned buffers,
> there is no need to have a cra_alignmask set for chacha20-generic.
>
> Signed-off-by: Eric Biggers <ebiggers@google.com>

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>



* Re: [PATCH 4/5] crypto: x86/chacha20 - Remove cra_alignmask
From: Ard Biesheuvel @ 2017-11-22 20:31 UTC
  To: Eric Biggers
  Cc: linux-crypto@vger.kernel.org, Herbert Xu, Theodore Ts'o,
	Jason A. Donenfeld, Martin Willi, linux-kernel@vger.kernel.org,
	Eric Biggers

On 22 November 2017 at 19:51, Eric Biggers <ebiggers3@gmail.com> wrote:
> From: Eric Biggers <ebiggers@google.com>
>
> Now that the generic ChaCha20 implementation no longer needs a
> cra_alignmask, the x86 one doesn't either -- given that the x86
> implementation doesn't need the alignment itself.
>
> Signed-off-by: Eric Biggers <ebiggers@google.com>

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>



* Re: [PATCH 5/5] crypto: chacha20 - Fix keystream alignment for chacha20_block()
From: Ard Biesheuvel @ 2017-11-22 20:51 UTC
  To: Eric Biggers
  Cc: linux-crypto@vger.kernel.org, Herbert Xu, Theodore Ts'o,
	Jason A. Donenfeld, Martin Willi, linux-kernel@vger.kernel.org,
	Eric Biggers

On 22 November 2017 at 19:51, Eric Biggers <ebiggers3@gmail.com> wrote:
> From: Eric Biggers <ebiggers@google.com>
>
> When chacha20_block() outputs the keystream block, it uses 'u32' stores
> directly.  However, the callers (crypto/chacha20_generic.c and
> drivers/char/random.c) declare the keystream buffer as a 'u8' array,
> which is not guaranteed to have the needed alignment.
>
> Fix it by having both callers declare the keystream as a 'u32' array.
> For now this is preferable to switching over to the unaligned access
> macros because chacha20_block() is only being used in cases where we can
> easily control the alignment (stack buffers).
>

Given this paragraph, I think we agree the correct way to fix this
would be to make chacha20_block() adhere to its prototype, so if we
deviate from that, there should be a good reason. On which
architecture that cares about alignment is this expected to result in
a measurable performance benefit?



* Re: [PATCH 5/5] crypto: chacha20 - Fix keystream alignment for chacha20_block()
From: Eric Biggers @ 2017-11-22 21:29 UTC
  To: Ard Biesheuvel
  Cc: linux-crypto@vger.kernel.org, Herbert Xu, Theodore Ts'o,
	Jason A. Donenfeld, Martin Willi, linux-kernel@vger.kernel.org,
	Eric Biggers

On Wed, Nov 22, 2017 at 08:51:57PM +0000, Ard Biesheuvel wrote:
> On 22 November 2017 at 19:51, Eric Biggers <ebiggers3@gmail.com> wrote:
> > From: Eric Biggers <ebiggers@google.com>
> >
> > When chacha20_block() outputs the keystream block, it uses 'u32' stores
> > directly.  However, the callers (crypto/chacha20_generic.c and
> > drivers/char/random.c) declare the keystream buffer as a 'u8' array,
> > which is not guaranteed to have the needed alignment.
> >
> > Fix it by having both callers declare the keystream as a 'u32' array.
> > For now this is preferable to switching over to the unaligned access
> > macros because chacha20_block() is only being used in cases where we can
> > easily control the alignment (stack buffers).
> >
> 
> Given this paragraph, I think we agree the correct way to fix this
> would be to make chacha20_block() adhere to its prototype, so if we
> deviate from that, there should be a good reason. On which
> architecture that cares about alignment is this expected to result in
> a measurable performance benefit?
> 

Well, variables on the stack tend to be 4 or even 8-byte aligned anyway, so this
change probably doesn't make a difference in practice currently.  But it still
should be fixed, in case it does become a problem.

We could certainly leave the type as a u8 array and use
put_unaligned_le32() instead; that would be a simpler change.  But it
would be slower on architectures where a potentially-unaligned access
requires multiple instructions.
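
For concreteness, that alternative would have looked something like
this in lib/chacha20.c (sketch only, untested):

	/* With #include <asm/unaligned.h> added and the 'u8 *stream'
	 * prototype kept, the output loop would become: */
	for (i = 0; i < ARRAY_SIZE(x); i++)
		put_unaligned_le32(x[i] + state[i], stream + i * sizeof(u32));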

Eric


* Re: [PATCH 5/5] crypto: chacha20 - Fix keystream alignment for chacha20_block()
From: Ard Biesheuvel @ 2017-11-22 22:06 UTC
  To: Eric Biggers
  Cc: linux-crypto@vger.kernel.org, Herbert Xu, Theodore Ts'o,
	Jason A. Donenfeld, Martin Willi, linux-kernel@vger.kernel.org,
	Eric Biggers

On 22 November 2017 at 21:29, Eric Biggers <ebiggers3@gmail.com> wrote:
> On Wed, Nov 22, 2017 at 08:51:57PM +0000, Ard Biesheuvel wrote:
>> On 22 November 2017 at 19:51, Eric Biggers <ebiggers3@gmail.com> wrote:
>> > From: Eric Biggers <ebiggers@google.com>
>> >
>> > When chacha20_block() outputs the keystream block, it uses 'u32' stores
>> > directly.  However, the callers (crypto/chacha20_generic.c and
>> > drivers/char/random.c) declare the keystream buffer as a 'u8' array,
>> > which is not guaranteed to have the needed alignment.
>> >
>> > Fix it by having both callers declare the keystream as a 'u32' array.
>> > For now this is preferable to switching over to the unaligned access
>> > macros because chacha20_block() is only being used in cases where we can
>> > easily control the alignment (stack buffers).
>> >
>>
>> Given this paragraph, I think we agree the correct way to fix this
>> would be to make chacha20_block() adhere to its prototype, so if we
>> deviate from that, there should be a good reason. On which
>> architecture that cares about alignment is this expected to result in
>> a measurable performance benefit?
>>
>
> Well, variables on the stack tend to be 4 or even 8-byte aligned anyway, so this
> change probably doesn't make a difference in practice currently.  But it still
> should be fixed, in case it does become a problem.
>

Agreed.

> We could certainly leave the type as a u8 array and use
> put_unaligned_le32() instead; that would be a simpler change.  But it
> would be slower on architectures where a potentially-unaligned access
> requires multiple instructions.
>

The access itself would be slower, yes. But given the amount of work
performed in chacha20_block(), I seriously doubt that would actually
matter in practice.


* Re: [PATCH 0/5] crypto: chacha20 - Alignment fixes
From: Herbert Xu @ 2017-11-29  6:39 UTC
  To: Eric Biggers
  Cc: linux-crypto, Theodore Ts'o, Jason A. Donenfeld,
	Martin Willi, Ard Biesheuvel, linux-kernel, Eric Biggers

On Wed, Nov 22, 2017 at 11:51:34AM -0800, Eric Biggers wrote:
> From: Eric Biggers <ebiggers@google.com>
> 
> This series fixes potentially unaligned memory accesses when loading the
> initial state, key, and IV for ChaCha20, and when outputting each
> keystream block.
> 
> It also removes the cra_alignmask from the generic and x86 ChaCha20
> implementations, now that it is no longer needed.
> 
> Eric Biggers (5):
>   crypto: chacha20 - Fix unaligned access when loading constants
>   crypto: chacha20 - Use unaligned access macros when loading key and IV
>   crypto: chacha20 - Remove cra_alignmask
>   crypto: x86/chacha20 - Remove cra_alignmask
>   crypto: chacha20 - Fix keystream alignment for chacha20_block()

All applied.  Thanks.
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

