* [PATCH v5 01/11] crypto: xcbc: Remove VLA usage
2018-07-17 4:21 [PATCH v5 00/11] crypto: Remove VLA usage Kees Cook
@ 2018-07-17 4:21 ` Kees Cook
2018-07-17 6:35 ` Herbert Xu
2018-07-17 4:21 ` [PATCH v5 02/11] crypto: cbc: " Kees Cook
` (9 subsequent siblings)
10 siblings, 1 reply; 19+ messages in thread
From: Kees Cook @ 2018-07-17 4:21 UTC (permalink / raw)
To: Herbert Xu
Cc: Kees Cook, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
Rabin Vincent, Tim Chen, linux-crypto, qat-linux, dm-devel,
linux-kernel
In the quest to remove all stack VLA usage from the kernel[1], this uses
the maximum blocksize and adds a sanity check. For xcbc, the blocksize
must always be 16, so use that, since it's already being enforced during
instantiation.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
---
crypto/xcbc.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/crypto/xcbc.c b/crypto/xcbc.c
index 25c75af50d3f..7aa03beed795 100644
--- a/crypto/xcbc.c
+++ b/crypto/xcbc.c
@@ -57,6 +57,8 @@ struct xcbc_desc_ctx {
u8 ctx[];
};
+#define XCBC_BLOCKSIZE 16
+
static int crypto_xcbc_digest_setkey(struct crypto_shash *parent,
const u8 *inkey, unsigned int keylen)
{
@@ -65,7 +67,10 @@ static int crypto_xcbc_digest_setkey(struct crypto_shash *parent,
int bs = crypto_shash_blocksize(parent);
u8 *consts = PTR_ALIGN(&ctx->ctx[0], alignmask + 1);
int err = 0;
- u8 key1[bs];
+ u8 key1[XCBC_BLOCKSIZE];
+
+ if (WARN_ON(bs > sizeof(key1)))
+ return -EINVAL;
if ((err = crypto_cipher_setkey(ctx->child, inkey, keylen)))
return err;
@@ -212,7 +217,7 @@ static int xcbc_create(struct crypto_template *tmpl, struct rtattr **tb)
return PTR_ERR(alg);
switch(alg->cra_blocksize) {
- case 16:
+ case XCBC_BLOCKSIZE:
break;
default:
goto out_put_alg;
--
2.17.1
^ permalink raw reply related [flat|nested] 19+ messages in thread
* Re: [PATCH v5 01/11] crypto: xcbc: Remove VLA usage
2018-07-17 4:21 ` [PATCH v5 01/11] crypto: xcbc: " Kees Cook
@ 2018-07-17 6:35 ` Herbert Xu
0 siblings, 0 replies; 19+ messages in thread
From: Herbert Xu @ 2018-07-17 6:35 UTC (permalink / raw)
To: Kees Cook
Cc: Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva, Alasdair Kergon,
Giovanni Cabiddu, Lars Persson, Mike Snitzer, Rabin Vincent,
Tim Chen, linux-crypto, qat-linux, dm-devel, linux-kernel
On Mon, Jul 16, 2018 at 09:21:40PM -0700, Kees Cook wrote:
> In the quest to remove all stack VLA usage from the kernel[1], this uses
> the maximum blocksize and adds a sanity check. For xcbc, the blocksize
> must always be 16, so use that, since it's already being enforced during
> instantiation.
>
> [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
>
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
> crypto/xcbc.c | 9 +++++++--
> 1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/crypto/xcbc.c b/crypto/xcbc.c
> index 25c75af50d3f..7aa03beed795 100644
> --- a/crypto/xcbc.c
> +++ b/crypto/xcbc.c
> @@ -57,6 +57,8 @@ struct xcbc_desc_ctx {
> u8 ctx[];
> };
>
> +#define XCBC_BLOCKSIZE 16
> +
> static int crypto_xcbc_digest_setkey(struct crypto_shash *parent,
> const u8 *inkey, unsigned int keylen)
> {
> @@ -65,7 +67,10 @@ static int crypto_xcbc_digest_setkey(struct crypto_shash *parent,
> int bs = crypto_shash_blocksize(parent);
> u8 *consts = PTR_ALIGN(&ctx->ctx[0], alignmask + 1);
> int err = 0;
> - u8 key1[bs];
> + u8 key1[XCBC_BLOCKSIZE];
> +
> + if (WARN_ON(bs > sizeof(key1)))
> + return -EINVAL;
Please remove this. You already checked it in xcbc_create.
In fact, you should just make bs XCBC_BLOCKSIZE unconditionally
in this function.
Cheers,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH v5 02/11] crypto: cbc: Remove VLA usage
2018-07-17 4:21 [PATCH v5 00/11] crypto: Remove VLA usage Kees Cook
2018-07-17 4:21 ` [PATCH v5 01/11] crypto: xcbc: " Kees Cook
@ 2018-07-17 4:21 ` Kees Cook
2018-07-17 4:21 ` [PATCH v5 03/11] crypto: shash: " Kees Cook
` (8 subsequent siblings)
10 siblings, 0 replies; 19+ messages in thread
From: Kees Cook @ 2018-07-17 4:21 UTC (permalink / raw)
To: Herbert Xu
Cc: Kees Cook, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
Rabin Vincent, Tim Chen, linux-crypto, qat-linux, dm-devel,
linux-kernel
In the quest to remove all stack VLA usage from the kernel[1], this
uses the upper bound on blocksize. Since this is always a cipher
blocksize, use the existing cipher max blocksize.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
---
include/crypto/cbc.h | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/include/crypto/cbc.h b/include/crypto/cbc.h
index f5b8bfc22e6d..47db0aac2ab9 100644
--- a/include/crypto/cbc.h
+++ b/include/crypto/cbc.h
@@ -113,7 +113,9 @@ static inline int crypto_cbc_decrypt_inplace(
unsigned int bsize = crypto_skcipher_blocksize(tfm);
unsigned int nbytes = walk->nbytes;
u8 *src = walk->src.virt.addr;
- u8 last_iv[bsize];
+ u8 last_iv[MAX_CIPHER_BLOCKSIZE];
+
+ BUG_ON(bsize > sizeof(last_iv));
/* Start of the last block. */
src += nbytes - (nbytes & (bsize - 1)) - bsize;
--
2.17.1
^ permalink raw reply related [flat|nested] 19+ messages in thread
* [PATCH v5 03/11] crypto: shash: Remove VLA usage
2018-07-17 4:21 [PATCH v5 00/11] crypto: Remove VLA usage Kees Cook
2018-07-17 4:21 ` [PATCH v5 01/11] crypto: xcbc: " Kees Cook
2018-07-17 4:21 ` [PATCH v5 02/11] crypto: cbc: " Kees Cook
@ 2018-07-17 4:21 ` Kees Cook
2018-07-17 4:21 ` [PATCH v5 04/11] dm integrity: " Kees Cook
` (7 subsequent siblings)
10 siblings, 0 replies; 19+ messages in thread
From: Kees Cook @ 2018-07-17 4:21 UTC (permalink / raw)
To: Herbert Xu
Cc: Kees Cook, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
Rabin Vincent, Tim Chen, linux-crypto, qat-linux, dm-devel,
linux-kernel
In the quest to remove all stack VLA usage from the kernel[1], this
removes the VLAs in SHASH_DESC_ON_STACK (via crypto_shash_descsize())
by using the maximum allowable size (which is now more clearly captured
in a macro). Similar limits are turned into macros as well.
A review of existing sizes shows that SHA512_DIGEST_SIZE (64) is the
largest digest size and that sizeof(struct sha3_state) (360) is the
largest descriptor size. The corresponding maximums are reduced.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
---
crypto/shash.c | 6 +++---
include/crypto/hash.h | 6 +++++-
2 files changed, 8 insertions(+), 4 deletions(-)
diff --git a/crypto/shash.c b/crypto/shash.c
index 5d732c6bb4b2..ab6902c6dae7 100644
--- a/crypto/shash.c
+++ b/crypto/shash.c
@@ -458,9 +458,9 @@ static int shash_prepare_alg(struct shash_alg *alg)
{
struct crypto_alg *base = &alg->base;
- if (alg->digestsize > PAGE_SIZE / 8 ||
- alg->descsize > PAGE_SIZE / 8 ||
- alg->statesize > PAGE_SIZE / 8)
+ if (alg->digestsize > SHASH_MAX_DIGESTSIZE ||
+ alg->descsize > SHASH_MAX_DESCSIZE ||
+ alg->statesize > SHASH_MAX_STATESIZE)
return -EINVAL;
base->cra_type = &crypto_shash_type;
diff --git a/include/crypto/hash.h b/include/crypto/hash.h
index 76e432cab75d..ae14cc0e0cdb 100644
--- a/include/crypto/hash.h
+++ b/include/crypto/hash.h
@@ -151,9 +151,13 @@ struct shash_desc {
void *__ctx[] CRYPTO_MINALIGN_ATTR;
};
+#define SHASH_MAX_DIGESTSIZE 64
+#define SHASH_MAX_DESCSIZE 360
+#define SHASH_MAX_STATESIZE 512
+
#define SHASH_DESC_ON_STACK(shash, ctx) \
char __##shash##_desc[sizeof(struct shash_desc) + \
- crypto_shash_descsize(ctx)] CRYPTO_MINALIGN_ATTR; \
+ SHASH_MAX_DESCSIZE] CRYPTO_MINALIGN_ATTR; \
struct shash_desc *shash = (struct shash_desc *)__##shash##_desc
/**
--
2.17.1
^ permalink raw reply related [flat|nested] 19+ messages in thread
* [PATCH v5 04/11] dm integrity: Remove VLA usage
2018-07-17 4:21 [PATCH v5 00/11] crypto: Remove VLA usage Kees Cook
` (2 preceding siblings ...)
2018-07-17 4:21 ` [PATCH v5 03/11] crypto: shash: " Kees Cook
@ 2018-07-17 4:21 ` Kees Cook
2018-07-17 4:21 ` [PATCH v5 05/11] crypto: ahash: " Kees Cook
` (6 subsequent siblings)
10 siblings, 0 replies; 19+ messages in thread
From: Kees Cook @ 2018-07-17 4:21 UTC (permalink / raw)
To: Herbert Xu
Cc: Giovanni Cabiddu, Kees Cook, Arnd Bergmann, Gustavo A. R. Silva,
Mike Snitzer, Eric Biggers, qat-linux, linux-kernel, dm-devel,
linux-crypto, Lars Persson, Tim Chen, Alasdair Kergon,
Rabin Vincent
In the quest to remove all stack VLA usage from the kernel[1], this uses
the new SHASH_MAX_DIGESTSIZE from the crypto layer to set the upper
bound on stack buffer usage.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
---
drivers/md/dm-integrity.c | 23 +++++++++++++++++------
1 file changed, 17 insertions(+), 6 deletions(-)
diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
index 86438b2f10dd..85e8ce1625a2 100644
--- a/drivers/md/dm-integrity.c
+++ b/drivers/md/dm-integrity.c
@@ -521,7 +521,12 @@ static void section_mac(struct dm_integrity_c *ic, unsigned section, __u8 result
}
memset(result + size, 0, JOURNAL_MAC_SIZE - size);
} else {
- __u8 digest[size];
+ __u8 digest[SHASH_MAX_DIGESTSIZE];
+
+ if (WARN_ON(size > sizeof(digest))) {
+ dm_integrity_io_error(ic, "digest_size", -EINVAL);
+ goto err;
+ }
r = crypto_shash_final(desc, digest);
if (unlikely(r)) {
dm_integrity_io_error(ic, "crypto_shash_final", r);
@@ -1244,7 +1249,7 @@ static void integrity_metadata(struct work_struct *w)
struct bio *bio = dm_bio_from_per_bio_data(dio, sizeof(struct dm_integrity_io));
char *checksums;
unsigned extra_space = unlikely(digest_size > ic->tag_size) ? digest_size - ic->tag_size : 0;
- char checksums_onstack[ic->tag_size + extra_space];
+ char checksums_onstack[SHASH_MAX_DIGESTSIZE];
unsigned sectors_to_process = dio->range.n_sectors;
sector_t sector = dio->range.logical_sector;
@@ -1253,8 +1258,14 @@ static void integrity_metadata(struct work_struct *w)
checksums = kmalloc((PAGE_SIZE >> SECTOR_SHIFT >> ic->sb->log2_sectors_per_block) * ic->tag_size + extra_space,
GFP_NOIO | __GFP_NORETRY | __GFP_NOWARN);
- if (!checksums)
+ if (!checksums) {
checksums = checksums_onstack;
+ if (WARN_ON(extra_space &&
+ digest_size > sizeof(checksums_onstack))) {
+ r = -EINVAL;
+ goto error;
+ }
+ }
__bio_for_each_segment(bv, bio, iter, dio->orig_bi_iter) {
unsigned pos;
@@ -1466,7 +1477,7 @@ static bool __journal_read_write(struct dm_integrity_io *dio, struct bio *bio,
} while (++s < ic->sectors_per_block);
#ifdef INTERNAL_VERIFY
if (ic->internal_hash) {
- char checksums_onstack[max(crypto_shash_digestsize(ic->internal_hash), ic->tag_size)];
+ char checksums_onstack[max(SHASH_MAX_DIGESTSIZE, MAX_TAG_SIZE)];
integrity_sector_checksum(ic, logical_sector, mem + bv.bv_offset, checksums_onstack);
if (unlikely(memcmp(checksums_onstack, journal_entry_tag(ic, je), ic->tag_size))) {
@@ -1516,7 +1527,7 @@ static bool __journal_read_write(struct dm_integrity_io *dio, struct bio *bio,
if (ic->internal_hash) {
unsigned digest_size = crypto_shash_digestsize(ic->internal_hash);
if (unlikely(digest_size > ic->tag_size)) {
- char checksums_onstack[digest_size];
+ char checksums_onstack[SHASH_MAX_DIGESTSIZE];
integrity_sector_checksum(ic, logical_sector, (char *)js, checksums_onstack);
memcpy(journal_entry_tag(ic, je), checksums_onstack, ic->tag_size);
} else
@@ -1937,7 +1948,7 @@ static void do_journal_write(struct dm_integrity_c *ic, unsigned write_start,
unlikely(from_replay) &&
#endif
ic->internal_hash) {
- char test_tag[max(crypto_shash_digestsize(ic->internal_hash), ic->tag_size)];
+ char test_tag[max_t(size_t, SHASH_MAX_DIGESTSIZE, MAX_TAG_SIZE)];
integrity_sector_checksum(ic, sec + ((l - j) << ic->sb->log2_sectors_per_block),
(char *)access_journal_data(ic, i, l), test_tag);
--
2.17.1
^ permalink raw reply related [flat|nested] 19+ messages in thread
* [PATCH v5 05/11] crypto: ahash: Remove VLA usage
2018-07-17 4:21 [PATCH v5 00/11] crypto: Remove VLA usage Kees Cook
` (3 preceding siblings ...)
2018-07-17 4:21 ` [PATCH v5 04/11] dm integrity: " Kees Cook
@ 2018-07-17 4:21 ` Kees Cook
2018-07-17 16:39 ` [dm-devel] " Eric Biggers
2018-07-17 4:21 ` [PATCH v5 06/11] dm verity fec: " Kees Cook
` (5 subsequent siblings)
10 siblings, 1 reply; 19+ messages in thread
From: Kees Cook @ 2018-07-17 4:21 UTC (permalink / raw)
To: Herbert Xu
Cc: Giovanni Cabiddu, Kees Cook, Arnd Bergmann, Gustavo A. R. Silva,
Mike Snitzer, Eric Biggers, qat-linux, linux-kernel, dm-devel,
linux-crypto, Lars Persson, Tim Chen, Alasdair Kergon,
Rabin Vincent
In the quest to remove all stack VLA usage from the kernel[1], this
introduces max size macros for ahash, as already done for shash, and
adjusts the crypto user to the max state size.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
---
crypto/ahash.c | 4 ++--
crypto/algif_hash.c | 2 +-
include/crypto/hash.h | 3 +++
3 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/crypto/ahash.c b/crypto/ahash.c
index a64c143165b1..6435bdbe42fd 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -550,8 +550,8 @@ static int ahash_prepare_alg(struct ahash_alg *alg)
{
struct crypto_alg *base = &alg->halg.base;
- if (alg->halg.digestsize > PAGE_SIZE / 8 ||
- alg->halg.statesize > PAGE_SIZE / 8 ||
+ if (alg->halg.digestsize > AHASH_MAX_DIGESTSIZE ||
+ alg->halg.statesize > AHASH_MAX_STATESIZE ||
alg->halg.statesize == 0)
return -EINVAL;
diff --git a/crypto/algif_hash.c b/crypto/algif_hash.c
index bfcf595fd8f9..8974ee8ebead 100644
--- a/crypto/algif_hash.c
+++ b/crypto/algif_hash.c
@@ -239,7 +239,7 @@ static int hash_accept(struct socket *sock, struct socket *newsock, int flags,
struct alg_sock *ask = alg_sk(sk);
struct hash_ctx *ctx = ask->private;
struct ahash_request *req = &ctx->req;
- char state[crypto_ahash_statesize(crypto_ahash_reqtfm(req)) ? : 1];
+ char state[AHASH_MAX_STATESIZE];
struct sock *sk2;
struct alg_sock *ask2;
struct hash_ctx *ctx2;
diff --git a/include/crypto/hash.h b/include/crypto/hash.h
index ae14cc0e0cdb..4fcd0e2368cd 100644
--- a/include/crypto/hash.h
+++ b/include/crypto/hash.h
@@ -64,6 +64,9 @@ struct ahash_request {
void *__ctx[] CRYPTO_MINALIGN_ATTR;
};
+#define AHASH_MAX_DIGESTSIZE 512
+#define AHASH_MAX_STATESIZE 512
+
#define AHASH_REQUEST_ON_STACK(name, ahash) \
char __##name##_desc[sizeof(struct ahash_request) + \
crypto_ahash_reqsize(ahash)] CRYPTO_MINALIGN_ATTR; \
--
2.17.1
^ permalink raw reply related [flat|nested] 19+ messages in thread
* Re: [dm-devel] [PATCH v5 05/11] crypto: ahash: Remove VLA usage
2018-07-17 4:21 ` [PATCH v5 05/11] crypto: ahash: " Kees Cook
@ 2018-07-17 16:39 ` Eric Biggers
2018-07-17 20:07 ` Kees Cook
0 siblings, 1 reply; 19+ messages in thread
From: Eric Biggers @ 2018-07-17 16:39 UTC (permalink / raw)
To: Kees Cook
Cc: Herbert Xu, Giovanni Cabiddu, Arnd Bergmann, Gustavo A. R. Silva,
Mike Snitzer, Eric Biggers, qat-linux, linux-kernel, dm-devel,
linux-crypto, Lars Persson, Tim Chen, Alasdair Kergon,
Rabin Vincent
On Mon, Jul 16, 2018 at 09:21:44PM -0700, Kees Cook wrote:
> In the quest to remove all stack VLA usage from the kernel[1], this
> introduces max size macros for ahash, as already done for shash, and
> adjust the crypto user to max state size.
>
> [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
>
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
> crypto/ahash.c | 4 ++--
> crypto/algif_hash.c | 2 +-
> include/crypto/hash.h | 3 +++
> 3 files changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/crypto/ahash.c b/crypto/ahash.c
> index a64c143165b1..6435bdbe42fd 100644
> --- a/crypto/ahash.c
> +++ b/crypto/ahash.c
> @@ -550,8 +550,8 @@ static int ahash_prepare_alg(struct ahash_alg *alg)
> {
> struct crypto_alg *base = &alg->halg.base;
>
> - if (alg->halg.digestsize > PAGE_SIZE / 8 ||
> - alg->halg.statesize > PAGE_SIZE / 8 ||
> + if (alg->halg.digestsize > AHASH_MAX_DIGESTSIZE ||
> + alg->halg.statesize > AHASH_MAX_STATESIZE ||
> alg->halg.statesize == 0)
> return -EINVAL;
>
> diff --git a/crypto/algif_hash.c b/crypto/algif_hash.c
> index bfcf595fd8f9..8974ee8ebead 100644
> --- a/crypto/algif_hash.c
> +++ b/crypto/algif_hash.c
> @@ -239,7 +239,7 @@ static int hash_accept(struct socket *sock, struct socket *newsock, int flags,
> struct alg_sock *ask = alg_sk(sk);
> struct hash_ctx *ctx = ask->private;
> struct ahash_request *req = &ctx->req;
> - char state[crypto_ahash_statesize(crypto_ahash_reqtfm(req)) ? : 1];
> + char state[AHASH_MAX_STATESIZE];
> struct sock *sk2;
> struct alg_sock *ask2;
> struct hash_ctx *ctx2;
> diff --git a/include/crypto/hash.h b/include/crypto/hash.h
> index ae14cc0e0cdb..4fcd0e2368cd 100644
> --- a/include/crypto/hash.h
> +++ b/include/crypto/hash.h
> @@ -64,6 +64,9 @@ struct ahash_request {
> void *__ctx[] CRYPTO_MINALIGN_ATTR;
> };
>
> +#define AHASH_MAX_DIGESTSIZE 512
> +#define AHASH_MAX_STATESIZE 512
> +
Why is AHASH_MAX_DIGESTSIZE (512) so much larger than SHASH_MAX_DIGESTSIZE (64)?
I would have expected them to be the same.
- Eric
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [dm-devel] [PATCH v5 05/11] crypto: ahash: Remove VLA usage
2018-07-17 16:39 ` [dm-devel] " Eric Biggers
@ 2018-07-17 20:07 ` Kees Cook
2018-07-17 23:12 ` Eric Biggers
0 siblings, 1 reply; 19+ messages in thread
From: Kees Cook @ 2018-07-17 20:07 UTC (permalink / raw)
To: Eric Biggers
Cc: Herbert Xu, Giovanni Cabiddu, Arnd Bergmann, Gustavo A. R. Silva,
Mike Snitzer, Eric Biggers, qat-linux, LKML, dm-devel,
linux-crypto, Lars Persson, Tim Chen, Alasdair Kergon,
Rabin Vincent
On Tue, Jul 17, 2018 at 9:39 AM, Eric Biggers <ebiggers3@gmail.com> wrote:
> On Mon, Jul 16, 2018 at 09:21:44PM -0700, Kees Cook wrote:
>> In the quest to remove all stack VLA usage from the kernel[1], this
>> introduces max size macros for ahash, as already done for shash, and
>> adjust the crypto user to max state size.
>>
>> [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
>>
>> Signed-off-by: Kees Cook <keescook@chromium.org>
>> ---
>> crypto/ahash.c | 4 ++--
>> crypto/algif_hash.c | 2 +-
>> include/crypto/hash.h | 3 +++
>> 3 files changed, 6 insertions(+), 3 deletions(-)
>>
>> diff --git a/crypto/ahash.c b/crypto/ahash.c
>> index a64c143165b1..6435bdbe42fd 100644
>> --- a/crypto/ahash.c
>> +++ b/crypto/ahash.c
>> @@ -550,8 +550,8 @@ static int ahash_prepare_alg(struct ahash_alg *alg)
>> {
>> struct crypto_alg *base = &alg->halg.base;
>>
>> - if (alg->halg.digestsize > PAGE_SIZE / 8 ||
>> - alg->halg.statesize > PAGE_SIZE / 8 ||
>> + if (alg->halg.digestsize > AHASH_MAX_DIGESTSIZE ||
>> + alg->halg.statesize > AHASH_MAX_STATESIZE ||
>> alg->halg.statesize == 0)
>> return -EINVAL;
>>
>> diff --git a/crypto/algif_hash.c b/crypto/algif_hash.c
>> index bfcf595fd8f9..8974ee8ebead 100644
>> --- a/crypto/algif_hash.c
>> +++ b/crypto/algif_hash.c
>> @@ -239,7 +239,7 @@ static int hash_accept(struct socket *sock, struct socket *newsock, int flags,
>> struct alg_sock *ask = alg_sk(sk);
>> struct hash_ctx *ctx = ask->private;
>> struct ahash_request *req = &ctx->req;
>> - char state[crypto_ahash_statesize(crypto_ahash_reqtfm(req)) ? : 1];
>> + char state[AHASH_MAX_STATESIZE];
>> struct sock *sk2;
>> struct alg_sock *ask2;
>> struct hash_ctx *ctx2;
>> diff --git a/include/crypto/hash.h b/include/crypto/hash.h
>> index ae14cc0e0cdb..4fcd0e2368cd 100644
>> --- a/include/crypto/hash.h
>> +++ b/include/crypto/hash.h
>> @@ -64,6 +64,9 @@ struct ahash_request {
>> void *__ctx[] CRYPTO_MINALIGN_ATTR;
>> };
>>
>> +#define AHASH_MAX_DIGESTSIZE 512
>> +#define AHASH_MAX_STATESIZE 512
>> +
>
> Why is AHASH_MAX_DIGESTSIZE (512) so much larger than SHASH_MAX_DIGESTSIZE (64)?
> I would have expected them to be the same.
This was a direct replacement of the PAGE_SIZE / 8 original limit.
Since this didn't trip any frame size warnings, it seemed like we
didn't need to do the more narrow limitations done elsewhere.
I am, of course, happy to do a manual review and find a lower
alternative if that's desired.
-Kees
--
Kees Cook
Pixel Security
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [dm-devel] [PATCH v5 05/11] crypto: ahash: Remove VLA usage
2018-07-17 20:07 ` Kees Cook
@ 2018-07-17 23:12 ` Eric Biggers
2018-07-19 3:14 ` Kees Cook
0 siblings, 1 reply; 19+ messages in thread
From: Eric Biggers @ 2018-07-17 23:12 UTC (permalink / raw)
To: Kees Cook
Cc: Herbert Xu, Giovanni Cabiddu, Arnd Bergmann, Gustavo A. R. Silva,
Mike Snitzer, Eric Biggers, qat-linux, LKML, dm-devel,
linux-crypto, Lars Persson, Tim Chen, Alasdair Kergon,
Rabin Vincent
On Tue, Jul 17, 2018 at 01:07:50PM -0700, Kees Cook wrote:
> On Tue, Jul 17, 2018 at 9:39 AM, Eric Biggers <ebiggers3@gmail.com> wrote:
> > On Mon, Jul 16, 2018 at 09:21:44PM -0700, Kees Cook wrote:
> >> In the quest to remove all stack VLA usage from the kernel[1], this
> >> introduces max size macros for ahash, as already done for shash, and
> >> adjust the crypto user to max state size.
> >>
> >> [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
> >>
> >> Signed-off-by: Kees Cook <keescook@chromium.org>
> >> ---
> >> crypto/ahash.c | 4 ++--
> >> crypto/algif_hash.c | 2 +-
> >> include/crypto/hash.h | 3 +++
> >> 3 files changed, 6 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/crypto/ahash.c b/crypto/ahash.c
> >> index a64c143165b1..6435bdbe42fd 100644
> >> --- a/crypto/ahash.c
> >> +++ b/crypto/ahash.c
> >> @@ -550,8 +550,8 @@ static int ahash_prepare_alg(struct ahash_alg *alg)
> >> {
> >> struct crypto_alg *base = &alg->halg.base;
> >>
> >> - if (alg->halg.digestsize > PAGE_SIZE / 8 ||
> >> - alg->halg.statesize > PAGE_SIZE / 8 ||
> >> + if (alg->halg.digestsize > AHASH_MAX_DIGESTSIZE ||
> >> + alg->halg.statesize > AHASH_MAX_STATESIZE ||
> >> alg->halg.statesize == 0)
> >> return -EINVAL;
> >>
> >> diff --git a/crypto/algif_hash.c b/crypto/algif_hash.c
> >> index bfcf595fd8f9..8974ee8ebead 100644
> >> --- a/crypto/algif_hash.c
> >> +++ b/crypto/algif_hash.c
> >> @@ -239,7 +239,7 @@ static int hash_accept(struct socket *sock, struct socket *newsock, int flags,
> >> struct alg_sock *ask = alg_sk(sk);
> >> struct hash_ctx *ctx = ask->private;
> >> struct ahash_request *req = &ctx->req;
> >> - char state[crypto_ahash_statesize(crypto_ahash_reqtfm(req)) ? : 1];
> >> + char state[AHASH_MAX_STATESIZE];
> >> struct sock *sk2;
> >> struct alg_sock *ask2;
> >> struct hash_ctx *ctx2;
> >> diff --git a/include/crypto/hash.h b/include/crypto/hash.h
> >> index ae14cc0e0cdb..4fcd0e2368cd 100644
> >> --- a/include/crypto/hash.h
> >> +++ b/include/crypto/hash.h
> >> @@ -64,6 +64,9 @@ struct ahash_request {
> >> void *__ctx[] CRYPTO_MINALIGN_ATTR;
> >> };
> >>
> >> +#define AHASH_MAX_DIGESTSIZE 512
> >> +#define AHASH_MAX_STATESIZE 512
> >> +
> >
> > Why is AHASH_MAX_DIGESTSIZE (512) so much larger than SHASH_MAX_DIGESTSIZE (64)?
> > I would have expected them to be the same.
>
> This was a direct replacement of the PAGE_SIZE / 8 original limit.
> Since this didn't trip any frame size warnings, it seemed like we
> didn't need to do the more narrow limitations done elsewhere.
>
> I am, of course, happy to do a manual review and find a lower
> alternative if that's desired.
>
I just don't see why ahash algorithms would need such a huge maximum digest
size. Don't the 'ahash' algorithms all have 'shash' equivalents too? Is there
actually any hash algorithm, either shash or ahash, in the Linux kernel that has
a digest size greater than 64 bytes (512 bits)? Note that for a real
cryptographic hash there isn't really any need for a digest size larger than
that, since that already gives you 256-bit collision resistance; that's why
SHA-2 and SHA-3 max out at that size.
- Eric
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [dm-devel] [PATCH v5 05/11] crypto: ahash: Remove VLA usage
2018-07-17 23:12 ` Eric Biggers
@ 2018-07-19 3:14 ` Kees Cook
0 siblings, 0 replies; 19+ messages in thread
From: Kees Cook @ 2018-07-19 3:14 UTC (permalink / raw)
To: Eric Biggers
Cc: Herbert Xu, Giovanni Cabiddu, Arnd Bergmann, Gustavo A. R. Silva,
Mike Snitzer, Eric Biggers, qat-linux, LKML, dm-devel,
linux-crypto, Lars Persson, Tim Chen, Alasdair Kergon,
Rabin Vincent
On Tue, Jul 17, 2018 at 4:12 PM, Eric Biggers <ebiggers3@gmail.com> wrote:
> I just don't see why ahash algorithms would need such a huge maximum digest
> size. Don't the 'ahash' algorithms all have 'shash' equivalents too? Is there
> actually any hash algorithm, either shash or ahash, in the Linux kernel that has
> a digest size greater than 64 bytes (512 bits)? Note that for a real
> cryptographic hash there isn't really any need for a digest size larger than
> that, since that already gives you 256-bit collision resistance; that's why
> SHA-2 and SHA-3 max out at that size.
Yup, it certainly looks that way on investigation. I'll adjust both
ahash and shash to use the same #define.
-Kees
--
Kees Cook
Pixel Security
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH v5 06/11] dm verity fec: Remove VLA usage
2018-07-17 4:21 [PATCH v5 00/11] crypto: Remove VLA usage Kees Cook
` (4 preceding siblings ...)
2018-07-17 4:21 ` [PATCH v5 05/11] crypto: ahash: " Kees Cook
@ 2018-07-17 4:21 ` Kees Cook
2018-07-17 4:21 ` [PATCH v5 07/11] crypto alg: Introduce generic max blocksize and alignmask Kees Cook
` (4 subsequent siblings)
10 siblings, 0 replies; 19+ messages in thread
From: Kees Cook @ 2018-07-17 4:21 UTC (permalink / raw)
To: Herbert Xu
Cc: Kees Cook, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
Rabin Vincent, Tim Chen, linux-crypto, qat-linux, dm-devel,
linux-kernel
In the quest to remove all stack VLA usage from the kernel[1], this
uses the newly defined max digest size macro. Also adds a sanity-check
at use-time.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
---
drivers/md/dm-verity-fec.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/md/dm-verity-fec.c b/drivers/md/dm-verity-fec.c
index 684af08d0747..fe5cfd1a5fa5 100644
--- a/drivers/md/dm-verity-fec.c
+++ b/drivers/md/dm-verity-fec.c
@@ -212,12 +212,15 @@ static int fec_read_bufs(struct dm_verity *v, struct dm_verity_io *io,
struct dm_verity_fec_io *fio = fec_io(io);
u64 block, ileaved;
u8 *bbuf, *rs_block;
- u8 want_digest[v->digest_size];
+ u8 want_digest[AHASH_MAX_DIGESTSIZE];
unsigned n, k;
if (neras)
*neras = 0;
+ if (WARN_ON(v->digest_size > sizeof(want_digest)))
+ return -EINVAL;
+
/*
* read each of the rsn data blocks that are part of the RS block, and
* interleave contents to available bufs
--
2.17.1
^ permalink raw reply related [flat|nested] 19+ messages in thread
* [PATCH v5 07/11] crypto alg: Introduce generic max blocksize and alignmask
2018-07-17 4:21 [PATCH v5 00/11] crypto: Remove VLA usage Kees Cook
` (5 preceding siblings ...)
2018-07-17 4:21 ` [PATCH v5 06/11] dm verity fec: " Kees Cook
@ 2018-07-17 4:21 ` Kees Cook
2018-07-17 4:21 ` [PATCH v5 08/11] crypto: qat: Remove VLA usage Kees Cook
` (3 subsequent siblings)
10 siblings, 0 replies; 19+ messages in thread
From: Kees Cook @ 2018-07-17 4:21 UTC (permalink / raw)
To: Herbert Xu
Cc: Giovanni Cabiddu, Kees Cook, Arnd Bergmann, Gustavo A. R. Silva,
Mike Snitzer, Eric Biggers, qat-linux, linux-kernel, dm-devel,
linux-crypto, Lars Persson, Tim Chen, Alasdair Kergon,
Rabin Vincent
In the quest to remove all stack VLA usage from the kernel[1], this
exposes a new general upper bound on crypto blocksize and alignmask
(higher than for the existing cipher limits) for VLA removal,
and introduces new checks.
At present, the highest cra_alignmask in the kernel is 63. The highest
cra_blocksize is 144 (SHA3_224_BLOCK_SIZE, 18 8-byte words). For the
new blocksize limit, I went with 160 (20 8-byte words).
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
---
crypto/algapi.c | 7 ++++++-
include/crypto/algapi.h | 4 +++-
2 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/crypto/algapi.c b/crypto/algapi.c
index c0755cf4f53f..496fc51bf215 100644
--- a/crypto/algapi.c
+++ b/crypto/algapi.c
@@ -57,9 +57,14 @@ static int crypto_check_alg(struct crypto_alg *alg)
if (alg->cra_alignmask & (alg->cra_alignmask + 1))
return -EINVAL;
- if (alg->cra_blocksize > PAGE_SIZE / 8)
+ /* General maximums for all algs. */
+ if (alg->cra_alignmask > MAX_ALGAPI_ALIGNMASK)
return -EINVAL;
+ if (alg->cra_blocksize > MAX_ALGAPI_BLOCKSIZE)
+ return -EINVAL;
+
+ /* Lower maximums for specific alg types. */
if (!alg->cra_type && (alg->cra_flags & CRYPTO_ALG_TYPE_MASK) ==
CRYPTO_ALG_TYPE_CIPHER) {
if (alg->cra_alignmask > MAX_CIPHER_ALIGNMASK)
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
index bd5e8ccf1687..21371ac8f355 100644
--- a/include/crypto/algapi.h
+++ b/include/crypto/algapi.h
@@ -20,8 +20,10 @@
/*
* Maximum values for blocksize and alignmask, used to allocate
* static buffers that are big enough for any combination of
- * ciphers and architectures.
+ * algs and architectures. Ciphers have a lower maximum size.
*/
+#define MAX_ALGAPI_BLOCKSIZE 160
+#define MAX_ALGAPI_ALIGNMASK 63
#define MAX_CIPHER_BLOCKSIZE 16
#define MAX_CIPHER_ALIGNMASK 15
--
2.17.1
^ permalink raw reply related [flat|nested] 19+ messages in thread
* [PATCH v5 08/11] crypto: qat: Remove VLA usage
2018-07-17 4:21 [PATCH v5 00/11] crypto: Remove VLA usage Kees Cook
` (6 preceding siblings ...)
2018-07-17 4:21 ` [PATCH v5 07/11] crypto alg: Introduce generic max blocksize and alignmask Kees Cook
@ 2018-07-17 4:21 ` Kees Cook
2018-07-17 4:21 ` [PATCH v5 09/11] crypto: shash: Remove VLA usage in unaligned hashing Kees Cook
` (2 subsequent siblings)
10 siblings, 0 replies; 19+ messages in thread
From: Kees Cook @ 2018-07-17 4:21 UTC (permalink / raw)
To: Herbert Xu
Cc: Kees Cook, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
Rabin Vincent, Tim Chen, linux-crypto, qat-linux, dm-devel,
linux-kernel
In the quest to remove all stack VLA usage from the kernel[1], this uses
the new upper bound for the stack buffer. Also adds a sanity check.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
---
drivers/crypto/qat/qat_common/qat_algs.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c
index 1138e41d6805..a28edf7b792f 100644
--- a/drivers/crypto/qat/qat_common/qat_algs.c
+++ b/drivers/crypto/qat/qat_common/qat_algs.c
@@ -153,8 +153,8 @@ static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
struct sha512_state sha512;
int block_size = crypto_shash_blocksize(ctx->hash_tfm);
int digest_size = crypto_shash_digestsize(ctx->hash_tfm);
- char ipad[block_size];
- char opad[block_size];
+ char ipad[MAX_ALGAPI_BLOCKSIZE];
+ char opad[MAX_ALGAPI_BLOCKSIZE];
__be32 *hash_state_out;
__be64 *hash512_state_out;
int i, offset;
@@ -164,6 +164,10 @@ static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
shash->tfm = ctx->hash_tfm;
shash->flags = 0x0;
+ if (WARN_ON(block_size > sizeof(ipad) ||
+ sizeof(ipad) != sizeof(opad)))
+ return -EINVAL;
+
if (auth_keylen > block_size) {
int ret = crypto_shash_digest(shash, auth_key,
auth_keylen, ipad);
--
2.17.1
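The pattern this patch applies can be sketched in plain userspace C. This is an illustration only, not the kernel's exact code: the `MAX_ALGAPI_BLOCKSIZE` value and the `setup_hmac_pads()` function shape are assumptions for the sketch, and a plain error return stands in for the kernel's `WARN_ON()`.

```c
#include <errno.h>
#include <string.h>

/* Stand-in for the kernel's MAX_ALGAPI_BLOCKSIZE; value assumed. */
#define MAX_ALGAPI_BLOCKSIZE 160

/*
 * Instead of `char ipad[block_size]` (a VLA whose size depends on the
 * algorithm chosen at runtime), use fixed buffers sized to the global
 * maximum and reject anything larger with a runtime check.
 */
static int setup_hmac_pads(int block_size)
{
	char ipad[MAX_ALGAPI_BLOCKSIZE];
	char opad[MAX_ALGAPI_BLOCKSIZE];

	if (block_size < 0 || block_size > (int)sizeof(ipad) ||
	    sizeof(ipad) != sizeof(opad))
		return -EINVAL;	/* the kernel wraps this in WARN_ON() */

	memset(ipad, 0x36, block_size);
	memset(opad, 0x5c, block_size);
	return 0;
}
```

The stack cost is the worst-case size on every call, but the buffer size becomes a compile-time constant, which is what the VLA-removal effort requires.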
* [PATCH v5 09/11] crypto: shash: Remove VLA usage in unaligned hashing
2018-07-17 4:21 [PATCH v5 00/11] crypto: Remove VLA usage Kees Cook
` (7 preceding siblings ...)
2018-07-17 4:21 ` [PATCH v5 08/11] crypto: qat: Remove VLA usage Kees Cook
@ 2018-07-17 4:21 ` Kees Cook
2018-07-17 4:21 ` [PATCH v5 10/11] crypto: ahash: Remove VLA usage for AHASH_REQUEST_ON_STACK Kees Cook
2018-07-17 4:21 ` [PATCH v5 11/11] crypto: skcipher: Remove VLA usage for SKCIPHER_REQUEST_ON_STACK Kees Cook
10 siblings, 0 replies; 19+ messages in thread
From: Kees Cook @ 2018-07-17 4:21 UTC (permalink / raw)
To: Herbert Xu
Cc: Kees Cook, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
Rabin Vincent, Tim Chen, linux-crypto, qat-linux, dm-devel,
linux-kernel
In the quest to remove all stack VLA usage from the kernel[1], this uses
the newly defined max alignment to perform unaligned hashing to avoid
VLAs, and drops the helper function while adding sanity checks on the
resulting buffer sizes. Additionally, the __aligned_largest macro is
removed since this helper was the only user.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
---
crypto/shash.c | 27 ++++++++++++++++-----------
include/linux/compiler-gcc.h | 1 -
2 files changed, 16 insertions(+), 12 deletions(-)
diff --git a/crypto/shash.c b/crypto/shash.c
index ab6902c6dae7..8d4746b14dd5 100644
--- a/crypto/shash.c
+++ b/crypto/shash.c
@@ -73,13 +73,6 @@ int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key,
}
EXPORT_SYMBOL_GPL(crypto_shash_setkey);
-static inline unsigned int shash_align_buffer_size(unsigned len,
- unsigned long mask)
-{
- typedef u8 __aligned_largest u8_aligned;
- return len + (mask & ~(__alignof__(u8_aligned) - 1));
-}
-
static int shash_update_unaligned(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
@@ -88,11 +81,17 @@ static int shash_update_unaligned(struct shash_desc *desc, const u8 *data,
unsigned long alignmask = crypto_shash_alignmask(tfm);
unsigned int unaligned_len = alignmask + 1 -
((unsigned long)data & alignmask);
- u8 ubuf[shash_align_buffer_size(unaligned_len, alignmask)]
- __aligned_largest;
+ /*
+ * We cannot count on __aligned() working for large values:
+ * https://patchwork.kernel.org/patch/9507697/
+ */
+ u8 ubuf[MAX_ALGAPI_ALIGNMASK * 2];
u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
int err;
+ if (WARN_ON(buf + unaligned_len > ubuf + sizeof(ubuf)))
+ return -EINVAL;
+
if (unaligned_len > len)
unaligned_len = len;
@@ -124,11 +123,17 @@ static int shash_final_unaligned(struct shash_desc *desc, u8 *out)
unsigned long alignmask = crypto_shash_alignmask(tfm);
struct shash_alg *shash = crypto_shash_alg(tfm);
unsigned int ds = crypto_shash_digestsize(tfm);
- u8 ubuf[shash_align_buffer_size(ds, alignmask)]
- __aligned_largest;
+ /*
+ * We cannot count on __aligned() working for large values:
+ * https://patchwork.kernel.org/patch/9507697/
+ */
+ u8 ubuf[MAX_ALGAPI_ALIGNMASK + SHASH_MAX_DIGESTSIZE];
u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
int err;
+ if (WARN_ON(buf + ds > ubuf + sizeof(ubuf)))
+ return -EINVAL;
+
err = shash->final(desc, buf);
if (err)
goto out;
diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
index f1a7492a5cc8..1f1cdef36a82 100644
--- a/include/linux/compiler-gcc.h
+++ b/include/linux/compiler-gcc.h
@@ -125,7 +125,6 @@
*/
#define __pure __attribute__((pure))
#define __aligned(x) __attribute__((aligned(x)))
-#define __aligned_largest __attribute__((aligned))
#define __printf(a, b) __attribute__((format(printf, a, b)))
#define __scanf(a, b) __attribute__((format(scanf, a, b)))
#define __attribute_const__ __attribute__((__const__))
--
2.17.1
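The alignment arithmetic behind the oversized buffer can be illustrated with a small userspace sketch. `PTR_ALIGN()` is re-implemented here for illustration, and the `MAX_ALGAPI_ALIGNMASK` value is an assumption: aligning a pointer forward wastes at most `alignmask` bytes, so a buffer of twice the maximum mask normally leaves room for the unaligned chunk, with the runtime check rejecting any case where it would not.

```c
#include <stdint.h>

/* Userspace re-implementation of the kernel's PTR_ALIGN(), for
 * illustration only; `a` must be a power of two. */
#define PTR_ALIGN(p, a) \
	((unsigned char *)(((uintptr_t)(p) + ((a) - 1)) & \
			   ~(uintptr_t)((a) - 1)))

#define MAX_ALGAPI_ALIGNMASK 63	/* assumed value for the sketch */

/* Returns 0 if `unaligned_len` bytes fit at the aligned start of the
 * fixed buffer, -1 if they would overflow (the kernel uses WARN_ON()
 * plus -EINVAL for that case). */
static int unaligned_buf_fits(unsigned long alignmask,
			      unsigned int unaligned_len)
{
	unsigned char ubuf[MAX_ALGAPI_ALIGNMASK * 2];
	unsigned char *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);

	if (buf + unaligned_len > ubuf + sizeof(ubuf))
		return -1;		/* would overflow: reject */
	/* PTR_ALIGN must have produced an aligned pointer. */
	return ((uintptr_t)buf & alignmask) == 0 ? 0 : -1;
}
```

For a small mask the aligned region always fits; a length beyond the buffer is caught by the check rather than silently overflowing, which mirrors the `WARN_ON()` added in the patch.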
* [PATCH v5 10/11] crypto: ahash: Remove VLA usage for AHASH_REQUEST_ON_STACK
2018-07-17 4:21 [PATCH v5 00/11] crypto: Remove VLA usage Kees Cook
` (8 preceding siblings ...)
2018-07-17 4:21 ` [PATCH v5 09/11] crypto: shash: Remove VLA usage in unaligned hashing Kees Cook
@ 2018-07-17 4:21 ` Kees Cook
2018-07-17 16:43 ` [dm-devel] " Eric Biggers
2018-07-17 4:21 ` [PATCH v5 11/11] crypto: skcipher: Remove VLA usage for SKCIPHER_REQUEST_ON_STACK Kees Cook
10 siblings, 1 reply; 19+ messages in thread
From: Kees Cook @ 2018-07-17 4:21 UTC (permalink / raw)
To: Herbert Xu
Cc: Kees Cook, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
Rabin Vincent, Tim Chen, linux-crypto, qat-linux, dm-devel,
linux-kernel
In the quest to remove all stack VLA usage from the kernel[1], this caps
the ahash request size similarly to the other limits and adds a sanity
check at initialization. AHASH_REQUEST_ON_STACK is special, though: it
is only ever used for shash-wrapped ahash, so its size is bounded only
by non-async hashes. A manual inspection of this shows the largest to be:
sizeof(struct shash_desc) + SHASH_MAX_DESCSIZE
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
---
crypto/shash.c | 9 ++++++++-
include/crypto/hash.h | 10 +++++++++-
2 files changed, 17 insertions(+), 2 deletions(-)
diff --git a/crypto/shash.c b/crypto/shash.c
index 8d4746b14dd5..e344560458cb 100644
--- a/crypto/shash.c
+++ b/crypto/shash.c
@@ -355,6 +355,7 @@ int crypto_init_shash_ops_async(struct crypto_tfm *tfm)
struct crypto_ahash *crt = __crypto_ahash_cast(tfm);
struct crypto_shash **ctx = crypto_tfm_ctx(tfm);
struct crypto_shash *shash;
+ size_t reqsize;
if (!crypto_mod_get(calg))
return -EAGAIN;
@@ -365,6 +366,12 @@ int crypto_init_shash_ops_async(struct crypto_tfm *tfm)
return PTR_ERR(shash);
}
+ reqsize = sizeof(struct shash_desc) + crypto_shash_descsize(shash);
+ if (WARN_ON(reqsize > AHASH_MAX_REQSIZE)) {
+ crypto_mod_put(calg);
+ return -EINVAL;
+ }
+
*ctx = shash;
tfm->exit = crypto_exit_shash_ops_async;
@@ -383,7 +390,7 @@ int crypto_init_shash_ops_async(struct crypto_tfm *tfm)
if (alg->import)
crt->import = shash_async_import;
- crt->reqsize = sizeof(struct shash_desc) + crypto_shash_descsize(shash);
+ crt->reqsize = reqsize;
return 0;
}
diff --git a/include/crypto/hash.h b/include/crypto/hash.h
index 4fcd0e2368cd..2181fb38eab9 100644
--- a/include/crypto/hash.h
+++ b/include/crypto/hash.h
@@ -67,9 +67,17 @@ struct ahash_request {
#define AHASH_MAX_DIGESTSIZE 512
#define AHASH_MAX_STATESIZE 512
+/*
+ * HASH_REQUEST_ON_STACK must only be used when wrapping shash (i.e.
+ * allocated with a CRYPTO_ALG_ASYNC mask), and is generally discouraged
+ * (just use shash directly). This is really only ever needed when using
+ * scatter/gather input sources.
+ */
+#define AHASH_MAX_REQSIZE (sizeof(struct shash_desc) + \
+ SHASH_MAX_DESCSIZE)
#define AHASH_REQUEST_ON_STACK(name, ahash) \
char __##name##_desc[sizeof(struct ahash_request) + \
- crypto_ahash_reqsize(ahash)] CRYPTO_MINALIGN_ATTR; \
+ AHASH_MAX_REQSIZE] CRYPTO_MINALIGN_ATTR; \
struct ahash_request *name = (void *)__##name##_desc
/**
--
2.17.1
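The effect of the macro change can be sketched outside the kernel. The struct layouts, the `SHASH_MAX_DESCSIZE` value, and the single-argument macro shape below are simplified stand-ins (the real macro also takes the tfm and applies `CRYPTO_MINALIGN_ATTR`); the point is that the on-stack buffer's size is now a compile-time constant independent of the tfm, so no VLA is emitted.

```c
#include <stddef.h>

/* Simplified stand-ins for the kernel types and constants. */
struct shash_desc { void *tfm; unsigned long flags; };
#define SHASH_MAX_DESCSIZE 360	/* assumed value for the sketch */
#define AHASH_MAX_REQSIZE (sizeof(struct shash_desc) + SHASH_MAX_DESCSIZE)

struct ahash_request { void *base; unsigned int nbytes; };

/* Reworked macro: the array bound no longer depends on a runtime
 * crypto_ahash_reqsize() call, only on compile-time constants. */
#define AHASH_REQUEST_ON_STACK(name)					\
	char __##name##_desc[sizeof(struct ahash_request) +		\
			     AHASH_MAX_REQSIZE];			\
	struct ahash_request *name = (void *)__##name##_desc

static size_t stack_req_bytes(void)
{
	AHASH_REQUEST_ON_STACK(req);
	(void)req;		/* only the buffer size matters here */
	return sizeof(__req_desc);
}
```

The matching `WARN_ON(reqsize > AHASH_MAX_REQSIZE)` in `crypto_init_shash_ops_async()` is what keeps this cap honest: any shash whose descsize would exceed the fixed buffer is rejected at initialization.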
* Re: [dm-devel] [PATCH v5 10/11] crypto: ahash: Remove VLA usage for AHASH_REQUEST_ON_STACK
2018-07-17 4:21 ` [PATCH v5 10/11] crypto: ahash: Remove VLA usage for AHASH_REQUEST_ON_STACK Kees Cook
@ 2018-07-17 16:43 ` Eric Biggers
2018-07-17 20:11 ` Kees Cook
0 siblings, 1 reply; 19+ messages in thread
From: Eric Biggers @ 2018-07-17 16:43 UTC (permalink / raw)
To: Kees Cook
Cc: Herbert Xu, Giovanni Cabiddu, Arnd Bergmann, Gustavo A. R. Silva,
Mike Snitzer, Eric Biggers, qat-linux, linux-kernel, dm-devel,
linux-crypto, Lars Persson, Tim Chen, Alasdair Kergon,
Rabin Vincent
On Mon, Jul 16, 2018 at 09:21:49PM -0700, Kees Cook wrote:
> In the quest to remove all stack VLA usage from the kernel[1], this caps
> the ahash request size similar to the other limits and adds a sanity
> check at initialization. AHASH_REQUEST_ON_STACK is special, though: it
> is only ever used for shash-wrapped ahash, so its size is bounded only
> by non-async hashes. A manual inspection of this shows the largest to be:
> sizeof(struct shash_desc) + SHASH_MAX_DESCSIZE
>
> [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
>
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
> crypto/shash.c | 9 ++++++++-
> include/crypto/hash.h | 10 +++++++++-
> 2 files changed, 17 insertions(+), 2 deletions(-)
>
> diff --git a/crypto/shash.c b/crypto/shash.c
> index 8d4746b14dd5..e344560458cb 100644
> --- a/crypto/shash.c
> +++ b/crypto/shash.c
> @@ -355,6 +355,7 @@ int crypto_init_shash_ops_async(struct crypto_tfm *tfm)
> struct crypto_ahash *crt = __crypto_ahash_cast(tfm);
> struct crypto_shash **ctx = crypto_tfm_ctx(tfm);
> struct crypto_shash *shash;
> + size_t reqsize;
>
> if (!crypto_mod_get(calg))
> return -EAGAIN;
> @@ -365,6 +366,12 @@ int crypto_init_shash_ops_async(struct crypto_tfm *tfm)
> return PTR_ERR(shash);
> }
>
> + reqsize = sizeof(struct shash_desc) + crypto_shash_descsize(shash);
> + if (WARN_ON(reqsize > AHASH_MAX_REQSIZE)) {
> + crypto_mod_put(calg);
> + return -EINVAL;
> + }
'crypto_free_shash(shash);' instead of 'crypto_mod_put(calg);'
- Eric
* Re: [dm-devel] [PATCH v5 10/11] crypto: ahash: Remove VLA usage for AHASH_REQUEST_ON_STACK
2018-07-17 16:43 ` [dm-devel] " Eric Biggers
@ 2018-07-17 20:11 ` Kees Cook
0 siblings, 0 replies; 19+ messages in thread
From: Kees Cook @ 2018-07-17 20:11 UTC (permalink / raw)
To: Eric Biggers
Cc: Herbert Xu, Giovanni Cabiddu, Arnd Bergmann, Gustavo A. R. Silva,
Mike Snitzer, Eric Biggers, qat-linux, LKML, dm-devel,
linux-crypto, Lars Persson, Tim Chen, Alasdair Kergon,
Rabin Vincent
On Tue, Jul 17, 2018 at 9:43 AM, Eric Biggers <ebiggers3@gmail.com> wrote:
> On Mon, Jul 16, 2018 at 09:21:49PM -0700, Kees Cook wrote:
>> + reqsize = sizeof(struct shash_desc) + crypto_shash_descsize(shash);
>> + if (WARN_ON(reqsize > AHASH_MAX_REQSIZE)) {
>> + crypto_mod_put(calg);
>> + return -EINVAL;
>> + }
>
> 'crypto_free_shash(shash);' instead of 'crypto_mod_put(calg);'
Oops! Yes, thanks; I have fixed it now in the next version.
-Kees
--
Kees Cook
Pixel Security
* [PATCH v5 11/11] crypto: skcipher: Remove VLA usage for SKCIPHER_REQUEST_ON_STACK
2018-07-17 4:21 [PATCH v5 00/11] crypto: Remove VLA usage Kees Cook
` (9 preceding siblings ...)
2018-07-17 4:21 ` [PATCH v5 10/11] crypto: ahash: Remove VLA usage for AHASH_REQUEST_ON_STACK Kees Cook
@ 2018-07-17 4:21 ` Kees Cook
10 siblings, 0 replies; 19+ messages in thread
From: Kees Cook @ 2018-07-17 4:21 UTC (permalink / raw)
To: Herbert Xu
Cc: Kees Cook, Arnd Bergmann, Eric Biggers, Gustavo A. R. Silva,
Alasdair Kergon, Giovanni Cabiddu, Lars Persson, Mike Snitzer,
Rabin Vincent, Tim Chen, linux-crypto, qat-linux, dm-devel,
linux-kernel
In the quest to remove all stack VLA usage from the kernel[1], this
caps the skcipher request size similarly to the other limits and adds a sanity
check at registration. Looking at instrumented tcrypt output, the largest
is for lrw:
crypt: testing lrw(aes)
crypto_skcipher_set_reqsize: 8
crypto_skcipher_set_reqsize: 88
crypto_skcipher_set_reqsize: 472
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
---
include/crypto/internal/skcipher.h | 1 +
include/crypto/skcipher.h | 4 +++-
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/include/crypto/internal/skcipher.h b/include/crypto/internal/skcipher.h
index e42f7063f245..5035482cbe68 100644
--- a/include/crypto/internal/skcipher.h
+++ b/include/crypto/internal/skcipher.h
@@ -130,6 +130,7 @@ static inline struct crypto_skcipher *crypto_spawn_skcipher(
static inline void crypto_skcipher_set_reqsize(
struct crypto_skcipher *skcipher, unsigned int reqsize)
{
+ BUG_ON(reqsize > SKCIPHER_MAX_REQSIZE);
skcipher->reqsize = reqsize;
}
diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
index 2f327f090c3e..c48e194438cf 100644
--- a/include/crypto/skcipher.h
+++ b/include/crypto/skcipher.h
@@ -139,9 +139,11 @@ struct skcipher_alg {
struct crypto_alg base;
};
+#define SKCIPHER_MAX_REQSIZE 472
+
#define SKCIPHER_REQUEST_ON_STACK(name, tfm) \
char __##name##_desc[sizeof(struct skcipher_request) + \
- crypto_skcipher_reqsize(tfm)] CRYPTO_MINALIGN_ATTR; \
+ SKCIPHER_MAX_REQSIZE] CRYPTO_MINALIGN_ATTR; \
struct skcipher_request *name = (void *)__##name##_desc
/**
--
2.17.1
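The registration-time guard added here can be sketched in userspace C. This is an illustration under simplified types: the struct is a stand-in, and a plain error return replaces the kernel's `BUG_ON()` so the sketch is testable.

```c
/* Cap from the patch: the largest instrumented reqsize seen (lrw). */
#define SKCIPHER_MAX_REQSIZE 472

/* Simplified stand-in for the kernel type. */
struct crypto_skcipher { unsigned int reqsize; };

/*
 * Any driver asking for a request context larger than the fixed
 * on-stack cap is rejected loudly at registration, instead of
 * silently overflowing SKCIPHER_REQUEST_ON_STACK users later.
 * (The kernel uses BUG_ON(reqsize > SKCIPHER_MAX_REQSIZE).)
 */
static int skcipher_set_reqsize(struct crypto_skcipher *tfm,
				unsigned int reqsize)
{
	if (reqsize > SKCIPHER_MAX_REQSIZE)
		return -1;
	tfm->reqsize = reqsize;
	return 0;
}
```

With the cap enforced at `crypto_skcipher_set_reqsize()` time, `SKCIPHER_REQUEST_ON_STACK` can safely use the constant `SKCIPHER_MAX_REQSIZE` instead of the per-tfm `crypto_skcipher_reqsize(tfm)` VLA bound.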