* [PATCH v5 00/13] Unseeded In-Kernel Randomness Fixes
@ 2017-06-07 23:21 Jason A. Donenfeld
2017-06-07 23:21 ` [PATCH v5 01/13] random: invalidate batched entropy after crng init Jason A. Donenfeld
0 siblings, 1 reply; 8+ messages in thread
From: Jason A. Donenfeld @ 2017-06-07 23:21 UTC (permalink / raw)
To: Theodore Ts'o, Linux Crypto Mailing List, LKML,
kernel-hardening, Greg Kroah-Hartman, Eric Biggers,
Linus Torvalds, David Miller
Cc: Jason A. Donenfeld
It looks like critique of this has come to an end. Could somebody take
this into their tree for 4.12?
As discussed in [1], there is a problem with get_random_bytes being
used before the RNG has actually been seeded. The solution for fixing
this appears to be multi-pronged. One of those prongs involves adding
a simple blocking API so that modules that use the RNG in process
context can just sleep (in an interruptible manner) until the RNG is
ready to be used. This winds up being a very useful API that covers
a few use cases, several of which are included in this patch set.
[1] http://www.openwall.com/lists/kernel-hardening/2017/06/02/2
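The "sleep until the RNG is ready" idea described above can be modeled in ordinary userspace C with a condition variable. This is a sketch of the pattern only; the names below are illustrative stand-ins, not the kernel's actual API, and the "extraction" is faked with a constant fill:

```c
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Userspace model of "block in process context until the pool is seeded".
 * All names are illustrative, not the real in-kernel interface. */
static pthread_mutex_t rng_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t rng_ready_cond = PTHREAD_COND_INITIALIZER;
static bool rng_ready;

/* Called once seeding completes (the kernel analogue is waking the
 * waiters on its init wait queue). */
void mark_rng_ready(void)
{
	pthread_mutex_lock(&rng_lock);
	rng_ready = true;
	pthread_cond_broadcast(&rng_ready_cond);
	pthread_mutex_unlock(&rng_lock);
}

/* A process-context caller sleeps here instead of consuming unseeded
 * output; 0xAA is a stand-in for real extraction. */
int get_random_bytes_wait_model(void *buf, size_t len)
{
	pthread_mutex_lock(&rng_lock);
	while (!rng_ready)
		pthread_cond_wait(&rng_ready_cond, &rng_lock);
	pthread_mutex_unlock(&rng_lock);
	memset(buf, 0xAA, len);
	return 0;
}
```

In the kernel the wait must additionally be interruptible so a signal can break a module out of the sleep; the condition-variable loop above only captures the block-until-seeded shape.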
Changes v4->v5:
- Old versions of gcc warned on an uninitialized variable, so set
it to zero to silence the warning.
Jason A. Donenfeld (13):
random: invalidate batched entropy after crng init
random: add synchronous API for the urandom pool
random: add get_random_{bytes,u32,u64,int,long,once}_wait family
security/keys: ensure RNG is seeded before use
crypto/rng: ensure that the RNG is ready before using
iscsi: ensure RNG is seeded before use
ceph: ensure RNG is seeded before using
cifs: use get_random_u32 for 32-bit lock random
rhashtable: use get_random_u32 for hash_rnd
net/neighbor: use get_random_u32 for 32-bit hash random
net/route: use get_random_int for random counter
bluetooth/smp: ensure RNG is properly seeded before ECDH use
random: warn when kernel uses unseeded randomness
crypto/rng.c | 6 +-
drivers/char/random.c | 93 +++++++++++++++++++++++++++----
drivers/target/iscsi/iscsi_target_auth.c | 14 ++++-
drivers/target/iscsi/iscsi_target_login.c | 22 +++++---
fs/cifs/cifsfs.c | 2 +-
include/linux/net.h | 2 +
include/linux/once.h | 2 +
include/linux/random.h | 26 +++++++++
lib/Kconfig.debug | 16 ++++++
lib/rhashtable.c | 2 +-
net/bluetooth/hci_request.c | 6 ++
net/bluetooth/smp.c | 18 ++++--
net/ceph/ceph_common.c | 6 +-
net/core/neighbour.c | 3 +-
net/ipv4/route.c | 3 +-
security/keys/encrypted-keys/encrypted.c | 8 ++-
security/keys/key.c | 16 +++---
17 files changed, 198 insertions(+), 47 deletions(-)
--
2.13.0
^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH v5 01/13] random: invalidate batched entropy after crng init
  2017-06-07 23:21 [PATCH v5 00/13] Unseeded In-Kernel Randomness Fixes Jason A. Donenfeld
@ 2017-06-07 23:21 ` Jason A. Donenfeld
  0 siblings, 0 replies; 8+ messages in thread
From: Jason A. Donenfeld @ 2017-06-07 23:21 UTC (permalink / raw)
  To: Theodore Ts'o, Linux Crypto Mailing List, LKML,
	kernel-hardening, Greg Kroah-Hartman, Eric Biggers,
	Linus Torvalds, David Miller
  Cc: Jason A. Donenfeld

It's possible that get_random_{u32,u64} is used before the crng has
initialized, in which case, its output might not be cryptographically
secure. To address this problem directly, this patch set introduces the
*_wait variety of functions, but even with that, there's a subtle issue:
what happens to the batched entropy that was generated before
initialization? Prior to this commit, it would stick around, supplying
bad numbers. After this commit, we force the entropy to be re-extracted
after each phase of the crng has initialized.

In order to avoid a race condition with the position counter, we
introduce a simple rwlock for this invalidation. Since it's only used
during this awkward transition period, and we stop using it once things
are all set up, it doesn't have an impact on performance.

This should probably be backported to 4.11.

(Also: adding my copyright to the top. With the patch series from
January, this patch, and then the ones that come after, I think there's
a relevant amount of code in here to add my name to the top.)

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/char/random.c | 37 +++++++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index a561f0c2f428..d35da1603e12 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1,6 +1,9 @@
 /*
  * random.c -- A strong random number generator
  *
+ * Copyright (C) 2017 Jason A. Donenfeld <Jason@zx2c4.com>. All
+ * Rights Reserved.
+ *
  * Copyright Matt Mackall <mpm@selenic.com>, 2003, 2004, 2005
  *
  * Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999. All
@@ -762,6 +765,8 @@ static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
 static struct crng_state **crng_node_pool __read_mostly;
 #endif
 
+static void invalidate_batched_entropy(void);
+
 static void crng_initialize(struct crng_state *crng)
 {
 	int i;
@@ -799,6 +804,7 @@ static int crng_fast_load(const char *cp, size_t len)
 		cp++; crng_init_cnt++; len--;
 	}
 	if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
+		invalidate_batched_entropy();
 		crng_init = 1;
 		wake_up_interruptible(&crng_init_wait);
 		pr_notice("random: fast init done\n");
@@ -836,6 +842,7 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
 	memzero_explicit(&buf, sizeof(buf));
 	crng->init_time = jiffies;
 	if (crng == &primary_crng && crng_init < 2) {
+		invalidate_batched_entropy();
 		crng_init = 2;
 		process_random_ready_list();
 		wake_up_interruptible(&crng_init_wait);
@@ -2023,6 +2030,7 @@ struct batched_entropy {
 	};
 	unsigned int position;
 };
+static rwlock_t batched_entropy_reset_lock = __RW_LOCK_UNLOCKED(batched_entropy_reset_lock);
 
 /*
  * Get a random word for internal kernel use only. The quality of the random
@@ -2033,6 +2041,8 @@ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64);
 u64 get_random_u64(void)
 {
 	u64 ret;
+	const bool use_lock = READ_ONCE(crng_init) < 2;
+	unsigned long flags = 0;
 	struct batched_entropy *batch;
 
 #if BITS_PER_LONG == 64
@@ -2045,11 +2055,15 @@ u64 get_random_u64(void)
 #endif
 
 	batch = &get_cpu_var(batched_entropy_u64);
+	if (use_lock)
+		read_lock_irqsave(&batched_entropy_reset_lock, flags);
 	if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0) {
 		extract_crng((u8 *)batch->entropy_u64);
 		batch->position = 0;
 	}
 	ret = batch->entropy_u64[batch->position++];
+	if (use_lock)
+		read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
 	put_cpu_var(batched_entropy_u64);
 	return ret;
 }
@@ -2059,22 +2073,45 @@ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32);
 u32 get_random_u32(void)
 {
 	u32 ret;
+	const bool use_lock = READ_ONCE(crng_init) < 2;
+	unsigned long flags = 0;
 	struct batched_entropy *batch;
 
 	if (arch_get_random_int(&ret))
 		return ret;
 
 	batch = &get_cpu_var(batched_entropy_u32);
+	if (use_lock)
+		read_lock_irqsave(&batched_entropy_reset_lock, flags);
 	if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0) {
 		extract_crng((u8 *)batch->entropy_u32);
 		batch->position = 0;
 	}
 	ret = batch->entropy_u32[batch->position++];
+	if (use_lock)
+		read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
 	put_cpu_var(batched_entropy_u32);
 	return ret;
 }
 EXPORT_SYMBOL(get_random_u32);
 
+/* It's important to invalidate all potential batched entropy that might
+ * be stored before the crng is initialized, which we can do lazily by
+ * simply resetting the counter to zero so that it's re-extracted on the
+ * next usage. */
+static void invalidate_batched_entropy(void)
+{
+	int cpu;
+	unsigned long flags;
+
+	write_lock_irqsave(&batched_entropy_reset_lock, flags);
+	for_each_possible_cpu (cpu) {
+		per_cpu_ptr(&batched_entropy_u32, cpu)->position = 0;
+		per_cpu_ptr(&batched_entropy_u64, cpu)->position = 0;
+	}
+	write_unlock_irqrestore(&batched_entropy_reset_lock, flags);
+}
+
 /**
  * randomize_page - Generate a random, page aligned address
  * @start:	The smallest acceptable address the caller will take.
--
2.13.0

^ permalink raw reply related	[flat|nested] 8+ messages in thread
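The batching-plus-lazy-invalidation scheme in the patch above can be sketched as a single-threaded userspace model: words are extracted in blocks, handed out one at a time through a position counter, and invalidated by resetting that counter so the next caller re-extracts. Everything below is illustrative; in particular, the xorshift generator is a stand-in for extract_crng(), and there is no per-CPU state or locking in this sketch:

```c
#include <stdint.h>
#include <stddef.h>

#define BATCH_WORDS 16

/* One batch of pre-extracted words plus a position counter, mirroring
 * the shape of the kernel's struct batched_entropy for u64s. */
static struct {
	uint64_t entropy_u64[BATCH_WORDS];
	unsigned int position;
} batch;

static uint64_t model_rng_state = 0x9e3779b97f4a7c15ULL;

/* Stand-in for extract_crng(): refill the whole batch from a toy
 * xorshift64 generator (NOT cryptographically secure). */
static void model_extract(uint64_t *out, size_t n)
{
	for (size_t i = 0; i < n; i++) {
		model_rng_state ^= model_rng_state << 13;
		model_rng_state ^= model_rng_state >> 7;
		model_rng_state ^= model_rng_state << 17;
		out[i] = model_rng_state;
	}
}

/* Mirrors the fast path of get_random_u64(): refill only when the
 * position counter wraps to a batch boundary. */
uint64_t model_get_random_u64(void)
{
	if (batch.position % BATCH_WORDS == 0) {
		model_extract(batch.entropy_u64, BATCH_WORDS);
		batch.position = 0;
	}
	return batch.entropy_u64[batch.position++];
}

/* The patch's lazy trick: don't wipe the stale words, just zero the
 * counter so the next call falls into the refill branch above. */
void model_invalidate_batched_entropy(void)
{
	batch.position = 0;
}
```

A consumer that fetched words before invalidation will, after invalidation, trigger a fresh extraction on its next call rather than continuing through the stale batch, which is exactly the property the commit message describes.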
* [PATCH v5 00/13] Unseeded In-Kernel Randomness Fixes
@ 2017-06-07 23:25 Jason A. Donenfeld
2017-06-07 23:25 ` [PATCH v5 01/13] random: invalidate batched entropy after crng init Jason A. Donenfeld
0 siblings, 1 reply; 8+ messages in thread
From: Jason A. Donenfeld @ 2017-06-07 23:25 UTC (permalink / raw)
To: Theodore Ts'o, Linux Crypto Mailing List, LKML,
kernel-hardening, Greg Kroah-Hartman, Eric Biggers,
Linus Torvalds, David Miller
Cc: Jason A. Donenfeld
It looks like critique of this has come to an end. Could somebody take
this into their tree for 4.12?
As discussed in [1], there is a problem with get_random_bytes being
used before the RNG has actually been seeded. The solution for fixing
this appears to be multi-pronged. One of those prongs involves adding
a simple blocking API so that modules that use the RNG in process
context can just sleep (in an interruptible manner) until the RNG is
ready to be used. This winds up being a very useful API that covers
a few use cases, several of which are included in this patch set.
[1] http://www.openwall.com/lists/kernel-hardening/2017/06/02/2
Changes v4->v5:
- Old versions of gcc warned on an uninitialized variable, so set
it to zero to silence the warning.
Jason A. Donenfeld (13):
random: invalidate batched entropy after crng init
random: add synchronous API for the urandom pool
random: add get_random_{bytes,u32,u64,int,long,once}_wait family
security/keys: ensure RNG is seeded before use
crypto/rng: ensure that the RNG is ready before using
iscsi: ensure RNG is seeded before use
ceph: ensure RNG is seeded before using
cifs: use get_random_u32 for 32-bit lock random
rhashtable: use get_random_u32 for hash_rnd
net/neighbor: use get_random_u32 for 32-bit hash random
net/route: use get_random_int for random counter
bluetooth/smp: ensure RNG is properly seeded before ECDH use
random: warn when kernel uses unseeded randomness
crypto/rng.c | 6 +-
drivers/char/random.c | 93 +++++++++++++++++++++++++++----
drivers/target/iscsi/iscsi_target_auth.c | 14 ++++-
drivers/target/iscsi/iscsi_target_login.c | 22 +++++---
fs/cifs/cifsfs.c | 2 +-
include/linux/net.h | 2 +
include/linux/once.h | 2 +
include/linux/random.h | 26 +++++++++
lib/Kconfig.debug | 16 ++++++
lib/rhashtable.c | 2 +-
net/bluetooth/hci_request.c | 6 ++
net/bluetooth/smp.c | 18 ++++--
net/ceph/ceph_common.c | 6 +-
net/core/neighbour.c | 3 +-
net/ipv4/route.c | 3 +-
security/keys/encrypted-keys/encrypted.c | 8 ++-
security/keys/key.c | 16 +++---
17 files changed, 198 insertions(+), 47 deletions(-)
--
2.13.0
^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH v5 01/13] random: invalidate batched entropy after crng init
  2017-06-07 23:25 [PATCH v5 00/13] Unseeded In-Kernel Randomness Fixes Jason A. Donenfeld
@ 2017-06-07 23:25 ` Jason A. Donenfeld
  2017-06-14 19:28   ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 8+ messages in thread
From: Jason A. Donenfeld @ 2017-06-07 23:25 UTC (permalink / raw)
  To: Theodore Ts'o, Linux Crypto Mailing List, LKML,
	kernel-hardening, Greg Kroah-Hartman, Eric Biggers,
	Linus Torvalds, David Miller
  Cc: Jason A. Donenfeld

It's possible that get_random_{u32,u64} is used before the crng has
initialized, in which case, its output might not be cryptographically
secure. To address this problem directly, this patch set introduces the
*_wait variety of functions, but even with that, there's a subtle issue:
what happens to the batched entropy that was generated before
initialization? Prior to this commit, it would stick around, supplying
bad numbers. After this commit, we force the entropy to be re-extracted
after each phase of the crng has initialized.

In order to avoid a race condition with the position counter, we
introduce a simple rwlock for this invalidation. Since it's only used
during this awkward transition period, and we stop using it once things
are all set up, it doesn't have an impact on performance.

This should probably be backported to 4.11.

(Also: adding my copyright to the top. With the patch series from
January, this patch, and then the ones that come after, I think there's
a relevant amount of code in here to add my name to the top.)

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/char/random.c | 37 +++++++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index a561f0c2f428..d35da1603e12 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1,6 +1,9 @@
 /*
  * random.c -- A strong random number generator
  *
+ * Copyright (C) 2017 Jason A. Donenfeld <Jason@zx2c4.com>. All
+ * Rights Reserved.
+ *
  * Copyright Matt Mackall <mpm@selenic.com>, 2003, 2004, 2005
  *
  * Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999. All
@@ -762,6 +765,8 @@ static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
 static struct crng_state **crng_node_pool __read_mostly;
 #endif
 
+static void invalidate_batched_entropy(void);
+
 static void crng_initialize(struct crng_state *crng)
 {
 	int i;
@@ -799,6 +804,7 @@ static int crng_fast_load(const char *cp, size_t len)
 		cp++; crng_init_cnt++; len--;
 	}
 	if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
+		invalidate_batched_entropy();
 		crng_init = 1;
 		wake_up_interruptible(&crng_init_wait);
 		pr_notice("random: fast init done\n");
@@ -836,6 +842,7 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
 	memzero_explicit(&buf, sizeof(buf));
 	crng->init_time = jiffies;
 	if (crng == &primary_crng && crng_init < 2) {
+		invalidate_batched_entropy();
 		crng_init = 2;
 		process_random_ready_list();
 		wake_up_interruptible(&crng_init_wait);
@@ -2023,6 +2030,7 @@ struct batched_entropy {
 	};
 	unsigned int position;
 };
+static rwlock_t batched_entropy_reset_lock = __RW_LOCK_UNLOCKED(batched_entropy_reset_lock);
 
 /*
  * Get a random word for internal kernel use only. The quality of the random
@@ -2033,6 +2041,8 @@ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64);
 u64 get_random_u64(void)
 {
 	u64 ret;
+	const bool use_lock = READ_ONCE(crng_init) < 2;
+	unsigned long flags = 0;
 	struct batched_entropy *batch;
 
 #if BITS_PER_LONG == 64
@@ -2045,11 +2055,15 @@ u64 get_random_u64(void)
 #endif
 
 	batch = &get_cpu_var(batched_entropy_u64);
+	if (use_lock)
+		read_lock_irqsave(&batched_entropy_reset_lock, flags);
 	if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0) {
 		extract_crng((u8 *)batch->entropy_u64);
 		batch->position = 0;
 	}
 	ret = batch->entropy_u64[batch->position++];
+	if (use_lock)
+		read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
 	put_cpu_var(batched_entropy_u64);
 	return ret;
 }
@@ -2059,22 +2073,45 @@ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32);
 u32 get_random_u32(void)
 {
 	u32 ret;
+	const bool use_lock = READ_ONCE(crng_init) < 2;
+	unsigned long flags = 0;
 	struct batched_entropy *batch;
 
 	if (arch_get_random_int(&ret))
 		return ret;
 
 	batch = &get_cpu_var(batched_entropy_u32);
+	if (use_lock)
+		read_lock_irqsave(&batched_entropy_reset_lock, flags);
 	if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0) {
 		extract_crng((u8 *)batch->entropy_u32);
 		batch->position = 0;
 	}
 	ret = batch->entropy_u32[batch->position++];
+	if (use_lock)
+		read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
 	put_cpu_var(batched_entropy_u32);
 	return ret;
 }
 EXPORT_SYMBOL(get_random_u32);
 
+/* It's important to invalidate all potential batched entropy that might
+ * be stored before the crng is initialized, which we can do lazily by
+ * simply resetting the counter to zero so that it's re-extracted on the
+ * next usage. */
+static void invalidate_batched_entropy(void)
+{
+	int cpu;
+	unsigned long flags;
+
+	write_lock_irqsave(&batched_entropy_reset_lock, flags);
+	for_each_possible_cpu (cpu) {
+		per_cpu_ptr(&batched_entropy_u32, cpu)->position = 0;
+		per_cpu_ptr(&batched_entropy_u64, cpu)->position = 0;
+	}
+	write_unlock_irqrestore(&batched_entropy_reset_lock, flags);
+}
+
 /**
  * randomize_page - Generate a random, page aligned address
  * @start:	The smallest acceptable address the caller will take.
--
2.13.0

^ permalink raw reply related	[flat|nested] 8+ messages in thread
* Re: [PATCH v5 01/13] random: invalidate batched entropy after crng init
  2017-06-07 23:25 ` [PATCH v5 01/13] random: invalidate batched entropy after crng init Jason A. Donenfeld
@ 2017-06-14 19:28 ` Sebastian Andrzej Siewior
  2017-06-14 22:33   ` Jason A. Donenfeld
  0 siblings, 1 reply; 8+ messages in thread
From: Sebastian Andrzej Siewior @ 2017-06-14 19:28 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: Theodore Ts'o, Linux Crypto Mailing List, LKML,
	kernel-hardening, Greg Kroah-Hartman, Eric Biggers,
	Linus Torvalds, David Miller, tglx

On 2017-06-08 01:25:55 [+0200], Jason A. Donenfeld wrote:
> It's possible that get_random_{u32,u64} is used before the crng has
> initialized, in which case, its output might not be cryptographically
> secure. To address this problem directly, this patch set introduces the
> *_wait variety of functions, but even with that, there's a subtle issue:
> what happens to the batched entropy that was generated before
> initialization? Prior to this commit, it would stick around, supplying
> bad numbers. After this commit, we force the entropy to be re-extracted
> after each phase of the crng has initialized.
>
> In order to avoid a race condition with the position counter, we
> introduce a simple rwlock for this invalidation. Since it's only used
> during this awkward transition period, and we stop using it once things
> are all set up, it doesn't have an impact on performance.
>
> This should probably be backported to 4.11.
>
> (Also: adding my copyright to the top. With the patch series from
> January, this patch, and then the ones that come after, I think there's
> a relevant amount of code in here to add my name to the top.)
>
> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

I don't understand why lockdep notices this possible deadlock only in
RT:

| the existing dependency chain (in reverse order) is:
|
| -> #1 (primary_crng.lock){+.+...}:
|        lock_acquire+0xb5/0x2b0
|        rt_spin_lock+0x46/0x50
|        _extract_crng+0x39/0xa0
|        extract_crng+0x3a/0x40
|        get_random_u64+0x17a/0x200
|        cache_random_seq_create+0x51/0x100
|        init_cache_random_seq+0x35/0x90
|        __kmem_cache_create+0xd3/0x560
|        create_boot_cache+0x8c/0xb2
|        create_kmalloc_cache+0x54/0x9f
|        create_kmalloc_caches+0xe3/0xfd
|        kmem_cache_init+0x14f/0x1f0
|        start_kernel+0x1e7/0x3b3
|        x86_64_start_reservations+0x2a/0x2c
|        x86_64_start_kernel+0x13d/0x14c
|        verify_cpu+0x0/0xfc
|
| -> #0 (batched_entropy_reset_lock){+.+...}:
|        __lock_acquire+0x11b4/0x1320
|        lock_acquire+0xb5/0x2b0
|        rt_write_lock+0x26/0x40
|        rt_write_lock_irqsave+0x9/0x10
|        invalidate_batched_entropy+0x28/0xb0
|        crng_fast_load+0xb5/0xe0
|        add_interrupt_randomness+0x16c/0x1a0
|        irq_thread+0x15c/0x1e0
|        kthread+0x112/0x150
|        ret_from_fork+0x31/0x40

We have the path add_interrupt_randomness() -> crng_fast_load() which
takes
  primary_crng.lock -> batched_entropy_reset_lock
and we have get_random_uXX() -> extract_crng() which takes the locks in
the opposite order:
  batched_entropy_reset_lock -> crng->lock
however batched_entropy_reset_lock is only taken if "crng_init < 2", so
it is possible RT hits this constraint more reliably.

> ---
> drivers/char/random.c | 37 +++++++++++++++++++++++++++++++++++++
> 1 file changed, 37 insertions(+)
>
> diff --git a/drivers/char/random.c b/drivers/char/random.c
> index a561f0c2f428..d35da1603e12 100644
> --- a/drivers/char/random.c
> +++ b/drivers/char/random.c
> @@ -1,6 +1,9 @@
>  /*
>   * random.c -- A strong random number generator
>   *
> + * Copyright (C) 2017 Jason A. Donenfeld <Jason@zx2c4.com>. All
> + * Rights Reserved.
> + *
>   * Copyright Matt Mackall <mpm@selenic.com>, 2003, 2004, 2005
>   *
>   * Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999. All
> @@ -762,6 +765,8 @@ static DECLARE_WAIT_QUEUE_HEAD(crng_init_wait);
>  static struct crng_state **crng_node_pool __read_mostly;
>  #endif
>
> +static void invalidate_batched_entropy(void);
> +
>  static void crng_initialize(struct crng_state *crng)
>  {
>  	int i;
> @@ -799,6 +804,7 @@ static int crng_fast_load(const char *cp, size_t len)
>  		cp++; crng_init_cnt++; len--;
>  	}
>  	if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
> +		invalidate_batched_entropy();
>  		crng_init = 1;
>  		wake_up_interruptible(&crng_init_wait);
>  		pr_notice("random: fast init done\n");
> @@ -836,6 +842,7 @@ static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
>  	memzero_explicit(&buf, sizeof(buf));
>  	crng->init_time = jiffies;
>  	if (crng == &primary_crng && crng_init < 2) {
> +		invalidate_batched_entropy();
>  		crng_init = 2;
>  		process_random_ready_list();
>  		wake_up_interruptible(&crng_init_wait);
> @@ -2023,6 +2030,7 @@ struct batched_entropy {
>  	};
>  	unsigned int position;
>  };
> +static rwlock_t batched_entropy_reset_lock = __RW_LOCK_UNLOCKED(batched_entropy_reset_lock);
>
>  /*
>   * Get a random word for internal kernel use only. The quality of the random
> @@ -2033,6 +2041,8 @@ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64);
>  u64 get_random_u64(void)
>  {
>  	u64 ret;
> +	const bool use_lock = READ_ONCE(crng_init) < 2;
> +	unsigned long flags = 0;
>  	struct batched_entropy *batch;
>
>  #if BITS_PER_LONG == 64
> @@ -2045,11 +2055,15 @@ u64 get_random_u64(void)
>  #endif
>
>  	batch = &get_cpu_var(batched_entropy_u64);
> +	if (use_lock)
> +		read_lock_irqsave(&batched_entropy_reset_lock, flags);
>  	if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0) {
>  		extract_crng((u8 *)batch->entropy_u64);
>  		batch->position = 0;
>  	}
>  	ret = batch->entropy_u64[batch->position++];
> +	if (use_lock)
> +		read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
>  	put_cpu_var(batched_entropy_u64);
>  	return ret;
>  }
> @@ -2059,22 +2073,45 @@ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32);
>  u32 get_random_u32(void)
>  {
>  	u32 ret;
> +	const bool use_lock = READ_ONCE(crng_init) < 2;
> +	unsigned long flags = 0;
>  	struct batched_entropy *batch;
>
>  	if (arch_get_random_int(&ret))
>  		return ret;
>
>  	batch = &get_cpu_var(batched_entropy_u32);
> +	if (use_lock)
> +		read_lock_irqsave(&batched_entropy_reset_lock, flags);
>  	if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0) {
>  		extract_crng((u8 *)batch->entropy_u32);
>  		batch->position = 0;
>  	}
>  	ret = batch->entropy_u32[batch->position++];
> +	if (use_lock)
> +		read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
>  	put_cpu_var(batched_entropy_u32);
>  	return ret;
>  }
>  EXPORT_SYMBOL(get_random_u32);
>
> +/* It's important to invalidate all potential batched entropy that might
> + * be stored before the crng is initialized, which we can do lazily by
> + * simply resetting the counter to zero so that it's re-extracted on the
> + * next usage. */
> +static void invalidate_batched_entropy(void)
> +{
> +	int cpu;
> +	unsigned long flags;
> +
> +	write_lock_irqsave(&batched_entropy_reset_lock, flags);
> +	for_each_possible_cpu (cpu) {
> +		per_cpu_ptr(&batched_entropy_u32, cpu)->position = 0;
> +		per_cpu_ptr(&batched_entropy_u64, cpu)->position = 0;
> +	}
> +	write_unlock_irqrestore(&batched_entropy_reset_lock, flags);
> +}
> +
>  /**
>   * randomize_page - Generate a random, page aligned address
>   * @start:	The smallest acceptable address the caller will take.
> --
> 2.13.0

Sebastian

^ permalink raw reply	[flat|nested] 8+ messages in thread
* Re: [PATCH v5 01/13] random: invalidate batched entropy after crng init
  2017-06-14 19:28 ` Sebastian Andrzej Siewior
@ 2017-06-14 22:33 ` Jason A. Donenfeld
  2017-06-16  8:31   ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 8+ messages in thread
From: Jason A. Donenfeld @ 2017-06-14 22:33 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Theodore Ts'o, Linux Crypto Mailing List, LKML,
	kernel-hardening, Greg Kroah-Hartman, Eric Biggers,
	Linus Torvalds, David Miller, tglx

There's a potential race that I fixed in my v5 of that patch set, but
Ted only took v4, and for whatever reason has been too busy to submit
the additional patch I already posted showing the diff between v4 and
v5. Hopefully he actually gets around to it and sends this for the next
rc. Here it is:

https://patchwork.kernel.org/patch/9774563/

^ permalink raw reply	[flat|nested] 8+ messages in thread
* Re: [PATCH v5 01/13] random: invalidate batched entropy after crng init
  2017-06-14 22:33 ` Jason A. Donenfeld
@ 2017-06-16  8:31 ` Sebastian Andrzej Siewior
  2017-06-16 12:12   ` Jason A. Donenfeld
  0 siblings, 1 reply; 8+ messages in thread
From: Sebastian Andrzej Siewior @ 2017-06-16  8:31 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: Theodore Ts'o, Linux Crypto Mailing List, LKML,
	kernel-hardening, Greg Kroah-Hartman, Eric Biggers,
	Linus Torvalds, David Miller, tglx

On 2017-06-15 00:33:12 [+0200], Jason A. Donenfeld wrote:
> There's a potential race that I fixed in my v5 of that patch set, but
> Ted only took v4, and for whatever reason has been too busy to submit
> the additional patch I already posted showing the diff between v4 and
> v5. Hopefully he actually gets around to it and sends this for the next
> rc. Here it is:
>
> https://patchwork.kernel.org/patch/9774563/

So you replace "crng_init < 2" with use_lock instead. That is not what
I am talking about. Again:

add_interrupt_randomness() -> crng_fast_load()
  spin_trylock_irqsave(&primary_crng.lock, )
  -> invalidate_batched_entropy()
     write_lock_irqsave(&batched_entropy_reset_lock, );

in that order, while the code path

get_random_uXX()
  read_lock_irqsave(&batched_entropy_reset_lock, );
  -> extract_crng()
     -> _extract_crng()
        spin_lock_irqsave(&crng->lock, );

acquires the same locks in the opposite order. That means:

  T1                             T2
  crng_fast_load()               get_random_u64()
                                 extract_crng()
  *dead lock*
  invalidate_batched_entropy()
                                 _extract_crng()

So T1 waits for batched_entropy_reset_lock holding primary_crng.lock
and T2 waits for primary_crng.lock holding batched_entropy_reset_lock.

Sebastian

^ permalink raw reply	[flat|nested] 8+ messages in thread
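The AB-BA inversion Sebastian describes can be modeled in userspace with two pthread mutexes standing in for primary_crng.lock and batched_entropy_reset_lock. This is only an illustrative sketch: to keep it testable it uses trylock on the second acquisition, so it detects the ingredient of the deadlock rather than actually hanging, and all names are made up for the model:

```c
#include <pthread.h>
#include <stdbool.h>

/* Stand-ins for the two kernel locks involved in the inversion. */
static pthread_mutex_t crng_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t batch_reset_lock = PTHREAD_MUTEX_INITIALIZER;

/* Models the crng_fast_load() -> invalidate_batched_entropy() path:
 * crng_lock first, then batch_reset_lock. Returns true if the second
 * lock is already held by the opposite-order path, i.e. the situation
 * in which the real code would sit blocked while holding crng_lock,
 * one half of the AB-BA deadlock. */
bool fast_load_path_would_block(void)
{
	bool blocked;

	pthread_mutex_lock(&crng_lock);
	blocked = pthread_mutex_trylock(&batch_reset_lock) != 0;
	if (!blocked)
		pthread_mutex_unlock(&batch_reset_lock);
	pthread_mutex_unlock(&crng_lock);
	return blocked;
}
```

The get_random_uXX() path would be the mirror image, taking batch_reset_lock before crng_lock; with blocking lock calls on both sides, two threads entering the two paths concurrently can each end up waiting on the lock the other holds, which is the deadlock lockdep reported.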
* Re: [PATCH v5 01/13] random: invalidate batched entropy after crng init
  2017-06-16  8:31 ` Sebastian Andrzej Siewior
@ 2017-06-16 12:12 ` Jason A. Donenfeld
  2017-06-16 14:36   ` Sebastian Andrzej Siewior
  0 siblings, 1 reply; 8+ messages in thread
From: Jason A. Donenfeld @ 2017-06-16 12:12 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Theodore Ts'o, Linux Crypto Mailing List, LKML,
	kernel-hardening, Greg Kroah-Hartman, Eric Biggers,
	Linus Torvalds, David Miller, Thomas Gleixner

On Fri, Jun 16, 2017 at 10:31 AM, Sebastian Andrzej Siewior
<sebastian@breakpoint.cc> wrote:
> am talking about. Again:

I actually figured that out myself after sending the initial email, so
then I wrote a follow-up patch which I attached to this thread. You
should have received it. Can you take a look?

https://lkml.org/lkml/2017/6/14/942

^ permalink raw reply	[flat|nested] 8+ messages in thread
* Re: [PATCH v5 01/13] random: invalidate batched entropy after crng init
  2017-06-16 12:12 ` Jason A. Donenfeld
@ 2017-06-16 14:36 ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 8+ messages in thread
From: Sebastian Andrzej Siewior @ 2017-06-16 14:36 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: Theodore Ts'o, Linux Crypto Mailing List, LKML,
	kernel-hardening, Greg Kroah-Hartman, Eric Biggers,
	Linus Torvalds, David Miller, Thomas Gleixner

On 2017-06-16 14:12:42 [+0200], Jason A. Donenfeld wrote:
> I actually figured that out myself after sending the initial email, so
> then I wrote a follow-up patch which I attached to this thread. You
> should have received it. Can you take a look?

replied to the patch.

Sebastian

^ permalink raw reply	[flat|nested] 8+ messages in thread
end of thread, other threads:[~2017-06-16 14:36 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz  follow: Atom feed
-- links below jump to the message on this page --
2017-06-07 23:21 [PATCH v5 00/13] Unseeded In-Kernel Randomness Fixes Jason A. Donenfeld
2017-06-07 23:21 ` [PATCH v5 01/13] random: invalidate batched entropy after crng init Jason A. Donenfeld
  -- strict thread matches above, loose matches on Subject: below --
2017-06-07 23:25 [PATCH v5 00/13] Unseeded In-Kernel Randomness Fixes Jason A. Donenfeld
2017-06-07 23:25 ` [PATCH v5 01/13] random: invalidate batched entropy after crng init Jason A. Donenfeld
2017-06-14 19:28   ` Sebastian Andrzej Siewior
2017-06-14 22:33     ` Jason A. Donenfeld
2017-06-16  8:31       ` Sebastian Andrzej Siewior
2017-06-16 12:12         ` Jason A. Donenfeld
2017-06-16 14:36           ` Sebastian Andrzej Siewior
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox