* [PATCH 00/14] random: rework reseeding
@ 2013-12-15 2:00 Greg Price
2013-12-15 2:00 ` [PATCH 01/14] random: fix signedness bug Greg Price
` (13 more replies)
0 siblings, 14 replies; 15+ messages in thread
From: Greg Price @ 2013-12-15 2:00 UTC (permalink / raw)
To: Theodore Ts'o; +Cc: linux-kernel, H. Peter Anvin
Hi Ted, hi all,
This series reworks the way we handle reseeding the nonblocking pool,
which supplies /dev/urandom and the kernel's internal randomness
needs. The most important change is to make sure that the input
entropy always comes in large chunks, what we've called a
"catastrophic reseed", rather than a few bits at a time with the
possibility of producing output after every few bits. If we do the
latter, we risk that an attacker could see the output (e.g. by
watching us use it, or by constantly reading /dev/urandom), and then
brute-force the few bits of entropy before each output in turn.
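To make the work-factor argument concrete, here's a userspace sketch (illustrative only, not kernel code; the function names and the simple work model are made up for this example):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative userspace sketch, not kernel code: total guesses an
 * attacker needs when they can observe output between small reseeds,
 * versus one catastrophic reseed of the same total entropy. */
uint64_t guesses_incremental(int stages, int bits_per_stage)
{
	/* each small reseed can be brute-forced independently */
	return (uint64_t)stages << bits_per_stage;
}

uint64_t guesses_catastrophic(int total_bits)
{
	return 1ULL << total_bits;	/* only valid for total_bits < 64 */
}
```

Five reseeds of 8 bits each cost the attacker about 5 * 2^8 guesses if output leaks between them, versus 2^40 for one 40-bit catastrophic reseed.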
Patches 1-9 prepare us to do this while keeping the benefit of 3.13's
advances in getting entropy into the nonblocking pool quickly at boot,
by making several changes to the workings of xfer_secondary_pool() and
account(). Then patch 10 accomplishes the goal by sending all routine
input through the input pool, so that our normal mechanisms for
catastrophic reseed always apply.
Patches 11-13 change the accounting for the 'initialized' flag to
match, so that it gives credit only for a single large reseed (of
128 bits, by default), rather than many reseeds adding up to 129 bits.
This is the flag that means we no longer warn about insufficient
entropy, we allow /dev/random to consume entropy, and other changes.
Patch 14 adds an extra stage after setting 'initialized', where we go
for still larger reseeds, of up to 512 bits estimated entropy by
default. This isn't integral to achieving catastrophic reseeds, but
it serves as a hedge against situations where our entropy estimates
are too high.
After the whole series, our behavior at boot is to seed with whatever
we have when first asked for random bytes, then hold out for seeds of
doubling size until we reach the target (512 bits estimated, by
default). Until we first reach the minimum reseed size (128 bits by
default), all input collected goes exclusively to the nonblocking
pool, and /dev/random readers must wait.
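The resulting schedule can be sketched like so (userspace illustration of the behavior described above, not the kernel code; 512 bits is the assumed default target):

```c
#include <assert.h>

/* Sketch of the boot-time reseed schedule: the first seed takes
 * whatever entropy is available, then we hold out for doubling
 * sizes until the target is reached. */
static int next_reseed_target_bits(int last_reseed_bits, int target_bits)
{
	int next;

	if (last_reseed_bits == 0)
		return 1;	/* first seed: whatever we have */
	next = 2 * last_reseed_bits;
	return next < target_bits ? next : target_bits;
}
```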
Cheers,
Greg
Greg Price (14):
random: fix signedness bug
random: fix a (harmless) overflow
random: reserve for /dev/random only once /dev/urandom seeded
random: accept small seeds early on
random: move transfer accounting into account() helper
random: separate quantity of bytes extracted and entropy to credit
random: exploit any extra entropy too when reseeding
random: rate-limit reseeding only after properly seeded
random: reserve entropy for nonblocking pool early on
random: direct all routine input via input pool
random: separate entropy since auto-push from entropy_total
random: separate minimum reseed size from minimum /dev/random read
random: count only catastrophic reseeds for initialization
random: target giant reseeds, to be conservative
drivers/char/random.c | 198 ++++++++++++++++++++++++++++--------------
include/trace/events/random.h | 27 +++---
2 files changed, 150 insertions(+), 75 deletions(-)
--
1.8.3.2
* [PATCH 01/14] random: fix signedness bug
From: Greg Price @ 2013-12-15 2:00 UTC (permalink / raw)
To: Theodore Ts'o; +Cc: linux-kernel
Negative numbers and size_t don't mix. When the total entropy
available was less than 'reserved', we would fail to enforce any limit
at all. Fix that. We never care how negative have_bytes - reserved
is, so just flatten it to zero if negative.
This bug was introduced a few commits ago, in 987cd8c30 "random:
simplify accounting code". Before that, for a long time we would
compare have_bytes - reserved (or equivalent) to ibytes, or store it
into ibytes, but only inside a condition that guaranteed it wasn't
negative.
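The failure mode is easy to demonstrate in userspace (a simplified sketch; the helper names here are made up, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified sketch of the bug: once cast to size_t, a negative
 * have_bytes - reserved wraps to a huge value, so the min()
 * enforces no limit at all.  Clamping at zero first fixes it. */
static size_t limit_buggy(size_t ibytes, int have_bytes, int reserved)
{
	size_t avail = (size_t)(have_bytes - reserved);	/* wraps if negative */

	return ibytes < avail ? ibytes : avail;
}

static size_t limit_fixed(size_t ibytes, int have_bytes, int reserved)
{
	int avail = have_bytes - reserved;

	if (avail < 0)
		avail = 0;	/* max(0, have_bytes - reserved) */
	return ibytes < (size_t)avail ? ibytes : (size_t)avail;
}
```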
Signed-off-by: Greg Price <price@mit.edu>
---
drivers/char/random.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 8cc7d6515..1dd5f2634 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -977,7 +977,8 @@ retry:
ibytes = nbytes;
/* If limited, never pull more than available */
if (r->limit)
- ibytes = min_t(size_t, ibytes, have_bytes - reserved);
+ ibytes = min_t(size_t, ibytes,
+ max(0, have_bytes - reserved));
if (ibytes < min)
ibytes = 0;
entropy_count = max_t(int, 0,
--
1.8.3.2
* [PATCH 02/14] random: fix a (harmless) overflow
From: Greg Price @ 2013-12-15 2:00 UTC (permalink / raw)
To: Theodore Ts'o; +Cc: linux-kernel
This overflow is harmless except to think about, but it's best
to fix it. If userspace does a giant read from /dev/urandom,
bigger than INT_MAX, then that size gets passed straight
through extract_entropy_user and xfer_secondary_pool to
_xfer_secondary_pool as nbytes, and we would store it into
bytes, which is an int. The result could be negative.
The consequence is pretty small -- we would pull only the minimum
amount of entropy, rather than as much as we could up to the size
of the output pool, and this is urandom so that's fine. But the
code is a little easier to read if we make it clear that overflow
isn't an issue. Also we might be less likely to make mistakes like
the one fixed in the previous commit.
As a bonus, give a name to the minimum number of bytes to pull,
which we use twice.
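The ordering of the clamps is what matters; here is a userspace sketch of the fixed computation (illustrative only, with an assumed 128-byte constant standing in for sizeof(tmp)):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define POOL_BUF_BYTES 128	/* stands in for sizeof(tmp); assumed */

/* Sketch of the fixed clamping: stay in size_t until the value is
 * bounded by the buffer size, so a giant nbytes from userspace can
 * never turn negative when stored in an int. */
static int bytes_to_pull(size_t nbytes, size_t min_bytes)
{
	size_t want = nbytes > min_bytes ? nbytes : min_bytes;

	return (int)(want < POOL_BUF_BYTES ? want : POOL_BUF_BYTES);
}
```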
Signed-off-by: Greg Price <price@mit.edu>
---
drivers/char/random.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 1dd5f2634..92d9f6862 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -922,21 +922,20 @@ static void xfer_secondary_pool(struct entropy_store *r, size_t nbytes)
static void _xfer_secondary_pool(struct entropy_store *r, size_t nbytes)
{
- __u32 tmp[OUTPUT_POOL_WORDS];
+ __u32 tmp[OUTPUT_POOL_WORDS];
+ int bytes, min_bytes;
/* For /dev/random's pool, always leave two wakeups' worth */
int rsvd_bytes = r->limit ? 0 : random_read_wakeup_bits / 4;
- int bytes = nbytes;
/* pull at least as much as a wakeup */
- bytes = max_t(int, bytes, random_read_wakeup_bits / 8);
+ min_bytes = random_read_wakeup_bits / 8;
/* but never more than the buffer size */
- bytes = min_t(int, bytes, sizeof(tmp));
+ bytes = min(sizeof(tmp), max_t(size_t, min_bytes, nbytes));
trace_xfer_secondary_pool(r->name, bytes * 8, nbytes * 8,
ENTROPY_BITS(r), ENTROPY_BITS(r->pull));
- bytes = extract_entropy(r->pull, tmp, bytes,
- random_read_wakeup_bits / 8, rsvd_bytes);
+ bytes = extract_entropy(r->pull, tmp, bytes, min_bytes, rsvd_bytes);
mix_pool_bytes(r, tmp, bytes, NULL);
credit_entropy_bits(r, bytes*8);
}
--
1.8.3.2
* [PATCH 03/14] random: reserve for /dev/random only once /dev/urandom seeded
From: Greg Price @ 2013-12-15 2:01 UTC (permalink / raw)
To: Theodore Ts'o; +Cc: linux-kernel
Early in boot, we really want to make sure the nonblocking pool (for
/dev/urandom and the kernel's own use) gets an adequate amount of
entropy ASAP. Anyone reading /dev/random is prepared to wait
potentially a long time anyway, so delaying them a little bit more at
boot until /dev/urandom is seeded is no big deal. This logic still
ensures that /dev/random readers won't starve indefinitely.
At present most input goes directly to the nonblocking pool early on
anyway, but this helps put us in a position to change that.
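The resulting reservation policy, sketched in userspace (assuming this era's default of random_read_wakeup_bits = 64; the helper is made up for illustration):

```c
#include <assert.h>

#define READ_WAKEUP_BITS 64	/* assumed default for this era */

/* Sketch of the policy: when pulling for the nonblocking pool
 * (limit == 0), reserve two wakeups' worth for /dev/random readers,
 * but only once the nonblocking pool has been seeded. */
static int reserved_bytes(int pool_is_limited, int pool_initialized)
{
	if (!pool_is_limited && pool_initialized)
		return 2 * (READ_WAKEUP_BITS / 8);
	return 0;
}
```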
Signed-off-by: Greg Price <price@mit.edu>
---
drivers/char/random.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 92d9f6862..bf7fedadd 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -923,19 +923,21 @@ static void xfer_secondary_pool(struct entropy_store *r, size_t nbytes)
static void _xfer_secondary_pool(struct entropy_store *r, size_t nbytes)
{
__u32 tmp[OUTPUT_POOL_WORDS];
- int bytes, min_bytes;
-
- /* For /dev/random's pool, always leave two wakeups' worth */
- int rsvd_bytes = r->limit ? 0 : random_read_wakeup_bits / 4;
+ int bytes, min_bytes, reserved_bytes;
/* pull at least as much as a wakeup */
min_bytes = random_read_wakeup_bits / 8;
/* but never more than the buffer size */
bytes = min(sizeof(tmp), max_t(size_t, min_bytes, nbytes));
+ /* reserve some for /dev/random's pool, unless we really need it */
+ reserved_bytes = 0;
+ if (!r->limit && r->initialized)
+ reserved_bytes = 2 * (random_read_wakeup_bits / 8);
+
trace_xfer_secondary_pool(r->name, bytes * 8, nbytes * 8,
ENTROPY_BITS(r), ENTROPY_BITS(r->pull));
- bytes = extract_entropy(r->pull, tmp, bytes, min_bytes, rsvd_bytes);
+ bytes = extract_entropy(r->pull, tmp, bytes, min_bytes, reserved_bytes);
mix_pool_bytes(r, tmp, bytes, NULL);
credit_entropy_bits(r, bytes*8);
}
--
1.8.3.2
* [PATCH 04/14] random: accept small seeds early on
From: Greg Price @ 2013-12-15 2:01 UTC (permalink / raw)
To: Theodore Ts'o; +Cc: linux-kernel
Early in boot, we want to get /dev/urandom (and the kernel's
internal randomness source) adequately seeded ASAP. If we're
desperately short of entropy and are asked to produce output,
we're better off getting, say, 16 bits now and 32 bits next time
rather than holding out for a whole 64-bit reseed while producing
output from virtually no entropy.
At present most input goes directly to the nonblocking pool early on
anyway, but this helps put us in a position to change that.
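Concretely, the minimum seed size we'll accept looks roughly like this (userspace sketch, again assuming the 64-bit default for random_read_wakeup_bits):

```c
#include <assert.h>

#define READ_WAKEUP_BITS 64	/* assumed default */

/* Sketch: minimum bytes worth accepting for a nonblocking-pool
 * reseed.  Before the pool is initialized, settle for enough to
 * double the entropy we already have instead of waiting for a
 * full wakeup's worth. */
static int min_reseed_bytes(int initialized, int entropy_total_bits)
{
	int full = READ_WAKEUP_BITS / 8;
	int doubling = (entropy_total_bits + 7) / 8;

	if (initialized)
		return full;
	return doubling < full ? doubling : full;
}
```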
Signed-off-by: Greg Price <price@mit.edu>
---
drivers/char/random.c | 17 +++++++++++++----
1 file changed, 13 insertions(+), 4 deletions(-)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index bf7fedadd..9f24f6468 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -925,12 +925,21 @@ static void _xfer_secondary_pool(struct entropy_store *r, size_t nbytes)
__u32 tmp[OUTPUT_POOL_WORDS];
int bytes, min_bytes, reserved_bytes;
- /* pull at least as much as a wakeup */
- min_bytes = random_read_wakeup_bits / 8;
- /* but never more than the buffer size */
+ /* Try to pull a full wakeup's worth if we might have just woken up
+ * for it, and a full reseed's worth (which is controlled by the same
+ * parameter) for the nonblocking pool... */
+ if (r == &blocking_pool || r->initialized) {
+ min_bytes = random_read_wakeup_bits / 8;
+ } else {
+ /* ... except if we're hardly seeded at all, we'll settle for
+ * enough to double what we have ... */
+ min_bytes = min(random_read_wakeup_bits / 8,
+ (r->entropy_total+7) / 8);
+ }
+ /* ... and in any event no more than our (giant) buffer holds. */
bytes = min(sizeof(tmp), max_t(size_t, min_bytes, nbytes));
- /* reserve some for /dev/random's pool, unless we really need it */
+ /* Reserve some for /dev/random's pool, unless we really need it. */
reserved_bytes = 0;
if (!r->limit && r->initialized)
reserved_bytes = 2 * (random_read_wakeup_bits / 8);
--
1.8.3.2
* [PATCH 05/14] random: move transfer accounting into account() helper
From: Greg Price @ 2013-12-15 2:01 UTC (permalink / raw)
To: Theodore Ts'o; +Cc: linux-kernel
This brings the logic that determines "min" and "reserved" next to
the closely related logic that uses it, in account(), and reduces
the number of different points involved in communicating "min" and
"reserved" from one place to the other.
This will be particularly helpful in the next commit, where we add
another parameter to account() and extract_entropy().
Signed-off-by: Greg Price <price@mit.edu>
---
drivers/char/random.c | 61 ++++++++++++++++++++++++++++-----------------------
1 file changed, 33 insertions(+), 28 deletions(-)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 9f24f6468..a624262e8 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -896,7 +896,7 @@ void add_disk_randomness(struct gendisk *disk)
*********************************************************************/
static ssize_t extract_entropy(struct entropy_store *r, void *buf,
- size_t nbytes, int min, int rsvd);
+ size_t nbytes, struct entropy_store *dest);
/*
* This utility inline function is responsible for transferring entropy
@@ -920,33 +920,36 @@ static void xfer_secondary_pool(struct entropy_store *r, size_t nbytes)
_xfer_secondary_pool(r, nbytes);
}
-static void _xfer_secondary_pool(struct entropy_store *r, size_t nbytes)
+static void account_xfer(struct entropy_store *dest, int nbytes,
+ int *min_bytes, int *reserved_bytes)
{
- __u32 tmp[OUTPUT_POOL_WORDS];
- int bytes, min_bytes, reserved_bytes;
-
/* Try to pull a full wakeup's worth if we might have just woken up
* for it, and a full reseed's worth (which is controlled by the same
* parameter) for the nonblocking pool... */
- if (r == &blocking_pool || r->initialized) {
- min_bytes = random_read_wakeup_bits / 8;
+ if (dest == &blocking_pool || dest->initialized) {
+ *min_bytes = random_read_wakeup_bits / 8;
} else {
/* ... except if we're hardly seeded at all, we'll settle for
- * enough to double what we have ... */
- min_bytes = min(random_read_wakeup_bits / 8,
- (r->entropy_total+7) / 8);
+ * enough to double what we have. */
+ *min_bytes = min(random_read_wakeup_bits / 8,
+ (dest->entropy_total+7) / 8);
}
- /* ... and in any event no more than our (giant) buffer holds. */
- bytes = min(sizeof(tmp), max_t(size_t, min_bytes, nbytes));
/* Reserve some for /dev/random's pool, unless we really need it. */
- reserved_bytes = 0;
- if (!r->limit && r->initialized)
- reserved_bytes = 2 * (random_read_wakeup_bits / 8);
+ *reserved_bytes = 0;
+ if (!dest->limit && dest->initialized)
+ *reserved_bytes = 2 * (random_read_wakeup_bits / 8);
+}
+static void _xfer_secondary_pool(struct entropy_store *r, size_t nbytes)
+{
+ __u32 tmp[OUTPUT_POOL_WORDS];
+ int bytes;
+
+ bytes = min_t(int, nbytes, sizeof(tmp));
trace_xfer_secondary_pool(r->name, bytes * 8, nbytes * 8,
ENTROPY_BITS(r), ENTROPY_BITS(r->pull));
- bytes = extract_entropy(r->pull, tmp, bytes, min_bytes, reserved_bytes);
+ bytes = extract_entropy(r->pull, tmp, bytes, r);
mix_pool_bytes(r, tmp, bytes, NULL);
credit_entropy_bits(r, bytes*8);
}
@@ -971,13 +974,17 @@ static void push_to_pool(struct work_struct *work)
* This function decides how many bytes to actually take from the
* given pool, and also debits the entropy count accordingly.
*/
-static size_t account(struct entropy_store *r, size_t nbytes, int min,
- int reserved)
+static size_t account(struct entropy_store *r, size_t nbytes,
+ struct entropy_store *dest)
{
- int have_bytes;
+ int have_bytes, min, reserved;
int entropy_count, orig;
size_t ibytes;
+ min = reserved = 0;
+ if (dest != NULL)
+ account_xfer(dest, nbytes, &min, &reserved);
+
BUG_ON(r->entropy_count > r->poolinfo->poolfracbits);
/* Can we pull enough? */
@@ -1077,13 +1084,11 @@ static void extract_buf(struct entropy_store *r, __u8 *out)
* This function extracts randomness from the "entropy pool", and
* returns it in a buffer.
*
- * The min parameter specifies the minimum amount we can pull before
- * failing to avoid races that defeat catastrophic reseeding while the
- * reserved parameter indicates how much entropy we must leave in the
- * pool after each pull to avoid starving other readers.
+ * The 'dest' parameter identifies the pool the entropy is to be used for,
+ * or is NULL if it's not to be used in another pool.
*/
static ssize_t extract_entropy(struct entropy_store *r, void *buf,
- size_t nbytes, int min, int reserved)
+ size_t nbytes, struct entropy_store *dest)
{
ssize_t ret = 0, i;
__u8 tmp[EXTRACT_SIZE];
@@ -1107,7 +1112,7 @@ static ssize_t extract_entropy(struct entropy_store *r, void *buf,
trace_extract_entropy(r->name, nbytes, ENTROPY_BITS(r), _RET_IP_);
xfer_secondary_pool(r, nbytes);
- nbytes = account(r, nbytes, min, reserved);
+ nbytes = account(r, nbytes, dest);
while (nbytes) {
extract_buf(r, tmp);
@@ -1144,7 +1149,7 @@ static ssize_t extract_entropy_user(struct entropy_store *r, void __user *buf,
trace_extract_entropy_user(r->name, nbytes, ENTROPY_BITS(r), _RET_IP_);
xfer_secondary_pool(r, nbytes);
- nbytes = account(r, nbytes, 0, 0);
+ nbytes = account(r, nbytes, NULL);
while (nbytes) {
if (need_resched()) {
@@ -1191,7 +1196,7 @@ void get_random_bytes(void *buf, int nbytes)
nonblocking_pool.entropy_total);
#endif
trace_get_random_bytes(nbytes, _RET_IP_);
- extract_entropy(&nonblocking_pool, buf, nbytes, 0, 0);
+ extract_entropy(&nonblocking_pool, buf, nbytes, NULL);
}
EXPORT_SYMBOL(get_random_bytes);
@@ -1223,7 +1228,7 @@ void get_random_bytes_arch(void *buf, int nbytes)
}
if (nbytes)
- extract_entropy(&nonblocking_pool, p, nbytes, 0, 0);
+ extract_entropy(&nonblocking_pool, p, nbytes, NULL);
}
EXPORT_SYMBOL(get_random_bytes_arch);
--
1.8.3.2
* [PATCH 06/14] random: separate quantity of bytes extracted and entropy to credit
From: Greg Price @ 2013-12-15 2:01 UTC (permalink / raw)
To: Theodore Ts'o; +Cc: linux-kernel
The account() function serves two purposes: it decides how many
bytes of data we allow ourselves to extract from a pool (and as a
corollary how much to debit that pool's entropy estimate), and
when the extraction is for the purpose of transferring to another
pool it also decides how many bits of estimated entropy we should
credit to the other pool.
Introduce an output parameter so that the caller receives the two
values separately. Most callers have no use for the number of bits
to credit, so they simply ignore it.
In isolation this would be useless abstraction as the values are
interchangeable, but the next commit will make them differ.
Signed-off-by: Greg Price <price@mit.edu>
---
drivers/char/random.c | 24 ++++++++++++++----------
1 file changed, 14 insertions(+), 10 deletions(-)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index a624262e8..c11281551 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -896,7 +896,8 @@ void add_disk_randomness(struct gendisk *disk)
*********************************************************************/
static ssize_t extract_entropy(struct entropy_store *r, void *buf,
- size_t nbytes, struct entropy_store *dest);
+ size_t nbytes, struct entropy_store *dest,
+ int *credit_bits);
/*
* This utility inline function is responsible for transferring entropy
@@ -944,14 +945,14 @@ static void account_xfer(struct entropy_store *dest, int nbytes,
static void _xfer_secondary_pool(struct entropy_store *r, size_t nbytes)
{
__u32 tmp[OUTPUT_POOL_WORDS];
- int bytes;
+ int bytes, credit_bits;
bytes = min_t(int, nbytes, sizeof(tmp));
trace_xfer_secondary_pool(r->name, bytes * 8, nbytes * 8,
ENTROPY_BITS(r), ENTROPY_BITS(r->pull));
- bytes = extract_entropy(r->pull, tmp, bytes, r);
+ bytes = extract_entropy(r->pull, tmp, bytes, r, &credit_bits);
mix_pool_bytes(r, tmp, bytes, NULL);
- credit_entropy_bits(r, bytes*8);
+ credit_entropy_bits(r, credit_bits);
}
/*
@@ -975,7 +976,7 @@ static void push_to_pool(struct work_struct *work)
* given pool, and also debits the entropy count accordingly.
*/
static size_t account(struct entropy_store *r, size_t nbytes,
- struct entropy_store *dest)
+ struct entropy_store *dest, int *credit_bits)
{
int have_bytes, min, reserved;
int entropy_count, orig;
@@ -998,6 +999,8 @@ retry:
max(0, have_bytes - reserved));
if (ibytes < min)
ibytes = 0;
+ if (credit_bits != NULL)
+ *credit_bits = ibytes * 8;
entropy_count = max_t(int, 0,
entropy_count - (ibytes << (ENTROPY_SHIFT + 3)));
if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig)
@@ -1088,7 +1091,8 @@ static void extract_buf(struct entropy_store *r, __u8 *out)
* or is NULL if it's not to be used in another pool.
*/
static ssize_t extract_entropy(struct entropy_store *r, void *buf,
- size_t nbytes, struct entropy_store *dest)
+ size_t nbytes, struct entropy_store *dest,
+ int *credit_bits)
{
ssize_t ret = 0, i;
__u8 tmp[EXTRACT_SIZE];
@@ -1112,7 +1116,7 @@ static ssize_t extract_entropy(struct entropy_store *r, void *buf,
trace_extract_entropy(r->name, nbytes, ENTROPY_BITS(r), _RET_IP_);
xfer_secondary_pool(r, nbytes);
- nbytes = account(r, nbytes, dest);
+ nbytes = account(r, nbytes, dest, credit_bits);
while (nbytes) {
extract_buf(r, tmp);
@@ -1149,7 +1153,7 @@ static ssize_t extract_entropy_user(struct entropy_store *r, void __user *buf,
trace_extract_entropy_user(r->name, nbytes, ENTROPY_BITS(r), _RET_IP_);
xfer_secondary_pool(r, nbytes);
- nbytes = account(r, nbytes, NULL);
+ nbytes = account(r, nbytes, NULL, NULL);
while (nbytes) {
if (need_resched()) {
@@ -1196,7 +1200,7 @@ void get_random_bytes(void *buf, int nbytes)
nonblocking_pool.entropy_total);
#endif
trace_get_random_bytes(nbytes, _RET_IP_);
- extract_entropy(&nonblocking_pool, buf, nbytes, NULL);
+ extract_entropy(&nonblocking_pool, buf, nbytes, NULL, NULL);
}
EXPORT_SYMBOL(get_random_bytes);
@@ -1228,7 +1232,7 @@ void get_random_bytes_arch(void *buf, int nbytes)
}
if (nbytes)
- extract_entropy(&nonblocking_pool, p, nbytes, NULL);
+ extract_entropy(&nonblocking_pool, p, nbytes, NULL, NULL);
}
EXPORT_SYMBOL(get_random_bytes_arch);
--
1.8.3.2
* [PATCH 07/14] random: exploit any extra entropy too when reseeding
From: Greg Price @ 2013-12-15 2:01 UTC (permalink / raw)
To: Theodore Ts'o; +Cc: linux-kernel
When extracting from the input pool to feed one of the output
pools, extracting more bytes can't hurt the reseed, and it can
help if there happens to be more entropy than we estimated. We
deliberately try to be conservative in our estimates -- for
example, mixing in the cycle counter on each event but estimating
based on the low-resolution clock -- so this situation is likely.
For example, the authors of http://eprint.iacr.org/2012/251.pdf
found in a multi-week run on a desktop that the actual entropy
from add_input_randomness was much higher than our estimates,
at 9.69 bits min-entropy per event from the cycle counters alone
vs. 1.85 bits estimated.
The only reason to hold back is that we have to debit the input
pool's entropy estimate for every byte we extract, which may delay
us the next time we want to extract from the input pool. But if
we're already leaving the pool practically empty, this isn't much
of a cost. So go ahead and suck up two full extractions, 160 bits.
If we have even more than that and didn't know it, this should
still be a good solid seed.
We just have to make sure not to give ourselves credit for more
entropy than our sober estimates allow. The credit_bits output
parameter takes care of that.
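In userspace terms the accounting looks roughly like this (a sketch, not the kernel code; EXTRACT_SIZE is 10 bytes in random.c, so two extractions are the 160 bits mentioned above):

```c
#include <assert.h>

#define EXTRACT_SIZE 10	/* bytes per extract_buf() output */

/* Sketch: if a reseed drains the pool anyway, pull at least two full
 * extractions (160 bits) to pick up any underestimated entropy, while
 * crediting the destination only for the sober estimate. */
static void drain_reseed(int ibytes_est, int have_bytes,
			 int *ibytes_out, int *credit_bits_out)
{
	int ibytes = ibytes_est;

	*credit_bits_out = ibytes * 8;	/* credit only the estimate */
	if (ibytes && ibytes == have_bytes && ibytes < 2 * EXTRACT_SIZE)
		ibytes = 2 * EXTRACT_SIZE;	/* suck up the rest too */
	*ibytes_out = ibytes;
}
```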
Signed-off-by: Greg Price <price@mit.edu>
---
drivers/char/random.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index c11281551..c2428ecb2 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1001,6 +1001,13 @@ retry:
ibytes = 0;
if (credit_bits != NULL)
*credit_bits = ibytes * 8;
+ if (dest != NULL && ibytes && ibytes == have_bytes) {
+ /* When a reseed drains the pool, we might as well
+ * suck up any underestimated entropy as well as what
+ * we estimate is there. */
+ WARN_ON(credit_bits == NULL);
+ ibytes = max_t(size_t, ibytes, 2*EXTRACT_SIZE);
+ }
entropy_count = max_t(int, 0,
entropy_count - (ibytes << (ENTROPY_SHIFT + 3)));
if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig)
--
1.8.3.2
* [PATCH 08/14] random: rate-limit reseeding only after properly seeded
From: Greg Price @ 2013-12-15 2:01 UTC (permalink / raw)
To: Theodore Ts'o; +Cc: linux-kernel
Until then, we need all the entropy we can get. The "min_bytes"
logic takes care of making these reseeds catastrophic, and
reserving entropy for /dev/random isn't a priority in early boot.
Signed-off-by: Greg Price <price@mit.edu>
---
drivers/char/random.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index c2428ecb2..f55365696 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -907,7 +907,7 @@ static ssize_t extract_entropy(struct entropy_store *r, void *buf,
static void _xfer_secondary_pool(struct entropy_store *r, size_t nbytes);
static void xfer_secondary_pool(struct entropy_store *r, size_t nbytes)
{
- if (r->limit == 0 && random_min_urandom_seed) {
+ if (r->limit == 0 && r->initialized && random_min_urandom_seed) {
unsigned long now = jiffies;
if (time_before(now,
--
1.8.3.2
* [PATCH 09/14] random: reserve entropy for nonblocking pool early on
From: Greg Price @ 2013-12-15 2:01 UTC (permalink / raw)
To: Theodore Ts'o; +Cc: linux-kernel
While booting, our priority is to get the nonblocking pool (which
supplies /dev/urandom and the kernel's internal randomness
consumption) initialized soon. If someone reads from /dev/random,
let them wait until we've either done that, or have enough entropy
to serve them and also do that.
This adds a wrinkle to determining when we're ready for someone to
read from /dev/random, so factor that out.
At present most input goes directly to the nonblocking pool early
on anyway, but this puts us in a position to change that.
Signed-off-by: Greg Price <price@mit.edu>
---
drivers/char/random.c | 25 +++++++++++++++++++------
1 file changed, 19 insertions(+), 6 deletions(-)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index f55365696..58e3e81d4 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -586,6 +586,17 @@ static void fast_mix(struct fast_pool *f, __u32 input[4])
f->count++;
}
+static int
+random_readable(int input_entropy_bits)
+{
+ /* We need enough bits to wake up for ... */
+ int thresh = random_read_wakeup_bits;
+ if (!nonblocking_pool.initialized)
+ /* ... that aren't reserved for the nonblocking pool. */
+ thresh += random_read_wakeup_bits;
+ return input_entropy_bits >= thresh;
+}
+
/*
* Credit (or debit) the entropy store with n bits of entropy.
* Use credit_entropy_bits_safe() if the value comes from userspace
@@ -669,7 +680,7 @@ retry:
int entropy_bits = entropy_count >> ENTROPY_SHIFT;
/* should we wake readers? */
- if (entropy_bits >= random_read_wakeup_bits) {
+ if (random_readable(entropy_bits)) {
wake_up_interruptible(&random_read_wait);
kill_fasync(&fasync, SIGIO, POLL_IN);
}
@@ -936,9 +947,12 @@ static void account_xfer(struct entropy_store *dest, int nbytes,
(dest->entropy_total+7) / 8);
}
- /* Reserve some for /dev/random's pool, unless we really need it. */
+ /* Reserve a reseed's worth for the nonblocking pool early on
+ * when we really need it; later, reserve some for /dev/random */
*reserved_bytes = 0;
- if (!dest->limit && dest->initialized)
+ if (dest == &blocking_pool && !nonblocking_pool.initialized)
+ *reserved_bytes = random_read_wakeup_bits / 8;
+ else if (dest == &nonblocking_pool && dest->initialized)
*reserved_bytes = 2 * (random_read_wakeup_bits / 8);
}
@@ -1329,8 +1343,7 @@ random_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos)
return -EAGAIN;
wait_event_interruptible(random_read_wait,
- ENTROPY_BITS(&input_pool) >=
- random_read_wakeup_bits);
+ random_readable(ENTROPY_BITS(&input_pool)));
if (signal_pending(current))
return -ERESTARTSYS;
}
@@ -1361,7 +1374,7 @@ random_poll(struct file *file, poll_table * wait)
poll_wait(file, &random_read_wait, wait);
poll_wait(file, &random_write_wait, wait);
mask = 0;
- if (ENTROPY_BITS(&input_pool) >= random_read_wakeup_bits)
+ if (random_readable(ENTROPY_BITS(&input_pool)))
mask |= POLLIN | POLLRDNORM;
if (ENTROPY_BITS(&input_pool) < random_write_wakeup_bits)
mask |= POLLOUT | POLLWRNORM;
--
1.8.3.2
* [PATCH 10/14] random: direct all routine input via input pool
From: Greg Price @ 2013-12-15 2:01 UTC (permalink / raw)
To: Theodore Ts'o; +Cc: linux-kernel
This lets us control when the nonblocking pool is reseeded from
input, even early in boot. Our normal reseed mechanisms then
ensure that the input is used in "catastrophic reseeds",
high-entropy blobs added all at once with no possibility of output
between one part of the input and another. If we alternate
reseed, output, reseed, output, etc., and an attacker sees the
output, then they can brute-force one reseed at a time, so that
our randomness is only slightly better than the single best reseed.
The preceding commits should ensure that until we're initialized,
we get entropy into the nonblocking pool ASAP, without rate-limiting,
without sending any to the blocking pool, and without losing any extra
entropy beyond our estimates, limited only by the need to batch it up into
large reseeds. This should accomplish the important job of
getting us quickly seeded that was previously accomplished by
sending the input straight to the nonblocking pool early on.
Signed-off-by: Greg Price <price@mit.edu>
---
drivers/char/random.c | 19 ++++++-------------
1 file changed, 6 insertions(+), 13 deletions(-)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 58e3e81d4..ea389723f 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -753,11 +753,6 @@ void add_device_randomness(const void *buf, unsigned int size)
_mix_pool_bytes(&input_pool, buf, size, NULL);
_mix_pool_bytes(&input_pool, &time, sizeof(time), NULL);
spin_unlock_irqrestore(&input_pool.lock, flags);
-
- spin_lock_irqsave(&nonblocking_pool.lock, flags);
- _mix_pool_bytes(&nonblocking_pool, buf, size, NULL);
- _mix_pool_bytes(&nonblocking_pool, &time, sizeof(time), NULL);
- spin_unlock_irqrestore(&nonblocking_pool.lock, flags);
}
EXPORT_SYMBOL(add_device_randomness);
@@ -775,7 +770,6 @@ static struct timer_rand_state input_timer_state = INIT_TIMER_RAND_STATE;
*/
static void add_timer_randomness(struct timer_rand_state *state, unsigned num)
{
- struct entropy_store *r;
struct {
long jiffies;
unsigned cycles;
@@ -788,8 +782,7 @@ static void add_timer_randomness(struct timer_rand_state *state, unsigned num)
sample.jiffies = jiffies;
sample.cycles = random_get_entropy();
sample.num = num;
- r = nonblocking_pool.initialized ? &input_pool : &nonblocking_pool;
- mix_pool_bytes(r, &sample, sizeof(sample), NULL);
+ mix_pool_bytes(&input_pool, &sample, sizeof(sample), NULL);
/*
* Calculate number of bits of randomness we probably added.
@@ -823,7 +816,7 @@ static void add_timer_randomness(struct timer_rand_state *state, unsigned num)
* Round down by 1 bit on general principles,
* and limit entropy estimate to 11 bits.
*/
- credit_entropy_bits(r, min_t(int, fls(delta>>1), 11));
+ credit_entropy_bits(&input_pool, min_t(int, fls(delta>>1), 11));
}
preempt_enable();
}
@@ -848,7 +841,6 @@ static DEFINE_PER_CPU(struct fast_pool, irq_randomness);
void add_interrupt_randomness(int irq, int irq_flags)
{
- struct entropy_store *r;
struct fast_pool *fast_pool = &__get_cpu_var(irq_randomness);
struct pt_regs *regs = get_irq_regs();
unsigned long now = jiffies;
@@ -871,8 +863,9 @@ void add_interrupt_randomness(int irq, int irq_flags)
fast_pool->last = now;
- r = nonblocking_pool.initialized ? &input_pool : &nonblocking_pool;
- __mix_pool_bytes(r, &fast_pool->pool, sizeof(fast_pool->pool), NULL);
+ __mix_pool_bytes(&input_pool,
+ &fast_pool->pool, sizeof(fast_pool->pool),
+ NULL);
/*
* If we don't have a valid cycle counter, and we see
* back-to-back timer interrupts, then skip giving credit for
@@ -886,7 +879,7 @@ void add_interrupt_randomness(int irq, int irq_flags)
} else
fast_pool->last_timer_intr = 0;
}
- credit_entropy_bits(r, 1);
+ credit_entropy_bits(&input_pool, 1);
}
#ifdef CONFIG_BLOCK
--
1.8.3.2
* [PATCH 11/14] random: separate entropy since auto-push from entropy_total
2013-12-15 2:00 [PATCH 00/14] random: rework reseeding Greg Price
` (9 preceding siblings ...)
2013-12-15 2:01 ` [PATCH 10/14] random: direct all routine input via input pool Greg Price
@ 2013-12-15 2:01 ` Greg Price
2013-12-15 2:01 ` [PATCH 12/14] random: separate minimum reseed size from minimum /dev/random read Greg Price
` (2 subsequent siblings)
13 siblings, 0 replies; 15+ messages in thread
From: Greg Price @ 2013-12-15 2:01 UTC (permalink / raw)
To: Theodore Ts'o; +Cc: linux-kernel
We're using and updating the entropy_total field for different
purposes on input_pool and nonblocking_pool, only one of which
matches the name. This makes it hard to understand what the field
means.
Separate the computation on input_pool, which is of entropy since
the last auto-push, from the computation on nonblocking_pool.
Also compute 'initialized' only for nonblocking_pool, which is the
only place where the concept really makes sense.
Signed-off-by: Greg Price <price@mit.edu>
---
drivers/char/random.c | 17 +++++++++--------
include/trace/events/random.h | 24 ++++++++++++++----------
2 files changed, 23 insertions(+), 18 deletions(-)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index ea389723f..1f9c69662 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -425,6 +425,7 @@ struct entropy_store {
unsigned short add_ptr;
unsigned short input_rotate;
int entropy_count;
+ int entropy_since_push;
int entropy_total;
unsigned int initialized:1;
unsigned int limit:1;
@@ -662,11 +663,10 @@ retry:
if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig)
goto retry;
- r->entropy_total += nbits;
- if (!r->initialized && r->entropy_total > 128) {
- r->initialized = 1;
- r->entropy_total = 0;
- if (r == &nonblocking_pool) {
+ if (r == &nonblocking_pool) {
+ r->entropy_total += nbits;
+ if (!r->initialized && r->entropy_total > 128) {
+ r->initialized = 1;
prandom_reseed_late();
pr_notice("random: %s pool is initialized\n", r->name);
}
@@ -674,6 +674,7 @@ retry:
trace_credit_entropy_bits(r->name, nbits,
entropy_count >> ENTROPY_SHIFT,
+ r->entropy_since_push,
r->entropy_total, _RET_IP_);
if (r == &input_pool) {
@@ -689,9 +690,9 @@ retry:
* forth between them, until the output pools are 75%
* full.
*/
+ r->entropy_since_push += nbits;
if (entropy_bits > random_write_wakeup_bits &&
- r->initialized &&
- r->entropy_total >= 2*random_read_wakeup_bits) {
+ r->entropy_since_push >= 2*random_read_wakeup_bits) {
static struct entropy_store *last = &blocking_pool;
struct entropy_store *other = &blocking_pool;
@@ -703,7 +704,7 @@ retry:
if (last->entropy_count <=
3 * last->poolinfo->poolfracbits / 4) {
schedule_work(&last->push_work);
- r->entropy_total = 0;
+ r->entropy_since_push = 0;
}
}
}
diff --git a/include/trace/events/random.h b/include/trace/events/random.h
index 805af6db4..4edf5ceb5 100644
--- a/include/trace/events/random.h
+++ b/include/trace/events/random.h
@@ -61,29 +61,33 @@ DEFINE_EVENT(random__mix_pool_bytes, mix_pool_bytes_nolock,
TRACE_EVENT(credit_entropy_bits,
TP_PROTO(const char *pool_name, int bits, int entropy_count,
- int entropy_total, unsigned long IP),
+ int entropy_since_push, int entropy_total, unsigned long IP),
- TP_ARGS(pool_name, bits, entropy_count, entropy_total, IP),
+ TP_ARGS(pool_name, bits, entropy_count, entropy_since_push,
+ entropy_total, IP),
TP_STRUCT__entry(
__field( const char *, pool_name )
__field( int, bits )
__field( int, entropy_count )
+ __field( int, entropy_since_push )
__field( int, entropy_total )
__field(unsigned long, IP )
),
TP_fast_assign(
- __entry->pool_name = pool_name;
- __entry->bits = bits;
- __entry->entropy_count = entropy_count;
- __entry->entropy_total = entropy_total;
- __entry->IP = IP;
+ __entry->pool_name = pool_name;
+ __entry->bits = bits;
+ __entry->entropy_count = entropy_count;
+ __entry->entropy_since_push = entropy_since_push;
+ __entry->entropy_total = entropy_total;
+ __entry->IP = IP;
),
- TP_printk("%s pool: bits %d entropy_count %d entropy_total %d "
- "caller %pF", __entry->pool_name, __entry->bits,
- __entry->entropy_count, __entry->entropy_total,
+ TP_printk("%s pool: bits %d entropy_count %d entropy_since_push %d "
+ "entropy_total %d caller %pF", __entry->pool_name,
+ __entry->bits, __entry->entropy_count,
+ __entry->entropy_since_push, __entry->entropy_total,
(void *)__entry->IP)
);
--
1.8.3.2
* [PATCH 12/14] random: separate minimum reseed size from minimum /dev/random read
2013-12-15 2:00 [PATCH 00/14] random: rework reseeding Greg Price
` (10 preceding siblings ...)
2013-12-15 2:01 ` [PATCH 11/14] random: separate entropy since auto-push from entropy_total Greg Price
@ 2013-12-15 2:01 ` Greg Price
2013-12-15 2:01 ` [PATCH 13/14] random: count only catastrophic reseeds for initialization Greg Price
2013-12-15 2:02 ` [PATCH 14/14] random: target giant reseeds, to be conservative Greg Price
13 siblings, 0 replies; 15+ messages in thread
From: Greg Price @ 2013-12-15 2:01 UTC (permalink / raw)
To: Theodore Ts'o; +Cc: linux-kernel
We've used random_read_wakeup_bits for two quite different purposes
that may be best with different values. The minimum number of bits
to wake up a blocked /dev/random reader has long been 64 by
default, and users may want to keep it there. The minimum number
of bits in a seed for /dev/urandom and the kernel's general use, on
the other hand, should be at least 128 for good commercial security
and users may want it higher.
Make a new parameter for the minimum size of a reseed, and make it
128 by default.
Signed-off-by: Greg Price <price@mit.edu>
---
drivers/char/random.c | 46 ++++++++++++++++++++++++++++++++--------------
1 file changed, 32 insertions(+), 14 deletions(-)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 1f9c69662..b354fd15f 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -292,8 +292,14 @@
#define ENTROPY_BITS(r) ((r)->entropy_count >> ENTROPY_SHIFT)
/*
+ * The minimum number of bits of estimated entropy to use in a reseed
+ * of the main output pool.
+ */
+static int min_reseed_bits = 128;
+
+/*
* The minimum number of bits of entropy before we wake up a read on
- * /dev/random. Should be enough to do a significant reseed.
+ * /dev/random.
*/
static int random_read_wakeup_bits = 64;
@@ -594,7 +600,7 @@ random_readable(int input_entropy_bits)
int thresh = random_read_wakeup_bits;
if (!nonblocking_pool.initialized)
/* ... that aren't reserved for the nonblocking pool. */
- thresh += random_read_wakeup_bits;
+ thresh += min_reseed_bits;
return input_entropy_bits >= thresh;
}
@@ -665,7 +671,7 @@ retry:
if (r == &nonblocking_pool) {
r->entropy_total += nbits;
- if (!r->initialized && r->entropy_total > 128) {
+ if (!r->initialized && r->entropy_total >= min_reseed_bits) {
r->initialized = 1;
prandom_reseed_late();
pr_notice("random: %s pool is initialized\n", r->name);
@@ -692,7 +698,7 @@ retry:
*/
r->entropy_since_push += nbits;
if (entropy_bits > random_write_wakeup_bits &&
- r->entropy_since_push >= 2*random_read_wakeup_bits) {
+ r->entropy_since_push >= min_reseed_bits) {
static struct entropy_store *last = &blocking_pool;
struct entropy_store *other = &blocking_pool;
@@ -929,15 +935,15 @@ static void xfer_secondary_pool(struct entropy_store *r, size_t nbytes)
static void account_xfer(struct entropy_store *dest, int nbytes,
int *min_bytes, int *reserved_bytes)
{
- /* Try to pull a full wakeup's worth if we might have just woken up
- * for it, and a full reseed's worth (which is controlled by the same
- * parameter) for the nonblocking pool... */
- if (dest == &blocking_pool || dest->initialized) {
+ /* Try to pull a full wakeup's worth if we might have just
+ * woken up for it... */
+ if (dest == &blocking_pool) {
*min_bytes = random_read_wakeup_bits / 8;
} else {
- /* ... except if we're hardly seeded at all, we'll settle for
- * enough to double what we have. */
- *min_bytes = min(random_read_wakeup_bits / 8,
+ /* ... or a full reseed's worth for the nonblocking
+ * pool, except if we're hardly seeded at all, we'll
+ * settle for enough to double what we have. */
+ *min_bytes = min(min_reseed_bits / 8,
(dest->entropy_total+7) / 8);
}
@@ -945,7 +951,7 @@ static void account_xfer(struct entropy_store *dest, int nbytes,
* when we really need it; later, reserve some for /dev/random */
*reserved_bytes = 0;
if (dest == &blocking_pool && !nonblocking_pool.initialized)
- *reserved_bytes = random_read_wakeup_bits / 8;
+ *reserved_bytes = min_reseed_bits / 8;
else if (dest == &nonblocking_pool && dest->initialized)
*reserved_bytes = 2 * (random_read_wakeup_bits / 8);
}
@@ -974,7 +980,7 @@ static void push_to_pool(struct work_struct *work)
struct entropy_store *r = container_of(work, struct entropy_store,
push_work);
BUG_ON(!r);
- _xfer_secondary_pool(r, random_read_wakeup_bits/8);
+ _xfer_secondary_pool(r, min_reseed_bits/8);
trace_push_to_pool(r->name, r->entropy_count >> ENTROPY_SHIFT,
r->pull->entropy_count >> ENTROPY_SHIFT);
}
@@ -1516,8 +1522,11 @@ EXPORT_SYMBOL(generate_random_uuid);
#include <linux/sysctl.h>
-static int min_read_thresh = 8, min_write_thresh;
+static int min_min_reseed_bits = 32;
+static int max_min_reseed_bits = OUTPUT_POOL_WORDS * 32;
+static int min_read_thresh = 8;
static int max_read_thresh = OUTPUT_POOL_WORDS * 32;
+static int min_write_thresh;
static int max_write_thresh = INPUT_POOL_WORDS * 32;
static char sysctl_bootid[16];
@@ -1592,6 +1601,15 @@ struct ctl_table random_table[] = {
.data = &input_pool.entropy_count,
},
{
+ .procname = "min_reseed_bits",
+ .data = &min_reseed_bits,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec_minmax,
+ .extra1 = &min_min_reseed_bits,
+ .extra2 = &max_min_reseed_bits,
+ },
+ {
.procname = "read_wakeup_threshold",
.data = &random_read_wakeup_bits,
.maxlen = sizeof(int),
--
1.8.3.2
* [PATCH 13/14] random: count only catastrophic reseeds for initialization
2013-12-15 2:00 [PATCH 00/14] random: rework reseeding Greg Price
` (11 preceding siblings ...)
2013-12-15 2:01 ` [PATCH 12/14] random: separate minimum reseed size from minimum /dev/random read Greg Price
@ 2013-12-15 2:01 ` Greg Price
2013-12-15 2:02 ` [PATCH 14/14] random: target giant reseeds, to be conservative Greg Price
13 siblings, 0 replies; 15+ messages in thread
From: Greg Price @ 2013-12-15 2:01 UTC (permalink / raw)
To: Theodore Ts'o; +Cc: linux-kernel
In the earlier commit "random: direct all routine input via input pool",
we made sure that input comes in large reseeds which should be big
enough to prevent an attacker from brute-forcing any one of them.
This is important because a succession of small reseeds, if
interspersed with output an attacker sees, could be brute-forced one
by one. Now update our accounting accordingly so that we track the
largest single reseed, rather than the total of potentially small
reseeds, and call ourselves initialized only with one large reseed.
This shouldn't make much difference with the current code, as we
don't make repeated small reseeds anyway, but it's best to be clear.
Rename entropy_total to seed_entropy_bits to reflect its new function.
While we're touching the not-yet-initialized warnings: checkpatch complains
about printk(KERN_NOTICE ...), so switch to pr_notice() and friends.
Signed-off-by: Greg Price <price@mit.edu>
---
drivers/char/random.c | 24 ++++++++++++------------
include/trace/events/random.h | 13 +++++++------
2 files changed, 19 insertions(+), 18 deletions(-)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index b354fd15f..855e401e5 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -432,7 +432,7 @@ struct entropy_store {
unsigned short input_rotate;
int entropy_count;
int entropy_since_push;
- int entropy_total;
+ int seed_entropy_bits;
unsigned int initialized:1;
unsigned int limit:1;
unsigned int last_data_init:1;
@@ -670,8 +670,9 @@ retry:
goto retry;
if (r == &nonblocking_pool) {
- r->entropy_total += nbits;
- if (!r->initialized && r->entropy_total >= min_reseed_bits) {
+ r->seed_entropy_bits = max(nbits, r->seed_entropy_bits);
+ if (!r->initialized &&
+ r->seed_entropy_bits >= min_reseed_bits) {
r->initialized = 1;
prandom_reseed_late();
pr_notice("random: %s pool is initialized\n", r->name);
@@ -681,7 +682,7 @@ retry:
trace_credit_entropy_bits(r->name, nbits,
entropy_count >> ENTROPY_SHIFT,
r->entropy_since_push,
- r->entropy_total, _RET_IP_);
+ r->seed_entropy_bits, _RET_IP_);
if (r == &input_pool) {
int entropy_bits = entropy_count >> ENTROPY_SHIFT;
@@ -944,7 +945,7 @@ static void account_xfer(struct entropy_store *dest, int nbytes,
* pool, except if we're hardly seeded at all, we'll
* settle for enough to double what we have. */
*min_bytes = min(min_reseed_bits / 8,
- (dest->entropy_total+7) / 8);
+ (2*dest->seed_entropy_bits + 7) / 8);
}
/* Reserve a reseed's worth for the nonblocking pool early on
@@ -1215,10 +1216,9 @@ void get_random_bytes(void *buf, int nbytes)
{
#if DEBUG_RANDOM_BOOT > 0
if (unlikely(nonblocking_pool.initialized == 0))
- printk(KERN_NOTICE "random: %pF get_random_bytes called "
- "with %d bits of entropy available\n",
- (void *) _RET_IP_,
- nonblocking_pool.entropy_total);
+ pr_notice(
+ "random: %pF get_random_bytes called with only %d bits of seed entropy available\n",
+ (void *) _RET_IP_, nonblocking_pool.seed_entropy_bits);
#endif
trace_get_random_bytes(nbytes, _RET_IP_);
extract_entropy(&nonblocking_pool, buf, nbytes, NULL, NULL);
@@ -1355,9 +1355,9 @@ urandom_read(struct file *file, char __user *buf, size_t nbytes, loff_t *ppos)
int ret;
if (unlikely(nonblocking_pool.initialized == 0))
- printk_once(KERN_NOTICE "random: %s urandom read "
- "with %d bits of entropy available\n",
- current->comm, nonblocking_pool.entropy_total);
+ pr_notice_once(
+ "random: %s urandom read with only %d bits of seed entropy available\n",
+ current->comm, nonblocking_pool.seed_entropy_bits);
ret = extract_entropy_user(&nonblocking_pool, buf, nbytes);
diff --git a/include/trace/events/random.h b/include/trace/events/random.h
index 4edf5ceb5..d07a80146 100644
--- a/include/trace/events/random.h
+++ b/include/trace/events/random.h
@@ -61,17 +61,18 @@ DEFINE_EVENT(random__mix_pool_bytes, mix_pool_bytes_nolock,
TRACE_EVENT(credit_entropy_bits,
TP_PROTO(const char *pool_name, int bits, int entropy_count,
- int entropy_since_push, int entropy_total, unsigned long IP),
+ int entropy_since_push, int seed_entropy_bits,
+ unsigned long IP),
TP_ARGS(pool_name, bits, entropy_count, entropy_since_push,
- entropy_total, IP),
+ seed_entropy_bits, IP),
TP_STRUCT__entry(
__field( const char *, pool_name )
__field( int, bits )
__field( int, entropy_count )
__field( int, entropy_since_push )
- __field( int, entropy_total )
+ __field( int, seed_entropy_bits )
__field(unsigned long, IP )
),
@@ -80,14 +81,14 @@ TRACE_EVENT(credit_entropy_bits,
__entry->bits = bits;
__entry->entropy_count = entropy_count;
__entry->entropy_since_push = entropy_since_push;
- __entry->entropy_total = entropy_total;
+ __entry->seed_entropy_bits = seed_entropy_bits;
__entry->IP = IP;
),
TP_printk("%s pool: bits %d entropy_count %d entropy_since_push %d "
- "entropy_total %d caller %pF", __entry->pool_name,
+ "seed_entropy_bits %d caller %pF", __entry->pool_name,
__entry->bits, __entry->entropy_count,
- __entry->entropy_since_push, __entry->entropy_total,
+ __entry->entropy_since_push, __entry->seed_entropy_bits,
(void *)__entry->IP)
);
--
1.8.3.2
* [PATCH 14/14] random: target giant reseeds, to be conservative
2013-12-15 2:00 [PATCH 00/14] random: rework reseeding Greg Price
` (12 preceding siblings ...)
2013-12-15 2:01 ` [PATCH 13/14] random: count only catastrophic reseeds for initialization Greg Price
@ 2013-12-15 2:02 ` Greg Price
13 siblings, 0 replies; 15+ messages in thread
From: Greg Price @ 2013-12-15 2:02 UTC (permalink / raw)
To: Theodore Ts'o; +Cc: linux-kernel
A 128-bit seed provides reasonable security. We don't consider
ourselves initialized until we get a seed which we estimate has
entropy min_reseed_bits, by default 128. Our entropy estimates
are generally conservative (see e.g. the empirical analysis in
http://eprint.iacr.org/2012/251.pdf), but entropy estimation is
unavoidably heuristic and there may be circumstances where they
are too optimistic.
To hedge against this risk, even after getting a seed of minimum
size we continue taking bigger reseeds until we reach by default
512 bits of estimated entropy per reseed. Hopefully it should be
difficult to make our entropy estimates a factor of 4 too high.
As a bonus, when the estimates are good, this gives us seeds which
can't be brute-forced within the universe under the known laws of
physics, which ought to really be enough for anybody.
This hedging addresses the same issue that motivates systems like
Fortuna. Our change doesn't go as far in that direction as Fortuna,
but it's much simpler.
The cost is that reseeds will happen only about a quarter as often
(by default). This is not really a critical issue, as frequent reseeds
mainly help us "recover" if someone glimpses the internal state --
which is largely an academic question, given what an attacker who can
read kernel memory is usually able to do. We still take a
regular-sized seed up front so as not to delay getting initialized.
Signed-off-by: Greg Price <price@mit.edu>
---
drivers/char/random.c | 38 ++++++++++++++++++++++++++++----------
1 file changed, 28 insertions(+), 10 deletions(-)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 855e401e5..79aee65fe 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -293,11 +293,20 @@
/*
* The minimum number of bits of estimated entropy to use in a reseed
- * of the main output pool.
+ * of the main output pool (for /dev/urandom and the kernel's internal
+ * use) before considering it secure.
*/
static int min_reseed_bits = 128;
/*
+ * The number of bits of estimated entropy to use in a reseed of the
+ * main output pool in the steady state. If this is larger than
+ * min_reseed_bits, then it serves as a hedge against situations where
+ * our entropy estimates are for whatever reason too optimistic.
+ */
+static int target_reseed_bits = 512;
+
+/*
* The minimum number of bits of entropy before we wake up a read on
* /dev/random.
*/
@@ -699,7 +708,7 @@ retry:
*/
r->entropy_since_push += nbits;
if (entropy_bits > random_write_wakeup_bits &&
- r->entropy_since_push >= min_reseed_bits) {
+ r->entropy_since_push >= target_reseed_bits) {
static struct entropy_store *last = &blocking_pool;
struct entropy_store *other = &blocking_pool;
@@ -942,9 +951,9 @@ static void account_xfer(struct entropy_store *dest, int nbytes,
*min_bytes = random_read_wakeup_bits / 8;
} else {
/* ... or a full reseed's worth for the nonblocking
- * pool, except if we're hardly seeded at all, we'll
- * settle for enough to double what we have. */
- *min_bytes = min(min_reseed_bits / 8,
+ * pool, except early on we'll settle for enough to
+ * double what we have. */
+ *min_bytes = min(target_reseed_bits / 8,
(2*dest->seed_entropy_bits + 7) / 8);
}
@@ -981,7 +990,7 @@ static void push_to_pool(struct work_struct *work)
struct entropy_store *r = container_of(work, struct entropy_store,
push_work);
BUG_ON(!r);
- _xfer_secondary_pool(r, min_reseed_bits/8);
+ _xfer_secondary_pool(r, target_reseed_bits/8);
trace_push_to_pool(r->name, r->entropy_count >> ENTROPY_SHIFT,
r->pull->entropy_count >> ENTROPY_SHIFT);
}
@@ -1522,8 +1531,8 @@ EXPORT_SYMBOL(generate_random_uuid);
#include <linux/sysctl.h>
-static int min_min_reseed_bits = 32;
-static int max_min_reseed_bits = OUTPUT_POOL_WORDS * 32;
+static int hard_min_reseed_bits = 32;
+static int max_reseed_bits = OUTPUT_POOL_WORDS * 32;
static int min_read_thresh = 8;
static int max_read_thresh = OUTPUT_POOL_WORDS * 32;
static int min_write_thresh;
@@ -1606,8 +1615,17 @@ struct ctl_table random_table[] = {
.maxlen = sizeof(int),
.mode = 0644,
.proc_handler = proc_dointvec_minmax,
- .extra1 = &min_min_reseed_bits,
- .extra2 = &max_min_reseed_bits,
+ .extra1 = &hard_min_reseed_bits,
+ .extra2 = &target_reseed_bits,
+ },
+ {
+ .procname = "target_reseed_bits",
+ .data = &target_reseed_bits,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec_minmax,
+ .extra1 = &min_reseed_bits,
+ .extra2 = &max_reseed_bits,
},
{
.procname = "read_wakeup_threshold",
--
1.8.3.2
end of thread, other threads:[~2013-12-15 2:02 UTC | newest]
Thread overview: 15+ messages
2013-12-15 2:00 [PATCH 00/14] random: rework reseeding Greg Price
2013-12-15 2:00 ` [PATCH 01/14] random: fix signedness bug Greg Price
2013-12-15 2:00 ` [PATCH 02/14] random: fix a (harmless) overflow Greg Price
2013-12-15 2:01 ` [PATCH 03/14] random: reserve for /dev/random only once /dev/urandom seeded Greg Price
2013-12-15 2:01 ` [PATCH 04/14] random: accept small seeds early on Greg Price
2013-12-15 2:01 ` [PATCH 05/14] random: move transfer accounting into account() helper Greg Price
2013-12-15 2:01 ` [PATCH 06/14] random: separate quantity of bytes extracted and entropy to credit Greg Price
2013-12-15 2:01 ` [PATCH 07/14] random: exploit any extra entropy too when reseeding Greg Price
2013-12-15 2:01 ` [PATCH 08/14] random: rate-limit reseeding only after properly seeded Greg Price
2013-12-15 2:01 ` [PATCH 09/14] random: reserve entropy for nonblocking pool early on Greg Price
2013-12-15 2:01 ` [PATCH 10/14] random: direct all routine input via input pool Greg Price
2013-12-15 2:01 ` [PATCH 11/14] random: separate entropy since auto-push from entropy_total Greg Price
2013-12-15 2:01 ` [PATCH 12/14] random: separate minimum reseed size from minimum /dev/random read Greg Price
2013-12-15 2:01 ` [PATCH 13/14] random: count only catastrophic reseeds for initialization Greg Price
2013-12-15 2:02 ` [PATCH 14/14] random: target giant reseeds, to be conservative Greg Price