public inbox for linux-kernel@vger.kernel.org
* [PATCH-v2 1/4] random: always update the entropy pool under the spinlock
@ 2014-06-14  7:15 Theodore Ts'o
  2014-06-14  7:15 ` [PATCH-v2 2/4] random: remove unneeded hash of a portion of the entropy pool Theodore Ts'o
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Theodore Ts'o @ 2014-06-14  7:15 UTC (permalink / raw)
  To: Linux Kernel Developers List; +Cc: Theodore Ts'o, George Spelvin

Instead of using the lockless techniques introduced in commit
902c098a3663, use spin_trylock to try to grab the entropy pool's lock.
If we can't get the lock, then just try again on the next interrupt.

Based on discussions with George Spelvin.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: George Spelvin <linux@horizon.com>
---
 drivers/char/random.c | 44 +++++++++++++++++++++++---------------------
 1 file changed, 23 insertions(+), 21 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 102c50d..01538b4 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -495,9 +495,8 @@ static void _mix_pool_bytes(struct entropy_store *r, const void *in,
 	tap4 = r->poolinfo->tap4;
 	tap5 = r->poolinfo->tap5;
 
-	smp_rmb();
-	input_rotate = ACCESS_ONCE(r->input_rotate);
-	i = ACCESS_ONCE(r->add_ptr);
+	input_rotate = r->input_rotate;
+	i = r->add_ptr;
 
 	/* mix one byte at a time to simplify size handling and churn faster */
 	while (nbytes--) {
@@ -524,9 +523,8 @@ static void _mix_pool_bytes(struct entropy_store *r, const void *in,
 		input_rotate = (input_rotate + (i ? 7 : 14)) & 31;
 	}
 
-	ACCESS_ONCE(r->input_rotate) = input_rotate;
-	ACCESS_ONCE(r->add_ptr) = i;
-	smp_wmb();
+	r->input_rotate = input_rotate;
+	r->add_ptr = i;
 
 	if (out)
 		for (j = 0; j < 16; j++)
@@ -845,7 +843,7 @@ void add_interrupt_randomness(int irq, int irq_flags)
 	__u32			input[4], c_high, j_high;
 	__u64			ip;
 	unsigned long		seed;
-	int			credit;
+	int			credit = 0;
 
 	c_high = (sizeof(cycles) > 4) ? cycles >> 32 : 0;
 	j_high = (sizeof(now) > 4) ? now >> 32 : 0;
@@ -860,36 +858,40 @@ void add_interrupt_randomness(int irq, int irq_flags)
 	if ((fast_pool->count & 63) && !time_after(now, fast_pool->last + HZ))
 		return;
 
-	fast_pool->last = now;
-
 	r = nonblocking_pool.initialized ? &input_pool : &nonblocking_pool;
+	if (!spin_trylock(&r->lock)) {
+		fast_pool->count--;
+		return;
+	}
+	fast_pool->last = now;
 	__mix_pool_bytes(r, &fast_pool->pool, sizeof(fast_pool->pool), NULL);
 
 	/*
+	 * If we have architectural seed generator, produce a seed and
+	 * add it to the pool.  For the sake of paranoia count it as
+	 * 50% entropic.
+	 */
+	if (arch_get_random_seed_long(&seed)) {
+		__mix_pool_bytes(r, &seed, sizeof(seed), NULL);
+		credit += sizeof(seed) * 4;
+	}
+	spin_unlock(&r->lock);
+
+	/*
 	 * If we don't have a valid cycle counter, and we see
 	 * back-to-back timer interrupts, then skip giving credit for
 	 * any entropy, otherwise credit 1 bit.
 	 */
-	credit = 1;
+	credit++;
 	if (cycles == 0) {
 		if (irq_flags & __IRQF_TIMER) {
 			if (fast_pool->last_timer_intr)
-				credit = 0;
+				credit--;
 			fast_pool->last_timer_intr = 1;
 		} else
 			fast_pool->last_timer_intr = 0;
 	}
 
-	/*
-	 * If we have architectural seed generator, produce a seed and
-	 * add it to the pool.  For the sake of paranoia count it as
-	 * 50% entropic.
-	 */
-	if (arch_get_random_seed_long(&seed)) {
-		__mix_pool_bytes(r, &seed, sizeof(seed), NULL);
-		credit += sizeof(seed) * 4;
-	}
-
 	credit_entropy_bits(r, credit);
 }
 
-- 
2.0.0



* [PATCH-v2 2/4] random: remove unneeded hash of a portion of the entropy pool
  2014-06-14  7:15 [PATCH-v2 1/4] random: always update the entropy pool under the spinlock Theodore Ts'o
@ 2014-06-14  7:15 ` Theodore Ts'o
  2014-06-14  7:15 ` [PATCH-v2 3/4] random: only update the last_pulled time if we actually transferred entropy Theodore Ts'o
  2014-06-14  7:15 ` [PATCH-v2 4/4] random: clean up interrupt entropy accounting for archs w/o cycle counters Theodore Ts'o
  2 siblings, 0 replies; 7+ messages in thread
From: Theodore Ts'o @ 2014-06-14  7:15 UTC (permalink / raw)
  To: Linux Kernel Developers List; +Cc: Theodore Ts'o, George Spelvin

We previously extracted a portion of the entropy pool in
mix_pool_bytes() and hashed it in, to prevent racing CPUs from
returning duplicate random values.  Now that we are using a spinlock
to prevent this from happening, this is no longer necessary, so remove
it to simplify the code a bit.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: George Spelvin <linux@horizon.com>
---
 drivers/char/random.c | 51 ++++++++++++++++++++-------------------------------
 1 file changed, 20 insertions(+), 31 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 01538b4..a74c92a 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -481,9 +481,9 @@ static __u32 const twist_table[8] = {
  * the entropy is concentrated in the low-order bits.
  */
 static void _mix_pool_bytes(struct entropy_store *r, const void *in,
-			    int nbytes, __u8 out[64])
+			    int nbytes)
 {
-	unsigned long i, j, tap1, tap2, tap3, tap4, tap5;
+	unsigned long i, tap1, tap2, tap3, tap4, tap5;
 	int input_rotate;
 	int wordmask = r->poolinfo->poolwords - 1;
 	const char *bytes = in;
@@ -525,27 +525,23 @@ static void _mix_pool_bytes(struct entropy_store *r, const void *in,
 
 	r->input_rotate = input_rotate;
 	r->add_ptr = i;
-
-	if (out)
-		for (j = 0; j < 16; j++)
-			((__u32 *)out)[j] = r->pool[(i - j) & wordmask];
 }
 
 static void __mix_pool_bytes(struct entropy_store *r, const void *in,
-			     int nbytes, __u8 out[64])
+			     int nbytes)
 {
 	trace_mix_pool_bytes_nolock(r->name, nbytes, _RET_IP_);
-	_mix_pool_bytes(r, in, nbytes, out);
+	_mix_pool_bytes(r, in, nbytes);
 }
 
 static void mix_pool_bytes(struct entropy_store *r, const void *in,
-			   int nbytes, __u8 out[64])
+			   int nbytes)
 {
 	unsigned long flags;
 
 	trace_mix_pool_bytes(r->name, nbytes, _RET_IP_);
 	spin_lock_irqsave(&r->lock, flags);
-	_mix_pool_bytes(r, in, nbytes, out);
+	_mix_pool_bytes(r, in, nbytes);
 	spin_unlock_irqrestore(&r->lock, flags);
 }
 
@@ -737,13 +733,13 @@ void add_device_randomness(const void *buf, unsigned int size)
 
 	trace_add_device_randomness(size, _RET_IP_);
 	spin_lock_irqsave(&input_pool.lock, flags);
-	_mix_pool_bytes(&input_pool, buf, size, NULL);
-	_mix_pool_bytes(&input_pool, &time, sizeof(time), NULL);
+	_mix_pool_bytes(&input_pool, buf, size);
+	_mix_pool_bytes(&input_pool, &time, sizeof(time));
 	spin_unlock_irqrestore(&input_pool.lock, flags);
 
 	spin_lock_irqsave(&nonblocking_pool.lock, flags);
-	_mix_pool_bytes(&nonblocking_pool, buf, size, NULL);
-	_mix_pool_bytes(&nonblocking_pool, &time, sizeof(time), NULL);
+	_mix_pool_bytes(&nonblocking_pool, buf, size);
+	_mix_pool_bytes(&nonblocking_pool, &time, sizeof(time));
 	spin_unlock_irqrestore(&nonblocking_pool.lock, flags);
 }
 EXPORT_SYMBOL(add_device_randomness);
@@ -776,7 +772,7 @@ static void add_timer_randomness(struct timer_rand_state *state, unsigned num)
 	sample.cycles = random_get_entropy();
 	sample.num = num;
 	r = nonblocking_pool.initialized ? &input_pool : &nonblocking_pool;
-	mix_pool_bytes(r, &sample, sizeof(sample), NULL);
+	mix_pool_bytes(r, &sample, sizeof(sample));
 
 	/*
 	 * Calculate number of bits of randomness we probably added.
@@ -864,7 +860,7 @@ void add_interrupt_randomness(int irq, int irq_flags)
 		return;
 	}
 	fast_pool->last = now;
-	__mix_pool_bytes(r, &fast_pool->pool, sizeof(fast_pool->pool), NULL);
+	__mix_pool_bytes(r, &fast_pool->pool, sizeof(fast_pool->pool));
 
 	/*
 	 * If we have architectural seed generator, produce a seed and
@@ -872,7 +868,7 @@ void add_interrupt_randomness(int irq, int irq_flags)
 	 * 50% entropic.
 	 */
 	if (arch_get_random_seed_long(&seed)) {
-		__mix_pool_bytes(r, &seed, sizeof(seed), NULL);
+		__mix_pool_bytes(r, &seed, sizeof(seed));
 		credit += sizeof(seed) * 4;
 	}
 	spin_unlock(&r->lock);
@@ -954,7 +950,7 @@ static void _xfer_secondary_pool(struct entropy_store *r, size_t nbytes)
 				  ENTROPY_BITS(r), ENTROPY_BITS(r->pull));
 	bytes = extract_entropy(r->pull, tmp, bytes,
 				random_read_wakeup_bits / 8, rsvd_bytes);
-	mix_pool_bytes(r, tmp, bytes, NULL);
+	mix_pool_bytes(r, tmp, bytes);
 	credit_entropy_bits(r, bytes*8);
 }
 
@@ -1029,7 +1025,6 @@ static void extract_buf(struct entropy_store *r, __u8 *out)
 		unsigned long l[LONGS(20)];
 	} hash;
 	__u32 workspace[SHA_WORKSPACE_WORDS];
-	__u8 extract[64];
 	unsigned long flags;
 
 	/*
@@ -1058,15 +1053,9 @@ static void extract_buf(struct entropy_store *r, __u8 *out)
 	 * brute-forcing the feedback as hard as brute-forcing the
 	 * hash.
 	 */
-	__mix_pool_bytes(r, hash.w, sizeof(hash.w), extract);
+	__mix_pool_bytes(r, hash.w, sizeof(hash.w));
 	spin_unlock_irqrestore(&r->lock, flags);
 
-	/*
-	 * To avoid duplicates, we atomically extract a portion of the
-	 * pool while mixing, and hash one final time.
-	 */
-	sha_transform(hash.w, extract, workspace);
-	memset(extract, 0, sizeof(extract));
 	memset(workspace, 0, sizeof(workspace));
 
 	/*
@@ -1253,14 +1242,14 @@ static void init_std_data(struct entropy_store *r)
 	unsigned long rv;
 
 	r->last_pulled = jiffies;
-	mix_pool_bytes(r, &now, sizeof(now), NULL);
+	mix_pool_bytes(r, &now, sizeof(now));
 	for (i = r->poolinfo->poolbytes; i > 0; i -= sizeof(rv)) {
 		if (!arch_get_random_seed_long(&rv) &&
 		    !arch_get_random_long(&rv))
 			rv = random_get_entropy();
-		mix_pool_bytes(r, &rv, sizeof(rv), NULL);
+		mix_pool_bytes(r, &rv, sizeof(rv));
 	}
-	mix_pool_bytes(r, utsname(), sizeof(*(utsname())), NULL);
+	mix_pool_bytes(r, utsname(), sizeof(*(utsname())));
 }
 
 /*
@@ -1323,7 +1312,7 @@ static int arch_random_refill(void)
 	if (n) {
 		unsigned int rand_bytes = n * sizeof(unsigned long);
 
-		mix_pool_bytes(&input_pool, buf, rand_bytes, NULL);
+		mix_pool_bytes(&input_pool, buf, rand_bytes);
 		credit_entropy_bits(&input_pool, rand_bytes*4);
 	}
 
@@ -1413,7 +1402,7 @@ write_pool(struct entropy_store *r, const char __user *buffer, size_t count)
 		count -= bytes;
 		p += bytes;
 
-		mix_pool_bytes(r, buf, bytes, NULL);
+		mix_pool_bytes(r, buf, bytes);
 		cond_resched();
 	}
 
-- 
2.0.0



* [PATCH-v2 3/4] random: only update the last_pulled time if we actually transferred entropy
  2014-06-14  7:15 [PATCH-v2 1/4] random: always update the entropy pool under the spinlock Theodore Ts'o
  2014-06-14  7:15 ` [PATCH-v2 2/4] random: remove unneeded hash of a portion of the entropy pool Theodore Ts'o
@ 2014-06-14  7:15 ` Theodore Ts'o
  2014-06-14  7:15 ` [PATCH-v2 4/4] random: clean up interrupt entropy accounting for archs w/o cycle counters Theodore Ts'o
  2 siblings, 0 replies; 7+ messages in thread
From: Theodore Ts'o @ 2014-06-14  7:15 UTC (permalink / raw)
  To: Linux Kernel Developers List; +Cc: Theodore Ts'o, George Spelvin

In xfer_secondary_pool(), check to make sure we need to pull from the
secondary pool before checking and potentially updating the
last_pulled time.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: George Spelvin <linux@horizon.com>
---
 drivers/char/random.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index a74c92a..9a59101 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -919,6 +919,11 @@ static ssize_t extract_entropy(struct entropy_store *r, void *buf,
 static void _xfer_secondary_pool(struct entropy_store *r, size_t nbytes);
 static void xfer_secondary_pool(struct entropy_store *r, size_t nbytes)
 {
+	if (!r->pull ||
+	    r->entropy_count >= (nbytes << (ENTROPY_SHIFT + 3)) ||
+	    r->entropy_count > r->poolinfo->poolfracbits)
+		return;
+
 	if (r->limit == 0 && random_min_urandom_seed) {
 		unsigned long now = jiffies;
 
@@ -927,10 +932,8 @@ static void xfer_secondary_pool(struct entropy_store *r, size_t nbytes)
 			return;
 		r->last_pulled = now;
 	}
-	if (r->pull &&
-	    r->entropy_count < (nbytes << (ENTROPY_SHIFT + 3)) &&
-	    r->entropy_count < r->poolinfo->poolfracbits)
-		_xfer_secondary_pool(r, nbytes);
+
+	_xfer_secondary_pool(r, nbytes);
 }
 
 static void _xfer_secondary_pool(struct entropy_store *r, size_t nbytes)
-- 
2.0.0



* [PATCH-v2 4/4] random: clean up interrupt entropy accounting for archs w/o cycle counters
  2014-06-14  7:15 [PATCH-v2 1/4] random: always update the entropy pool under the spinlock Theodore Ts'o
  2014-06-14  7:15 ` [PATCH-v2 2/4] random: remove unneeded hash of a portion of the entropy pool Theodore Ts'o
  2014-06-14  7:15 ` [PATCH-v2 3/4] random: only update the last_pulled time if we actually transferred entropy Theodore Ts'o
@ 2014-06-14  7:15 ` Theodore Ts'o
  2014-06-14  7:28   ` George Spelvin
  2 siblings, 1 reply; 7+ messages in thread
From: Theodore Ts'o @ 2014-06-14  7:15 UTC (permalink / raw)
  To: Linux Kernel Developers List; +Cc: Theodore Ts'o, George Spelvin

For architectures that don't have cycle counters, the algorithm for
deciding when to avoid giving entropy credit due to back-to-back timer
interrupts didn't make any sense, since we were checking every 64
interrupts.  Change it so that we only give an entropy credit if the
majority of the interrupts are not based on the timer.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: George Spelvin <linux@horizon.com>
---
 drivers/char/random.c | 26 ++++++++++++--------------
 1 file changed, 12 insertions(+), 14 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 9a59101..60eecfc 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -548,9 +548,9 @@ static void mix_pool_bytes(struct entropy_store *r, const void *in,
 struct fast_pool {
 	__u32		pool[4];
 	unsigned long	last;
-	unsigned short	count;
+	unsigned char	count;
+	unsigned char	notimer_count;
 	unsigned char	rotate;
-	unsigned char	last_timer_intr;
 };
 
 /*
@@ -850,6 +850,8 @@ void add_interrupt_randomness(int irq, int irq_flags)
 	input[3] = ip >> 32;
 
 	fast_mix(fast_pool, input);
+	if ((irq_flags & __IRQF_TIMER) == 0)
+		fast_pool->notimer_count++;
 
 	if ((fast_pool->count & 63) && !time_after(now, fast_pool->last + HZ))
 		return;
@@ -874,19 +876,15 @@ void add_interrupt_randomness(int irq, int irq_flags)
 	spin_unlock(&r->lock);
 
 	/*
-	 * If we don't have a valid cycle counter, and we see
-	 * back-to-back timer interrupts, then skip giving credit for
-	 * any entropy, otherwise credit 1 bit.
+	 * If we have a valid cycle counter or if the majority of
+	 * interrupts collected were non-timer interrupts, then give
+	 * an entropy credit of 1 bit.  Yes, this is being very
+	 * conservative.
 	 */
-	credit++;
-	if (cycles == 0) {
-		if (irq_flags & __IRQF_TIMER) {
-			if (fast_pool->last_timer_intr)
-				credit--;
-			fast_pool->last_timer_intr = 1;
-		} else
-			fast_pool->last_timer_intr = 0;
-	}
+	if (cycles || (fast_pool->notimer_count >= 32))
+		credit++;
+
+	fast_pool->count = fast_pool->notimer_count = 0;
 
 	credit_entropy_bits(r, credit);
 }
-- 
2.0.0



* Re: [PATCH-v2 4/4] random: clean up interrupt entropy accounting for archs w/o cycle counters
  2014-06-14  7:15 ` [PATCH-v2 4/4] random: clean up interrupt entropy accounting for archs w/o cycle counters Theodore Ts'o
@ 2014-06-14  7:28   ` George Spelvin
  2014-06-14 16:00     ` Theodore Ts'o
  0 siblings, 1 reply; 7+ messages in thread
From: George Spelvin @ 2014-06-14  7:28 UTC (permalink / raw)
  To: linux-kernel, tytso; +Cc: linux

+	if (cycles || (fast_pool->notimer_count >= 32))
+		credit++;

Ah, this addresses my concern about too few interrupts, too.  If the
(non-timer) interrupt rate is less than 32/second, you'll never get any
credit.

(If you want to support this mode of operation and still have a non-zero
credit rate, move the clear of notimer_count into this condition.  Then
you get 1 bit per 32 non-timer interrupts no matter how slow.)


* Re: [PATCH-v2 4/4] random: clean up interrupt entropy accounting for archs w/o cycle counters
  2014-06-14  7:28   ` George Spelvin
@ 2014-06-14 16:00     ` Theodore Ts'o
  2014-06-14 16:23       ` George Spelvin
  0 siblings, 1 reply; 7+ messages in thread
From: Theodore Ts'o @ 2014-06-14 16:00 UTC (permalink / raw)
  To: George Spelvin; +Cc: linux-kernel

On Sat, Jun 14, 2014 at 03:28:49AM -0400, George Spelvin wrote:
> +	if (cycles || (fast_pool->notimer_count >= 32))
> +		credit++;
> 
> Ah, this addresses my concern about too few interrupts, too.  If the
> (non-timer) interrupt rate is less than 32/second, you'll never get any
> credit.

I'll want to measure the interrupt rate on things like a mobile
handset to see if this is a real problem or not, but the real question
is: if you don't have a cycle counter, and the system is largely idle,
*and* all of the clocks are driven off of the same master oscillator,
how much entropy do you really get from measuring interrupt timings
when your time measurement has a granularity of 1/HZ seconds?

Basically, at that point, you're getting most of your entropy from
instruction_pointer(regs), and whatever the value of irq is --- and if
irq is mostly TIMER_IRQ, there's not much entropy there either.

Also note that the question is not whether the non-timer interrupt
rate is less than 32 per second, but rather, out of the last 64
interrupts, how many of them came from non-timer sources?
That's not the same thing, especially if you are running in tickless
mode, which most modern kernels for mobile handsets would want to do
for the obvious power savings reason.  Indeed the main concern on most
mobile handsets is that there aren't that many interrupts to begin
with, because they've been optimized out as much as possible.

The real answer is that ARM manufacturers have to get off their !@#!@?
duff and give us either a real clock cycle counter, or a real hardware
random number generator, or both...

						- Ted


* Re: [PATCH-v2 4/4] random: clean up interrupt entropy accounting for archs w/o cycle counters
  2014-06-14 16:00     ` Theodore Ts'o
@ 2014-06-14 16:23       ` George Spelvin
  0 siblings, 0 replies; 7+ messages in thread
From: George Spelvin @ 2014-06-14 16:23 UTC (permalink / raw)
  To: linux, tytso; +Cc: linux-kernel

I agree with your points, with one exception.  Which may be my
misunderstanding.

> Also note that the question is not whether the non-timer interrupt
> rate is less than 32 seconds, but rather out of the last 64
> interrupts, how many of the interrupts come from non-timer sources?
> That's not the same thing, especially if you are running in tickless
> mode, which most modern kernels for mobile handsets would want to do
> for the obvious power savings reason.  Indeed the main concern on most
> mobile handsets is that there aren't that many interrupts to begin
> with, because they've been optimized out as much as possible.

When you say "the question is", do you mean that's what you meant
the code to do?  Because that's not what it does right now.

The condition for not spilling is

	if ((fast_pool->count & 63) && !time_after(now, fast_pool->last + HZ))
		return;

In other words, spill if there have been 64 samples *or* 1 second since
the last spill.

> The real answer is that ARM manufacuters have to get off their !@#!@?
> duff and give us either a real clock cycle counter, or a real hardware
> randum number generator, or both...

I've thought of beating the RTC against the main oscillator.
But while I know a lot of SoCs have a battery-backed RTC, I don't
know how universal an RTC is.

The other nice source is an otherwise unused ADC.  Even if the input
is shorted out, there's lsbit noise.  But again, not everything has
an audio ADC.

