public inbox for linux-kernel@vger.kernel.org
* Entropy from disks
@ 2002-10-29 17:10 Oliver Xymoron
  2002-10-29 18:02 ` Chris Friesen
  0 siblings, 1 reply; 5+ messages in thread
From: Oliver Xymoron @ 2002-10-29 17:10 UTC (permalink / raw)
  To: Alexander Viro; +Cc: linux-kernel

I just noticed:

http://linux.bkbits.net:8080/linux-2.5/cset@1.858?nav=index.html|ChangeSet@-2d

Since you're touching the disk entropy code, please read this, which
is one of the few papers on the subject:

http://world.std.com/~dtd/random/forward.ps

The current Linux PRNG is playing fast and loose here, adding entropy
based on the resolution of the TSC, while the physical turbulence
processes that actually produce entropy are happening at a scale of
seconds. On a GHz processor, if it takes 4 microseconds to return a
disk result from on-disk cache, /dev/random will get a 12-bit credit.

My entropy patches had each entropy source (not just disks) allocate
its own state object, and declare its timing "granularity".

There's a further problem with disk timing samples that makes them less
than useful in typical headless server use (i.e., where it matters
most): the server does its best to reveal disk latency to its clients,
and that latency is easily measurable within the auto-correlating
domain of disk turbulence.

-- 
 "Love the dolphins," she advised him. "Write by W.A.S.T.E.." 

* Re: Entropy from disks
@ 2002-11-06  0:27 Oliver Xymoron
  0 siblings, 0 replies; 5+ messages in thread
From: Oliver Xymoron @ 2002-11-06  0:27 UTC (permalink / raw)
  To: David Schwartz; +Cc: cfriesen, Alexander Viro, linux-kernel

On Tue, Nov 05, 2002 at 03:55:57PM -0800, David Schwartz wrote:
> 
> On Tue, 29 Oct 2002 13:02:39 -0500, Chris Friesen wrote:
> >Oliver Xymoron wrote:
> 
> >I'm not an expert in this field, so bear with me if I make any blunders
> >obvious to one trained in information theory.
> 
> >>The current Linux PRNG is playing fast and loose here, adding entropy
> >>based on the resolution of the TSC, while the physical turbulence
> >>processes that actually produce entropy are happening at a scale of
> >>seconds. On a GHz processor, if it takes 4 microseconds to return a
> >>disk result from on-disk cache, /dev/random will get a 12-bit credit.
> 
> >In the paper the accuracy of measurement is 1ms.  Current hardware has
> >tsc precision of nanoseconds, or about 6 orders of magnitude more
> >accuracy.  Doesn't this mean that we can pump in many more bits into the
> >algorithm and get out many more than the 100bits/min that the setup in
> >the paper achieves?
> 
> 	In theory, if there's any real physical randomness in a timing source, the 
> more accuracy you measure the timing to, the more bits you get.

Not if the timing source is clocked substantially slower than the
measurement clock and is phase-locked to it. Which is the case here.

-- 
 "Love the dolphins," she advised him. "Write by W.A.S.T.E.." 


end of thread [~2002-11-06  0:20 UTC | newest]

Thread overview: 5+ messages
-- links below jump to the message on this page --
2002-10-29 17:10 Entropy from disks Oliver Xymoron
2002-10-29 18:02 ` Chris Friesen
2002-10-29 19:08   ` Oliver Xymoron
2002-11-05 23:55   ` David Schwartz
  -- strict thread matches above, loose matches on Subject: below --
2002-11-06  0:27 Oliver Xymoron
