* Huge amount of randomness with cuse and "urandompar"
From: Michael Büsch @ 2011-10-14 20:38 UTC
To: lkml
Hi there,
from time to time I need huge amounts of reasonably strong random
data. I want to use /dev/urandom for this, as it seems to have received
enough testing, has various good sources of entropy, and is well
supported by tools such as rngd.
However, /dev/urandom does not seem to benefit at all from multicore systems.
Large parts of the urandom read codepath in the kernel seem to be completely
lockless, though. So my basic idea was to throw a lot of reader threads at it
while preserving the convenient character-device interface, which led
to this proof-of-concept CUSE project:
http://bues.ch/gitweb?p=urandompar.git;a=summary
git://git.bues.ch/urandompar.git
From a first look it doesn't seem too bad, performance-wise.
It currently seems to scale well enough up to 4 CPUs.
Here's a simple chart of a trivial benchmark on a 6-core CPU:
http://bues.ch/misc/urandomparscale.pdf
I haven't profiled it with oprofile yet, but my guess is that it
runs more and more into entropy_store spinlock contention with >4 CPUs.
This still has to be verified, though.
Some open questions remain: What are the implications for data quality?
Does this massive parallelism affect the urandom algorithms in any way?
Can the bottleneck that prevents it from scaling properly be fixed?
Searching for testers, comments and answers. :)
--
Greetings, Michael.