From: Andrew Morton <akpm@digeo.com>
To: "Theodore Ts'o" <tytso@mit.edu>
Cc: lkml <linux-kernel@vger.kernel.org>
Subject: Re: the random driver
Date: Wed, 20 Nov 2002 11:35:52 -0800
Message-ID: <3DDBE418.CDD874FF@digeo.com>
In-Reply-To: <20021120162757.GA1922@think.thunk.org>

Theodore Ts'o wrote:
>
> On Tue, Nov 19, 2002 at 11:46:53PM -0800, Andrew Morton wrote:
> > a) It's racy. The head and tail pointers have no SMP protection
> > and a race will cause it to dump 128 already-processed items
> > back into the entropy pool.
>
> Yeah, that's a real problem. The random driver was never adequately
> locked for the SMP case. We also have a problem on the output side;
> two processes that read from /dev/random at the same time can get the
> exact same value. This is **bad**, especially if it is being used for
> UUID generation or for session key generation.
It was pointed out (alleged?) to me that the lack of input-side locking is
a feature - if the SMP race hits, it adds unpredictability.
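For illustration, here's a minimal userspace sketch of what serialized
input-side batching might look like. The names (batch_add, batch_lock,
entropy_sample) and the pthread mutex are mine, standing in for the kernel's
own primitives - this is not the driver's actual code:

```c
#include <pthread.h>
#include <stdint.h>

#define BATCH_SIZE 128

/* A 2-element structure instead of pairs at 2*i / 2*i+1. */
struct entropy_sample { uint32_t a, b; };

static struct entropy_sample batch_pool[BATCH_SIZE];
static unsigned batch_head, batch_tail;
static pthread_mutex_t batch_lock = PTHREAD_MUTEX_INITIALIZER;

/* Add one sample. Head/tail updates are serialized, so a race can no
 * longer replay 128 already-processed items back into the pool.
 * Returns 1 on success, 0 if the batch is full. */
int batch_add(uint32_t a, uint32_t b)
{
    int ok = 0;

    pthread_mutex_lock(&batch_lock);
    unsigned next = (batch_head + 1) % BATCH_SIZE;
    if (next != batch_tail) {
        batch_pool[batch_head] = (struct entropy_sample){ a, b };
        batch_head = next;
        ok = 1;
    }
    pthread_mutex_unlock(&batch_lock);
    return ok;
}
```

In the kernel proper this would be a spinlock taken with interrupts
disabled, since the producer runs in interrupt context.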
> ...
> > b) It's weird. What's up with this?
> >
> > batch_entropy_pool[2*batch_head] = a;
> > batch_entropy_pool[(2*batch_head) + 1] = b;
> >
> > It should be an array of 2-element structures.
>
> The entropy returned by the drivers is essentially just an arbitrary
> 64 bit value. It's treated as two 32 bit values so that we don't lose
> horribly given GCC's pathetic 64-bit code generator for the ia32
> platform.
heh, I see. Presumably u64 loads and stores would be OK though?
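To make the point concrete: a two-u32 struct is layout-compatible with the
flat "2*batch_head / 2*batch_head + 1" scheme, and a u64 copy of the pair
round-trips cleanly through its memory image, so the cleaner representation
costs nothing. A small sketch (pack_pair/unpack_pair are hypothetical
helpers, not driver code):

```c
#include <stdint.h>
#include <string.h>

struct sample { uint32_t a, b; };

/* Copy a pair's memory image into one 64-bit value. */
uint64_t pack_pair(uint32_t a, uint32_t b)
{
    struct sample s = { a, b };
    uint64_t v;

    memcpy(&v, &s, sizeof v);
    return v;
}

/* Recover the struct from the 64-bit memory image. */
struct sample unpack_pair(uint64_t v)
{
    struct sample s;

    memcpy(&s, &v, sizeof s);
    return s;
}
```

Because it only stores and reloads the image (never interprets the u64
arithmetically), this works identically on little- and big-endian machines.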
> > d) It's punting work up to process context which could be performed
> > right there in interrupt context.
>
> The idea was to try to pacify the soft-realtime nazis who are
> stressing out over every single microsecond of interrupt latency.
> Realistically, it's about a dozen memory cache misses, so it's
> not *that* bad. Originally, though, the batched work was done in
> a bottom-half handler, so there wasn't a process context switch
> overhead. So perhaps we should rethink the design decision of
> deferring the work in the interests of reducing interrupt latency.
That would suit. If you go this way, the batching is probably
detrimental - it would increase peak latencies. We could do the
work directly in the interrupt handler, or schedule a softirq.
I think what bit us in 2.5 was the HZ=1000 change - with HZ=100
the context switch rate would be lower. But yes, using a workqueue
here seems inappropriate.
The whole idea of scheduling the work on the calling CPU is a
little inappropriate in this case. I have one CPU working hard
and three idle. Yet the deferred work and all the context
switching is being performed on the busy CPU. hmm.
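Doing the mixing directly at sample time, with no deferred work at all,
could look something like this sketch. The mix function here is a cheap
stand-in of my own, not the driver's actual pool arithmetic:

```c
#include <stdint.h>

/* Stand-in pool state; the real driver keeps a much larger pool. */
static uint32_t pool_state;

/* A few rotate-and-xor operations per sample: cheap enough to run in
 * the interrupt handler itself, with no batching queue and no context
 * switch onto the busy CPU. */
static void mix_sample(uint32_t a, uint32_t b)
{
    pool_state = (pool_state << 7 | pool_state >> 25) ^ a;
    pool_state = (pool_state << 7 | pool_state >> 25) ^ b;
}

/* In this sketch, called straight from the interrupt handler. */
void add_timing_entropy(uint32_t cycles, uint32_t irq)
{
    mix_sample(cycles, irq);
}
```

The latency question then reduces to how many cache misses the real mix
takes, which per the above is around a dozen.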
Thread overview: 7+ messages
2002-11-20 7:46 the random driver Andrew Morton
2002-11-20 8:13 ` Aaron Lehmann
2002-11-20 20:44 ` Oliver Xymoron
2002-11-20 12:04 ` Ingo Oeser
2002-11-20 16:27 ` Theodore Ts'o
2002-11-20 19:35 ` Andrew Morton [this message]
2002-11-20 20:42 ` Oliver Xymoron