* [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
@ 2004-09-24 0:59 linux
2004-09-24 2:34 ` Jean-Luc Cooke
2004-09-29 17:10 ` [PROPOSAL/PATCH 2] " Jean-Luc Cooke
0 siblings, 2 replies; 28+ messages in thread
From: linux @ 2004-09-24 0:59 UTC (permalink / raw)
To: jlcooke; +Cc: linux-kernel
Fortuna is an attempt to avoid the need for entropy estimation.
It doesn't do a perfect job. And I don't think it's received enough
review to be "regarded as the state of the art".
Entropy estimation is very difficult, but not doing it leads to problems.
Bruce Schneier's "catastrophic reseeding" ideas have some merit. If,
for some reason, the state of your RNG pool has been captured, then
adding one bit of seed material doesn't hurt an attacker who can look
at the output and brute-force that bit.
Thus, once you've lost security, you never regain it. If you save up,
say, 128 bits of seed material and add it all at once, your attacker
can't brute-force it.
/dev/random tries to solve this by never letting anyone see more output
than there is seed material input. So regardless of the initial state
of the pool, an attacker can never get enough output to compute a unique
solution to the seed material question. (See "unicity distance".)
However, this requires knowing the entropy content of the input, which is
a hard thing to measure.
The whole issue of catastrophic reseeding applies to output-larger-than-key
generators like /dev/urandom (that use cryptographic primitives to produce
more output than seed material input).
Here's an example of how Fortuna's design fails.
Suppose we have a source which produces 32-bit samples, which are
guaranteed to contain 1 bit of new entropy per sample. We should be
able to feed that into Fortuna and have a good RNG, right? Wrong.
Suppose that each time you sample the source, it adds one bit to a 32-bit
shift register, and gives you the result. So sample[0] shares 31 bits
with sample[1], 30 bits with sample[2], etc.
Now, suppose that we add samples to 32 buckets in round-robin order,
and dump bucket[i] into the pool every 2^i rounds. Further,
assume that our attacker can query the pool's output and brute-force 32
bits of seed material. In the following, "+=" is some cryptographic
mixing primitive, not literal addition.
Pool: Initial state known to attacker (e.g. empty)
Buckets: Initial state known to attacker (e.g. empty)
bucket[0] += sample[0]; pool += bucket[0]
-> attacker can query the pool and brute-force compute sample[0].
bucket[1] += sample[1] (= sample[0] << 1 | sample[32] >> 31)
bucket[2] += sample[2] (= sample[0] << 2 | sample[32] >> 30)
...
bucket[31] += sample[31] (= sample[0] << 31 | sample[32] >> 1)
bucket[0] += sample[32]; pool += bucket[0]
-> attacker can query the pool and brute-force compute sample[32].
-> Attacker now knows sample[1] through sample[31]
-> Attacker now knows bucket[1] through bucket[31].
Note that the attacker now knows the value of sample[1] through sample[31] and
thus the state of all the buckets, and can continue tracking the pool's
state indefinitely:
bucket[1] += sample[33]; pool += bucket[1]
-> attacker can query the pool and brute-force compute sample[33].
etc.
This shift register behaviour should be obvious, but suppose that sample[i]
is put through an encryption (known to the attacker) before being presented.
You can't tell that you're being fed cooked data, but the attack works just the
same.
Now, this is, admittedly, a highly contrived example, but it shows that
Fortuna does not completely achieve its stated design goal of
catastrophic reseeding after having received some constant times the
necessary entropy as seed material. Its round-robin structure makes it
vulnerable to serial correlations in the input seed material. If they're
bad enough, its security can be completely destroyed. What *are* the
requirements for it to be secure? I don't know.
All I know is that it hasn't been analyzed well enough to be a panacea.
(The other thing I don't care for is the limited size of the
entropy pools. I like the "big pool" approach. Yes, 256 bits is
enough if everything works okay, but why take chances? But that's a
philosophical/style/gut feel argument more than a really technical one.)
I confess I haven't dug into the /dev/{,u}random code lately. The various
problems with low-latency random numbers needed by the IP stack suggest
that perhaps a faster PRNG would be useful in-kernel. If so, there may
be a justification for an in-kernel PRNG fast enough to use for disk
overwriting or the like. (As people persist in using /dev/urandom for,
even though it's explicitly not designed for that.)
^ permalink raw reply [flat|nested] 28+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-24 0:59 [PROPOSAL/PATCH] Fortuna PRNG in /dev/random linux
@ 2004-09-24 2:34 ` Jean-Luc Cooke
  2004-09-24 6:19   ` linux
  2004-09-24 21:42   ` linux
  2004-09-29 17:10 ` [PROPOSAL/PATCH 2] " Jean-Luc Cooke
  1 sibling, 2 replies; 28+ messages in thread
From: Jean-Luc Cooke @ 2004-09-24 2:34 UTC (permalink / raw)
  To: linux; +Cc: linux-kernel, cryptoapi, jmorris, tytso

"linux",

The Fortuna patch I've submitted tries to achieve this "more than 256
bits per pool" by carrying forward the digest output to the next pool.
Stock Fortuna does not carry forward digest output from previous
iterations.

reseed:
  reseedCount++;
  for (i=0..31) {
    if (2^i is a factor of reseedCount) {
      hash_final(pool[i], dgst);
      hash_init(pool[i]);
      hash_update(pool[i], dgst);  // my addition
      ...
    }
  }
  ...

Considering each pool has 256 bits of digest output, and there are 32
pools, this gives about 8192 bits for the pool size. Far greater than
the current design. If you extremely pessimistically consider the
probability of drawing pool j to be 1/2 that of pool j-1, then it's a
512-bit RNG.

But I'd like to talk to your attack for a second. I'd argue that it is
valid for the current /dev/random, and for Yarrow with entropy
estimators, as well. I agree that if the state is known by an active
attacker, then a trickle of entropy into Fortuna, compared to the
output gathered by an attacker, would make for an argument that
"Fortuna doesn't have it right." And no matter what PRNG engine you put
between the attacker and the random sources, there is no solution other
than accurate entropy measurement (*not* estimation).

However, this places the security of the system in the hands of the
entropy estimator. If it is too liberal, we have nearly the same
situation as with Fortuna. As much as I rely on Ted's work every day
for the smooth running of my machine, I can't concede to the notion
that Ted got it right.

Fortuna, I'd argue, reduces the attack on the PRNG to that of the base
crypto primitives, the randomness of the events, and the rate at which
data is output by /dev/random. This holds true for the current
/dev/random except:

1) The crypto primitives do not pass test vectors, and the input mixing
   function is linear.

2) The randomness of the events can only be estimated; their true
   randomness requires analysis of the hardware device itself... not
   feasible considering all the possible IRQ sources, mice, and hard
   disks that Linux drives.

3) Following on (2) above, the output rate of /dev/random is directly
   related to the estimated randomness.

If you have ideas on how to make a PRNG that can more closely tie
output rate to input events and survive state compromise attacks
(backtracking, forward secrecy, etc.) then please drop anonymity and
contact me at my email address. Perhaps a collaboration is possible.

Cheers,

JLC

On Fri, Sep 24, 2004 at 12:59:38AM -0000, linux@horizon.com wrote:
> Fortuna is an attempt to avoid the need for entropy estimation.
> It doesn't do a perfect job. And I don't think it's received enough
> review to be "regarded as the state of the art".
> [...]

^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-24 2:34 ` Jean-Luc Cooke
@ 2004-09-24 6:19   ` linux
  2004-09-24 21:42   ` linux
  1 sibling, 0 replies; 28+ messages in thread
From: linux @ 2004-09-24 6:19 UTC (permalink / raw)
  To: jlcooke; +Cc: cryptoapi, jmorris, linux-kernel, tytso

BTW, you write:
> It is regarded in crypto circles as the current state-of-the-art
> in cryptographically secure PRNGs.

The question this brings to mind is: It is? Can you point me to a
single third-party paper on the subject? There's nothing in the IACR
preprint archive. Nor CiteSeer.

The big difference between when /dev/random was designed and today:

- USB is a broadcast bus, and a lot (timing, at least) can be sniffed
  by a small dongle. Wireless keyboards and mice are popular. That
  sort of user data probably shouldn't be trusted any more. (No harm
  mixing it in, just in case it is good, but accord it zero weight.)

- Clock speeds are a *lot* higher (> 1 GHz) and the timestamp counter
  is almost universally available. Even an attacker with multiple
  antennas pointed at the computer is going to have a hard time
  figuring out on which tick of the clock an interrupt arrived even if
  they can see it. Thus, the least-significant bits of the TSC are
  useful entropy on *every* interrupt, timer included.

For a fun exercise, install a kernel hack to capture the TSC on every
timer interrupt. Run it for a while on an idle system (processor in
the halt state, waiting for interrupts on a cycle-by-cycle basis).
Take the resultant points, subtract the best-fit line, and throw out
any outliers caused by delayed interrupts. Now do some statistical
analysis of the residue. How much entropy do you have from the timer
interrupt? Does it look random? How many lsbits can you take and still
pass Marsaglia's DIEHARD suite? Do any patterns show up in an FFT?

^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-24 2:34 ` Jean-Luc Cooke
  2004-09-24 6:19   ` linux
@ 2004-09-24 21:42   ` linux
  2004-09-25 14:54     ` Jean-Luc Cooke
  1 sibling, 1 reply; 28+ messages in thread
From: linux @ 2004-09-24 21:42 UTC (permalink / raw)
  To: jlcooke; +Cc: cryptoapi, jmorris, linux-kernel, tytso

> What if I told you the SHA-1 implementation in random.c right now is
> weaker than those hashes in terms of collisions? The lack of padding
> in the implementation is the cause. HASH("a\0\0\0\0...") == HASH("a")
> There are billions of other examples.

EXCUSE me? You're a little unclear, so I don't want to be attacking
strawmen of my own devising, but are you claiming the failure to do
Merkle-Damgaard padding in the output mixing operation of /dev/random
is a WEAKNESS?

If true, this is a level of cluelessness incompatible with being
trusted to design decent crypto. The entire purpose of Merkle-Damgaard
padding (also known as Merkle-Damgaard strengthening) is to include the
length in the data hashed, to make hashing variable-sized messages as
secure as fixed-size messages. If what you are hashing is, by design,
always fixed-length, this is completely unnecessary.

If I were designing a protocol for message interchange, I might add the
padding anyway, just to use pre-existing primitives easily, but for a
100% internal use like a PRNG, let's see... I can reduce code size AND
implementation complexity AND run time without ANY security
consequences, and there are no interoperability issues... I could argue
it's a design flaw to *include* the padding.

^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-24 21:42   ` linux
@ 2004-09-25 14:54     ` Jean-Luc Cooke
  2004-09-25 18:43       ` Theodore Ts'o
  2004-09-26 2:31       ` linux
  0 siblings, 2 replies; 28+ messages in thread
From: Jean-Luc Cooke @ 2004-09-25 14:54 UTC (permalink / raw)
  To: linux; +Cc: jmorris, cryptoapi, tytso, linux-kernel

On Fri, Sep 24, 2004 at 09:42:30PM -0000, linux@horizon.com wrote:
> > What if I told you the SHA-1 implementation in random.c right now is
> > weaker than those hashes in terms of collisions? The lack of padding
> > in the implementation is the cause. HASH("a\0\0\0\0...") == HASH("a")
> > There are billions of other examples.
>
> EXCUSE me?
...
> I could argue it's a design flaw to *include* the padding.

I was trying to point out a flaw in Ted's logic. He said "we've
recently discovered these hashes are weak because we found collisions.
Current /dev/random doesn't care about this."

I certainly wasn't saying padding was a requirement. But I was trying
to point out that the SHA-1 implementation currently in /dev/random by
design is collision vulnerable. Collision resistance isn't a
requirement for its purposes, obviously.

Guess my pointing this out is a lost cause.

JLC

^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-25 14:54     ` Jean-Luc Cooke
@ 2004-09-25 18:43       ` Theodore Ts'o
  2004-09-26 1:42         ` Jean-Luc Cooke
  2004-09-26 2:31       ` linux
  1 sibling, 1 reply; 28+ messages in thread
From: Theodore Ts'o @ 2004-09-25 18:43 UTC (permalink / raw)
  To: Jean-Luc Cooke; +Cc: linux, jmorris, cryptoapi, linux-kernel

On Sat, Sep 25, 2004 at 10:54:44AM -0400, Jean-Luc Cooke wrote:
>
> I was trying to point out a flaw in Ted's logic. He said "we've
> recently discovered these hashes are weak because we found collisions.
> Current /dev/random doesn't care about this."
>
> I certainly wasn't saying padding was a requirement. But I was trying
> to point out that the SHA-1 implementation currently in /dev/random by
> design is collision vulnerable. Collision resistance isn't a
> requirement for its purposes, obviously.

You still haven't shown the flaw in the logic. My point is that an
over-reliance on crypto primitives is dangerous, especially given
recent developments. Fortuna relies on the crypto primitives much
more than /dev/random does. Ergo, if you consider weaknesses in
crypto primitives to be a potential problem, then it might be
reasonable to take a somewhat more jaundiced view towards Fortuna
compared with other alternatives.

Whether or not /dev/random performs the SHA finalization step (which
adds the padding and the length to the hash) is completely and totally
irrelevant to this particular line of reasoning.

And actually, not doing the padding does not make the crypto hash
vulnerable to collisions, as you claim. This is because in /dev/random,
we are always using the full block size of the crypto hash. It is true
that it is vulnerable to extension attacks, but that's irrelevant to
this particular usage of the SHA-1 round function.

Whether or not we should trust the design of something as critical to
the security of security applications as /dev/random to someone who
fails to grasp the difference between these two rather basic issues is
something I will leave to the others on LKML.

						- Ted

^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-25 18:43       ` Theodore Ts'o
@ 2004-09-26 1:42         ` Jean-Luc Cooke
  2004-09-26 5:23           ` Theodore Ts'o
  2004-09-26 6:46           ` linux
  0 siblings, 2 replies; 28+ messages in thread
From: Jean-Luc Cooke @ 2004-09-26 1:42 UTC (permalink / raw)
  To: Theodore Ts'o, linux, jmorris, cryptoapi, linux-kernel

On Sat, Sep 25, 2004 at 02:43:52PM -0400, Theodore Ts'o wrote:
> You still haven't shown the flaw in the logic. My point is that an
> over-reliance on crypto primitives is dangerous, especially given
> recent developments. Fortuna relies on the crypto primitives much
> more than /dev/random does. Ergo, if you consider weaknesses in
> crypto primitives to be a potential problem, then it might be
> reasonable to take a somewhat more jaundiced view towards Fortuna
> compared with other alternatives.

Correct me if I'm wrong here.

You claimed that the collision techniques found for the UFN design
hashes (sha0, md4, md5, haval, ripemd) demonstrated the need to not
rely on hash algorithms for a RNG. Right?

And I showed that the SHA-1 in random.c now can produce collisions. So,
if your argument against the fallen UFN hashes above applies (SHA-1 is
a UFN hash also, btw; we can probably expect more announcements from
the crypto community in early 2005), should it not apply to SHA-1 in
random.c?

Or did I misunderstand you? Were you just mentioning the weakened
algorithms as a "what if they were more serious discoveries? Wouldn't
it be nice if we didn't rely on them?" ?

The decision to place trust in an entropy estimation scheme vs. a
crypto algorithm we have different views on. I can live with that.

> Whether or not /dev/random performs the SHA finalization step (which
> adds the padding and the length to the hash) is completely and totally
> irrelevant to this particular line of reasoning.

I "completely and totally" agree. I'm pointing out that no added
padding makes me, the new guy reading your code, work harder to decide
if it's a weakness. You shouldn't do that to people if you can avoid
it. Just like you shouldn't obfuscate code, even if it doesn't "weaken"
its implementation. It's just rude. Take the performance penalty to
avoid scaring people away from a very important piece of the kernel.

> ... Whether or not we should trust the design of something as
> critical to the security of security applications as /dev/random to
> someone who fails to grasp the difference between these two rather
> basic issues is something I will leave to the others on LKML.

... biting my tongue ... so hard it bleeds ...

The quantitative aspects of the two RNGs in question are not being
discussed. It's the qualitative aspects we do not see eye to eye on.
So I will no longer suggest replacing the status quo. I'd like to
submit a patch to let users choose at compile time under Cryptographic
options whether to drop in Fortuna.

Ted, can we leave it at this?

JLC

^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-26 1:42         ` Jean-Luc Cooke
@ 2004-09-26 5:23           ` Theodore Ts'o
  2004-09-27 0:50             ` linux
  2004-09-26 6:46           ` linux
  1 sibling, 1 reply; 28+ messages in thread
From: Theodore Ts'o @ 2004-09-26 5:23 UTC (permalink / raw)
  To: Jean-Luc Cooke; +Cc: linux, jmorris, cryptoapi, linux-kernel

On Sat, Sep 25, 2004 at 09:42:18PM -0400, Jean-Luc Cooke wrote:
> On Sat, Sep 25, 2004 at 02:43:52PM -0400, Theodore Ts'o wrote:
> > You still haven't shown the flaw in the logic. My point is that an
> > over-reliance on crypto primitives is dangerous, especially given
> > recent developments. Fortuna relies on the crypto primitives much
> > more than /dev/random does. Ergo, if you consider weaknesses in
> > crypto primitives to be a potential problem, then it might be
> > reasonable to take a somewhat more jaundiced view towards Fortuna
> > compared with other alternatives.
>
> Correct me if I'm wrong here.
>
> You claimed that the collision techniques found for the UFN design
> hashes (sha0, md4, md5, haval, ripemd) demonstrated the need to not
> rely on hash algorithms for a RNG. Right?

For Fortuna, correct. This is why I believe /dev/random's current
design to be superior.

> And I showed that the SHA-1 in random.c now can produce collisions. So,
> if your argument against the fallen UFN hashes above applies (SHA-1 is
> a UFN hash also, btw; we can probably expect more announcements from
> the crypto community in early 2005), should it not apply to SHA-1 in
> random.c?

(1) Your method of "producing collisions" assumed that /dev/random was
of the form HASH("a\0\0\0...") == HASH("a") --- i.e., you were
kvetching about the lack of padding. But we've already agreed that the
padding argument isn't applicable for /dev/random, since it only hashes
whole blocks at a time.

(2) Even if there were real collisions demonstrated in SHA-1's
cryptographic core at some point in the future, it wouldn't harm the
security of the algorithm, since /dev/random doesn't depend on SHA-1
being resistant against collisions. (Similarly, HMAC-MD5 is still safe
for now since it also is designed such that the ability to find
collisions does not harm its security. It's a matter of how you use the
cryptographic primitives.)

> Or did I misunderstand you? Were you just mentioning the weakened
> algorithms as a "what if they were more serious discoveries? Wouldn't
> it be nice if we didn't rely on them?" ?

That's correct. It is my contention that Fortuna is brittle in this
regard, especially in comparison to /dev/random's current design.

And you still haven't pointed out the logic flaw in any argument but
your own.

						- Ted

^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-26 5:23           ` Theodore Ts'o
@ 2004-09-27 0:50             ` linux
  2004-09-27 13:07              ` Jean-Luc Cooke
  2004-09-27 14:23              ` Theodore Ts'o
  0 siblings, 2 replies; 28+ messages in thread
From: linux @ 2004-09-27 0:50 UTC (permalink / raw)
  To: jlcooke; +Cc: cryptoapi, jmorris, linux-kernel, linux, tytso

>> This, I do not recall. I must have missed it. Will you please show me
>> two inputs that, when fed to the SHA-1 in random.c, will produce
>> identical output?

> SHA-1 without padding, sure.
> hash("a") = hash("a\0") = hash("a\0\0") = ...
> hash("b") = hash("b\0") = hash("b\0\0") = ...
> hash("c") = hash("c\0") = hash("c\0\0") = ...

And how do I hash one byte with SHA-1 *without padding*? The only
hashing code I can find in random.c works 64 bytes at a time. What are
the other 63 bytes?

(I agree that that *naive* padding leads to collisions, but random.c
doesn't do ANY padding.)

> I see. And in the -mm examples, is the code easily readable for other
> os-MemMgt types? If no, then I guess random.c is not the exception and I
> apologize.

The Linux core -mm code is a fairly legendary piece of Heavy Wizardry.
To paraphrase, "do not meddle in the affairs of /usr/src/linux/mm/, for
it is subtle and quick to anger." There *are* people who understand it,
and it *is* designed (not a decaying pile of old hacks that *nobody*
understands how it works like some software), but it's also a
remarkably steep learning curve. A basic overview isn't so hard to
acquire, but the locking rules have subtle details. There are places
where someone very good noticed that a given lock doesn't have to be
taken on a fast path if you avoid doing certain things anywhere else
that you'd think would be legal.

And so if someone tries to add code to do the "obvious" thing, the
lock-free fast path develops a race condition. And we all know what fun
race conditions are to debug.

Fortunately, some people see this as a challenge and Linux is blessed
with some extremely skilled VM hackers. And some of them even write and
publish books on the subject. But while a working VM system can be
clear, making it go fast leads to a certain amount of tension with the
clarity goal.

> And the ring-buffer system which delays the expensive mixing stages
> until a soft interrupt does a great job (current and my fortuna-patch).
> Difference being, fortuna-patch appears to be 2x faster.

Ooh, cool! Must play with to steal the speed benefits. Thank you!

^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-27 0:50             ` linux
@ 2004-09-27 13:07              ` Jean-Luc Cooke
  2004-09-27 14:23              ` Theodore Ts'o
  1 sibling, 0 replies; 28+ messages in thread
From: Jean-Luc Cooke @ 2004-09-27 13:07 UTC (permalink / raw)
  To: linux; +Cc: jmorris, cryptoapi, tytso, linux-kernel

On Mon, Sep 27, 2004 at 12:50:33AM -0000, linux@horizon.com wrote:
> > SHA-1 without padding, sure.
>
> > hash("a") = hash("a\0") = hash("a\0\0") = ...
> > hash("b") = hash("b\0") = hash("b\0\0") = ...
> > hash("c") = hash("c\0") = hash("c\0\0") = ...
>
> And how do I hash one byte with SHA-1 *without padding*? The only
> hashing code I can find in random.c works 64 bytes at a time.
> What are the other 63 bytes?
>
> (I agree that that *naive* padding leads to collisions, but random.c
> doesn't do ANY padding.)

And I guess it is my fault to assume "no padding" is naive padding.

> > I see. And in the -mm examples, is the code easily readable for other
> > os-MemMgt types? If no, then I guess random.c is not the exception and I
> > apologize.
>
> The Linux core -mm code is a fairly legendary piece of Heavy Wizardry.
> To paraphrase, "do not meddle in the affairs of /usr/src/linux/mm/, for
> it is subtle and quick to anger." There *are* people who understand it,
> and it *is* designed (not a decaying pile of old hacks that *nobody*
> understands how it works like some software), but it's also a
> remarkably steep learning curve. A basic overview isn't so hard to
> acquire, but the locking rules have subtle details. There are places
> where someone very good noticed that a given lock doesn't have to be
> taken on a fast path if you avoid doing certain things anywhere else
> that you'd think would be legal.
>
> And so if someone tries to add code to do the "obvious" thing, the
> lock-free fast path develops a race condition. And we all know what
> fun race conditions are to debug.
>
> Fortunately, some people see this as a challenge and Linux is blessed
> with some extremely skilled VM hackers. And some of them even write and
> publish books on the subject. But while a working VM system can be
> clear, making it go fast leads to a certain amount of tension with the
> clarity goal.

Frightening ... but informative, thank you.

> > And the ring-buffer system which delays the expensive mixing stages
> > until a soft interrupt does a great job (current and my fortuna-patch).
> > Difference being, fortuna-patch appears to be 2x faster.
>
> Ooh, cool! Must play with to steal the speed benefits. Thank you!

I'll have a patch for an "enable in crypto options" and "blocking with
entropy estimation" random-fortuna.c patch this week. My fiancée is out
of town and there should be time to hack one up.

JLC

^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random 2004-09-27 0:50 ` linux 2004-09-27 13:07 ` Jean-Luc Cooke @ 2004-09-27 14:23 ` Theodore Ts'o 2004-09-27 14:42 ` Jean-Luc Cooke 1 sibling, 1 reply; 28+ messages in thread From: Theodore Ts'o @ 2004-09-27 14:23 UTC (permalink / raw) To: linux; +Cc: jlcooke, cryptoapi, jmorris, linux-kernel On Mon, Sep 27, 2004 at 12:50:33AM -0000, linux@horizon.com wrote: > > And the ring-buffer system which delays the expensive mixing stages untill a > > a sort interrupt does a great job (current and my fortuna-patch). Difference > > being, fortuna-patch appears to be 2x faster. > > Ooh, cool! Must play with to steal the speed benefits. Thank you! The speed benefits come from the fact that /dev/random is currently using a large pool to store entropy, and so we end up taking cache line misses as we access the memory. Worse yet, the cache lines are scattered across the memory (due to the how the LFSR works), and we're using/updating information from the pool 32 bits at a time. In contrast, in JLC's patch, each pool only has enough space for 256 bits of entropy (assuming the use of SHA-256), and said 256 bits are stored packed next to each other, so it can fetch the entire pool in one or two cache lines. This is somewhat fundamental to the philosophical question of whether you store a large amount of entropy, taking advantage of the fact that the kernel has easy access to hardware-generated entropy, or use tiny pools and put a greater faith in crypto primitives. So the bottom line is that while Fortuna's input mixing uses more CPU (ALU) resources, /dev/random is slower because of memory latency issue. On processors with Hyperthreading / SMT enabled (which seems to be the trend across all architectures --- PowerPC, AMD64, Intel, etc.), the memory latency usage may be less important, since other tasks will be able to use the other (virtual) half of the CPU while the entropy mixing is waiting on the memory access to complete. 
On the other hand, it does mean that we're chewing up a slightly
greater amount of memory bandwidth during the entropy mixing process.
Whether or not any of this is actually measurable during real-life
mixing is an interesting and non-obvious question.

- Ted

^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-27 14:23 ` Theodore Ts'o
@ 2004-09-27 14:42   ` Jean-Luc Cooke
  0 siblings, 0 replies; 28+ messages in thread
From: Jean-Luc Cooke @ 2004-09-27 14:42 UTC (permalink / raw)
To: Theodore Ts'o, linux, cryptoapi, jmorris, linux-kernel

On Mon, Sep 27, 2004 at 10:23:52AM -0400, Theodore Ts'o wrote:
> On Mon, Sep 27, 2004 at 12:50:33AM -0000, linux@horizon.com wrote:
> > > And the ring-buffer system which delays the expensive mixing stages
> > > until a soft interrupt does a great job (current and my fortuna-patch).
> > > Difference being, fortuna-patch appears to be 2x faster.
> >
> > Ooh, cool! Must play with to steal the speed benefits. Thank you!
>
> This is somewhat fundamental to the philosophical question of whether
> you store a large amount of entropy, taking advantage of the fact that
> the kernel has easy access to hardware-generated entropy, or use tiny
> pools and put a greater faith in crypto primitives.

Tiny in that at most you can only pull 256 bits of entropy out of one
pool, you are correct.

SHA-256 buffers 64 bytes at a time. The transform requires 512 bytes
for its mixing. The mixing of the 512-byte W[] array is done serially.

random_state->pool is POOLBYTES in size, which is poolwords*4, which
DEFAULTs to 512 bytes. The "5 tap" LFSR reaches all over that 512-byte
memory for its mixing. If page sizes get big enough and we page-align
the pool[] member, the standard RNG will get faster.

JLC

^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-26  1:42 ` Jean-Luc Cooke
  2004-09-26  5:23   ` Theodore Ts'o
@ 2004-09-26  6:46   ` linux
  2004-09-26 16:32     ` Jean-Luc Cooke
  1 sibling, 1 reply; 28+ messages in thread
From: linux @ 2004-09-26 6:46 UTC (permalink / raw)
To: jlcooke; +Cc: cryptoapi, jmorris, linux, linux-kernel, tytso

> You claimed that the collision techniques found for the UFN design hashes
> (sha0, md5, haval, ripemd) demonstrated the need to not rely on hash
> algorithms for a RNG. Right?

I'm putting words into Ted's mouth, but it seemed clear to me he said
it was good not to rely *entirely* on the hash algorithms.

> And I showed that the SHA-1 in random.c now can produce collisions.

This, I do not recall. I must have missed it. Will you please show me
two inputs that, when fed to the SHA-1 in random.c, will produce
identical output?

> So, if your argument against the fallen UFN hashes above (SHA-1 is a UFN
> hash also btw. We can probably expect more announcements from the crypto
> community in early 2005) should it not apply to SHA-1 in random.c?

No, not at all. The point is that the current random.c design DOES NOT
RELY on the security of the hash function. Ted could drop MD4 in there
and it still couldn't be broken, although using a better-regarded hash
function just feels better.

> Or did I misunderstand you? Were you just mentioning the weakened
> algorithms as a "what if they were more serious discoveries? Wouldn't
> it be nice if we didn't rely on them?" ?

Yes. And Fortuna's *only* layer of armor is the block cipher. Yes, it's
a damn good layer of armor, but defense in depth sure helps.

That is NOT to say that lots of half-assed algorithms piled on top of
each other makes good crypto, but if you can have a good primitive and
*then* use it safely as well, that's better.

For example, AES is supposed to be resistant to adaptive chosen
plaintext/ciphertext attacks.
Suppose you are given two ciphertexts and two corresponding plaintexts,
but not which corresponds to which. And then you are given access to an
oracle which will, using the same key as was used on the
plaintext/ciphertext pairs, give you the plaintext for any ciphertext
that's not one of the two, and the ciphertext for any plaintext that's
not one of the two. The oracle can answer basically an infinite number
of questions (well, 2^128-2) and you can look at one set of answers
before posing the next.

AES is supposed to prevent you from figuring out, with all that help,
which plaintext of the two goes with which ciphertext, with more than
50% certainty. I.e. you are given an infinite series of such challenges
and offered even-odds bets on your answer. In the long run, you
shouldn't be able to make money.

Yes, AES *should* be able to hold up even to that, but that's really
placing all your eggs in one basket. If you can give it more help
without weakening other parts, that's Good Design.

If I'm designing a protocol, I'll try to design it so that an attacker
*doesn't* have access to such an oracle, or the responses are too slow
to make billions of them, or asking more than a few dozen questions
will raise alarms, or some such. I'll change keys so the time in which
an attacker has to mount their attack is limited. I'll do any of a
number of things which let the German navy keep half of their U-boat
traffic out of the hands of Bletchley Park even though they didn't know
there were vast gaping holes in the underlying cipher.

> The decision to place trust in an entropy estimation scheme vs. a
> crypto algorithm we have different views on. I can live with that.

Better crypto is fine.
> I "completely and totally" agree. I'm pointing out that no added
> padding makes me, the new guy reading your code, work harder to decide
> if it's a weakness. You shouldn't do that to people if you can avoid it.

Sorry, but if you know enough to know why the padding is necessary, you
should know when it isn't. Feel free to say "isn't this a weakness? I
read in $BOOK that that padding was important to prevent some attacks"
and propose a comment patch. But to say "this is crap because I don't
understand one little detail and you should replace it with my shiny
new 2005 model" when it's your ignorance and not a real problem is
unbelievably arrogant.

> Just like you shouldn't obfuscate code, even if it doesn't "weaken"
> its implementation. It's just rude. Take the performance penalty to
> avoid scaring people away from a very important piece of the kernel.

Tell it to the marines. I'd say "tell it to Linus", because he'll laugh
louder, but his time is valuable to me.

Part of the Linux developer's credo, learned at Linus' knee, is that
Performance Matters. If you don't worry about 5% all the time, after 15
revisions you're running at half speed and it's a lot of work to catch
up. The -mm guys have been doing backflips for years to try to get good
paging behaviour without high run-time overhead. This is one of the
major reasons why the kernel refuses to promise a stable binary
interface to kernel modules. Rearranging the order of fields in a
structure for better cache performance is a minor revision.

In fact, large parts of /dev/random deliberately *don't* care about
performance. The entire output mixing stage is not performance
critical, and is deliberately slow.

What *is* critical is the input mixing stage, because that happens at
interrupt time, and many many people care passionately about interrupt
latency. And /dev/random wants to be non-optional, always there for
people to use so they don't have to invent their own half-assed
equivalent.
> The quantitative aspects of the two RNGs in question are not being
> discussed. It's the qualitative aspects we do not see eye to eye on.
> So I will no longer suggest replacing the status quo. I'd like to
> submit a patch to let users choose at compile time, under
> Cryptographic options, whether to drop in Fortuna.
>
> Ted, can we leave it at this?

You're welcome to write the patch. But I have to warn you, if you hope
to get it into the standard kernel rather than just have a separately
maintained patch, you'll need to persuade Linus or someone he trusts
(who in this case is probably Ted) that your patch is
a) better in some way or another than the existing code, and
b) important enough to warrant the maintenance burden that having
   two sets of equivalent code imposes.

You're being offered a lot of clues. Please, take some.

^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-26  6:46 ` linux
@ 2004-09-26 16:32   ` Jean-Luc Cooke
  0 siblings, 0 replies; 28+ messages in thread
From: Jean-Luc Cooke @ 2004-09-26 16:32 UTC (permalink / raw)
To: linux; +Cc: jmorris, cryptoapi, tytso, linux-kernel

On Sun, Sep 26, 2004 at 06:46:17AM -0000, linux@horizon.com wrote:
> > And I showed that the SHA-1 in random.c now can produce collisions.
>
> This, I do not recall. I must have missed it. Will you please show me
> two inputs that, when fed to the SHA-1 in random.c, will produce
> identical output?

SHA-1 without padding, sure.
  hash("a") = hash("a\0") = hash("a\0\0") = ...
  hash("b") = hash("b\0") = hash("b\0\0") = ...
  hash("c") = hash("c\0") = hash("c\0\0") = ...

I've failed in my attempt to present a good argument for Fortuna. Guess
I'll just sit on this patch. Is the above a big issue? No, because as
you two pointed out, the hash() uses full block sizes.

This is a trying thread for me to continue, by no fault of yours. I
thought I made it very clear when I started that I saw *no*
vulnerability in the current /dev/random. This did not prevent Ted and
yourself from ignoring this statement and immediately assuming that
when I say "you could have done this better" I mean "ha! I've hax0rd
your silly code, I'm l33t." - an infuriating blow to my
professionalism. Then I simply added insult to injury by trying to
clear up the whole mess and only making things worse.

> > Or did I misunderstand you? Were you just mentioning the weakened
> > algorithms as a "what if they were more serious discoveries?
> > Wouldn't it be nice if we didn't rely on them?" ?
>
> Yes. And Fortuna's *only* layer of armor is the block cipher. Yes,
> it's a damn good layer of armor, but defense in depth sure helps.
>
> That is NOT to say that lots of half-assed algorithms piled on top of
> each other makes good crypto, but if you can have a good primitive and
> *then* use it safely as well, that's better.
>
> For example, AES is supposed to be resistant to adaptive chosen
> plaintext/ciphertext attacks. Suppose you are given two ciphertexts
> and two corresponding plaintexts, but not which corresponds to which.
> And then you are given access to an oracle which will, using the same
> key as was used on the plaintext/ciphertext pairs, give you the
> plaintext for any ciphertext that's not one of the two, and the
> ciphertext for any plaintext that's not one of the two. The oracle can
> answer basically an infinite number of questions (well, 2^128-2) and
> you can look at one set of answers before posing the next.
>
> AES is supposed to prevent you from figuring out, with all that help,
> which plaintext of the two goes with which ciphertext, with more than
> 50% certainty. I.e. you are given an infinite series of such
> challenges and offered even-odds bets on your answer. In the long run,
> you shouldn't be able to make money.
>
> Yes, AES *should* be able to hold up even to that, but that's really
> placing all your eggs in one basket. If you can give it more help
> without weakening other parts, that's Good Design.
>
> If I'm designing a protocol, I'll try to design it so that an attacker
> *doesn't* have access to such an oracle, or the responses are too slow
> to make billions of them, or asking more than a few dozen questions
> will raise alarms, or some such. I'll change keys so the time in which
> an attacker has to mount their attack is limited. I'll do any of a
> number of things which let the German navy keep half of their U-boat
> traffic out of the hands of Bletchley Park even though they didn't
> know there were vast gaping holes in the underlying cipher.

What if, say, the key for the AES256-CTR layer changed after every
block read from /dev/random?

> > The decision to place trust in an entropy estimation scheme vs. a
> > crypto algorithm we have different views on. I can live with that.
>
> Better crypto is fine.
> But why *throw out* the entropy estimation and rely *entirely* on the
> crypto? Feel free to argue that the crypto in Fortuna is better
> (although Ted is making some strong points that it *isn't*), but is
> it necessary to throw the baby out with the bathwater? Can't you get
> the best of both worlds?

My past arguments for removing entropy estimation were hand-waving at
best (rate of /dev/random output ~= rate of event sources' activity
like keyboards, disks, etc). This could (though not likely) leak
information about what the system is doing. If an attacker could open
and close tcp ports, or ping an ethernet card to generate IRQs which
are fed into the PRNG, increasing the entropy count - would this be
usable in an attack? Not likely. Would you want to close off this
avenue of attack? The majority seems to say "no", but I personally
would like to. And that is where my argument falls apart.

> > I "completely and totally" agree. I'm pointing out that no added
> > padding makes me, the new guy reading your code, work harder to
> > decide if it's a weakness. You shouldn't do that to people if you
> > can avoid it.
>
> Sorry, but if you know enough to know why the padding is necessary,
> you should know when it isn't. Feel free to say "isn't this a
> weakness? I read in $BOOK that that padding was important to prevent
> some attacks" and propose a comment patch. But to say "this is crap
> because I don't understand one little detail and you should replace
> it with my shiny new 2005 model" when it's your ignorance and not a
> real problem is unbelievably arrogant.

Sigh. Perhaps I need to be excruciatingly clear:
  - SHA1-nopadding() is less secure than SHA1-withpadding()
  - It doesn't apply to random.c
I thought it was clear ... clearly I was delusional.

> > Just like you shouldn't obfuscate code, even if it doesn't "weaken"
> > its implementation. It's just rude. Take the performance penalty to
> > avoid scaring people away from a very important piece of the kernel.
>
> Tell it to the marines. I'd say "tell it to Linus", because he'll
> laugh louder, but his time is valuable to me.
>
> Part of the Linux developer's credo, learned at Linus' knee, is that
> Performance Matters. If you don't worry about 5% all the time, after
> 15 revisions you're running at half speed and it's a lot of work to
> catch up.

I see. And in the -mm examples, is the code easily readable for other
os-MemMgt types? If no, then I guess random.c is not the exception and
I apologize.

> What *is* critical is the input mixing stage, because that happens at
> interrupt time, and many many people care passionately about
> interrupt latency. And /dev/random wants to be non-optional, always
> there for people to use so they don't have to invent their own
> half-assed equivalent.

And the ring-buffer system which delays the expensive mixing stages
until a soft interrupt does a great job (current and my fortuna-patch).
Difference being, fortuna-patch appears to be 2x faster.

> > The quantitative aspects of the two RNGs in question are not being
> > discussed. It's the qualitative aspects we do not see eye to eye on.
> > So I will no longer suggest replacing the status quo. I'd like to
> > submit a patch to let users choose at compile time, under
> > Cryptographic options, whether to drop in Fortuna.
> >
> > Ted, can we leave it at this?
>
> You're welcome to write the patch. But I have to warn you, if you
> hope to get it into the standard kernel rather than just have a
> separately maintained patch, you'll need to persuade Linus or someone
> he trusts (who in this case is probably Ted) that your patch is
> a) better in some way or another than the existing code, and
> b) important enough to warrant the maintenance burden that having
>    two sets of equivalent code imposes.
>
> You're being offered a lot of clues. Please, take some.

I appreciate the feedback for what it's worth. Thanks.

JLC

^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-25 14:54 ` Jean-Luc Cooke
  2004-09-25 18:43   ` Theodore Ts'o
@ 2004-09-26  2:31   ` linux
  1 sibling, 0 replies; 28+ messages in thread
From: linux @ 2004-09-26 2:31 UTC (permalink / raw)
To: jlcooke; +Cc: cryptoapi, jmorris, linux-kernel, tytso

> I was trying to point out a flaw in Ted's logic. He said "we've
> recently discovered these hashes are weak because we found collisions.
> Current /dev/random doesn't care about this."

And he's exactly right. The only attack that would be vaguely relevant
to /dev/random's use would be a (first) preimage attack, and even
that's probably not helpful. There *is* no flaw in his logic.

The attack we need to guard against is, given hash(x) and a (currently
mostly linear) state mixing function mix(), one that would let you
compute (partial information about)
  y[i+1] = hash(x[i+1])
from
  y[1] = hash(x[1]) ... y[i] = hash(x[i]), where x[i] = mix(x[i-1]).
Given that y[i] is much smaller than x[i], you'd need to put together a
lot of them to derive something, and that's distinctly harder than a
single-output preimage attack.

> I certainly wasn't saying padding was a requirement. But I was trying
> to point out that the SHA-1 implementation currently in /dev/random is
> by design collision vulnerable. Collision resistance isn't a
> requirement for its purposes obviously.

No, it is, by design, 100% collision-resistant. An attacker neither
sees nor controls the input x, so cannot use a collision attack. Thus,
it's resistant to collisions in the same way that it's resistant to
AIDS.

[There's actually a flaw in my logic. I know Ted knows about it,
because he implemented a specific defense in the /dev/random code
against it; it's just not 100% information-theoretic ironclad. If
anyone else can spot it, award yourself a clue point. But it's still
not a plausible attack.]

FURTHERMORE, even if an attacker *could* control the input, it's still
exactly as collision resistant as unmodified SHA-1.
Because it only accepts fixed-size input blocks, padding is unnecessary
and irrelevant to security. Careful padding is ONLY required if you are
working with VARIABLE-SIZED input. The fact that collision resistance
is not a security requirement is a third point.

> Guess my pointing this out is a lost cause.

In much the same way that pointing out that the earth is flat is a lost
cause. If you want people to believe nonsense, you need to dress it up
a lot and call it a religion.

As for Ted's words:
> Whether or not we should trust the design of something as critical to
> the security of security applications as /dev/random to someone who
> fails to grasp the difference between these two rather basic issues
> is something I will leave to the others on LKML.

Fortuna may be a good idea after all (I disagree, but I can imagine
being persuaded otherwise), but it has a very bad advocate right now.
Would anyone else like to pick up the torch?

By the way, I'd like to repeat my earlier question: you say Fortuna is
well-regarded in crypto circles. Can you cite a single paper to back
that conclusion? Name a single well-known cryptographer, other than the
authors, who has looked at it in some detail? There might be one, but I
don't know of any. I respect the authors enough to know that even they
recognize that an algorithm's designers sometimes have blind spots.

^ permalink raw reply [flat|nested] 28+ messages in thread
* [PROPOSAL/PATCH 2] Fortuna PRNG in /dev/random
  2004-09-24  0:59 [PROPOSAL/PATCH] Fortuna PRNG in /dev/random linux
  2004-09-24  2:34 ` Jean-Luc Cooke
@ 2004-09-29 17:10 ` Jean-Luc Cooke
  2004-09-29 19:31   ` Theodore Ts'o
  1 sibling, 1 reply; 28+ messages in thread
From: Jean-Luc Cooke @ 2004-09-29 17:10 UTC (permalink / raw)
To: linux; +Cc: linux-kernel, cryptoapi, tytso

[-- Attachment #1: Type: text/plain, Size: 3707 bytes --]

Team,

I have attempted to gather everyone's comments into this re-write of my
Fortuna /dev/random patch. I will summarize.

1. Requires the Cryptographic API.

2. Enable the Fortuna replacement to random.c in the Cryptographic
   Options menu of kernel config.

3. As-is, it checks at compile time that AES and SHA256 are built-in
   modules.

4. You can change this block cipher and message digest setting in the
   crypto/random-fortuna.c file. /proc/sys/kernel/random/digest_algo
   and cipher_algo are now strings telling the users what algorithms
   are being used.

5. Entropy estimation still exists in much the same way as in the
   legacy random.c, with the following exceptions:
   a. /proc/sys/kernel/random/read_wakeup_threshold's lower limit was
      8; it is now 0 for those of us crazy enough to want to remove
      blocking.
   b. /proc/sys/kernel/random/entropy_avail only increases when data is
      added to pool-0. Pool-0 is the only one of the 32 Fortuna pools
      which is drawn from at every reseeding. This change was made so
      we don't over-estimate the amount of entropy we consume.
   c. Since Fortuna resists consuming its entropy, it seemed
      inappropriate to debit 1MB of entropy from the count when reading
      1MB from /dev/urandom. Now, every reseeding deducts
      min(cipherKeySize, hashDigestSize) from the entropy count. Note
      that by doubling the block size from which you read /dev/urandom,
      you double the speed, since you halve the number of reseeds.

6. The input mixing and output generation functions now use Fortuna:
   a.
      32 independent feedback input mixing pools, using a cryptographic
      hash from the CryptoAPI, are fed event data in round robin.
   b. A block cipher in CTR mode generates the output.
   c. Every file system block read from /dev/{u}random causes Fortuna
      to reseed the block cipher's key with the digest output from 1 or
      more of the input mixing pools. Pool-0 is used every time, pool-j
      is used every 2^j times.

7. Since Fortuna resists consuming its entropy, saving the 2048-byte
   random seed should be changed to:
     dd if=/dev/urandom of=$seedfile bs=256 count=8
   After 2^3 = 8 block reads, pools 0, 1, 2, and 3 will certainly be
   used. After 2^4 = 16, pools up to number 4 will be used, and so on.

8. The difference in bzImage size is 7,268 bytes on my P4 laptop. This
   is a compressed image. My comparison was the 2.6.8.1 tarball with no
   kernel config changes vs. enabling the cryptoapi, Fortuna, AES (i586
   assembly) and SHA256. I have not yet done run-time memory
   consumption comparisons, but Fortuna is certainly heavier than
   legacy.

9. The difference in performance in /dev/random is roughly a 32x
   decrease in output rates, due to the 32x decrease in entropy
   estimation (see 5.c).

10. The difference in performance in /dev/urandom is a 3x increase in
    output rates for a 512-byte block size. A doubling in block size
    doubles the performance. I produced 512,000,000 bytes of output
    with a 32k block size in less than 10 seconds. The legacy
    /dev/urandom by comparison accomplished the same thing in 2.5
    minutes. The assembly version of AES for the i586 gets credit for
    this.

I tried to test the syn-cookie code but was unable to determine if it
works. Printk()s in the syn cookie generation function were never
called even though I echoed 1 to the proc file. If someone can tell me
what I'm doing wrong, I'd love to test this further.

I'd appreciate beta-testers on other platforms to provide feedback.
JLC

[-- Attachment #2: fortuna-2.6.8.1.patch --]
[-- Type: text/plain, Size: 62661 bytes --]

diff -X exclude -Nur linux-2.6.8.1/crypto/Kconfig linux-2.6.8.1-rand2/crypto/Kconfig
--- linux-2.6.8.1/crypto/Kconfig	2004-08-14 06:56:22.000000000 -0400
+++ linux-2.6.8.1-rand2/crypto/Kconfig	2004-09-28 23:30:04.000000000 -0400
@@ -9,6 +9,15 @@
 	help
 	  This option provides the core Cryptographic API.
 
+config CRYPTO_RANDOM_FORTUNA
+	bool "The Fortuna RNG"
+	help
+	  Replaces the legacy Linux RNG with one using the crypto API
+	  and Fortuna by Ferguson and Schneier.  Entropy estimation, and
+	  a throttled /dev/random remain.  Improvements include faster
+	  /dev/urandom output and event input mixing.
+	  Note: Requires AES and SHA256 to be built-in.
+
 config CRYPTO_HMAC
 	bool "HMAC support"
 	depends on CRYPTO
diff -X exclude -Nur linux-2.6.8.1/crypto/random-fortuna.c linux-2.6.8.1-rand2/crypto/random-fortuna.c
--- linux-2.6.8.1/crypto/random-fortuna.c	1969-12-31 19:00:00.000000000 -0500
+++ linux-2.6.8.1-rand2/crypto/random-fortuna.c	2004-09-29 10:44:19.829932384 -0400
@@ -0,0 +1,2027 @@
+/*
+ * random-fortuna.c -- A cryptographically strong random number generator
+ * using Fortuna.
+ *
+ * Version 2.1.1, last modified 28-Sep-2004
+ *
+ * Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999.  All
+ * rights reserved.
+ * Copyright Jean-Luc Cooke, 2004.  All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, and the entire permission notice in its entirety,
+ *    including the disclaimer of warranties.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3.
The name of the author may not be used to endorse or promote + * products derived from this software without specific prior + * written permission. + * + * ALTERNATIVELY, this product may be distributed under the terms of + * the GNU General Public License, in which case the provisions of the GPL are + * required INSTEAD OF the above restrictions. (This clause is + * necessary due to a potential bad interaction between the GPL and + * the restrictions contained in a BSD-style copyright.) + * + * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED + * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES + * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, ALL OF + * WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE + * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT + * OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR + * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE + * USE OF THIS SOFTWARE, EVEN IF NOT ADVISED OF THE POSSIBILITY OF SUCH + * DAMAGE. + */ + +/* + * Taken from random.c, updated by Jean-Luc Cooke <jlcooke@certainkey.com> + * (now, with legal B.S. out of the way.....) + * + * This routine gathers environmental noise from device drivers, etc., + * and returns good random numbers, suitable for cryptographic use. + * Besides the obvious cryptographic uses, these numbers are also good + * for seeding TCP sequence numbers, and other places where it is + * desirable to have numbers which are not only random, but hard to + * predict by an attacker. + * + * Theory of operation + * =================== + * + * Computers are very predictable devices. 
Hence it is extremely hard + * to produce truly random numbers on a computer --- as opposed to + * pseudo-random numbers, which can easily generated by using a + * algorithm. Unfortunately, it is very easy for attackers to guess + * the sequence of pseudo-random number generators, and for some + * applications this is not acceptable. So instead, we must try to + * gather "environmental noise" from the computer's environment, which + * must be hard for outside attackers to observe, and use that to + * generate random numbers. In a Unix environment, this is best done + * from inside the kernel. + * + * Sources of randomness from the environment include inter-keyboard + * timings, inter-interrupt timings from some interrupts, and other + * events which are both (a) non-deterministic and (b) hard for an + * outside observer to measure. Randomness from these sources are + * added to an "entropy pool", which is mixed. + * As random bytes are mixed into the entropy pool, the routines keep + * an *estimate* of how many bits of randomness have been stored into + * the random number generator's internal state. + * + * Even if it is possible to analyze Fortuna in some clever way, as + * long as the amount of data returned from the generator is less than + * the inherent entropy we've estimated in the pool, the output data + * is totally unpredictable. For this reason, the routine decreases + * its internal estimate of how many bits of "true randomness" are + * contained in the entropy pool as it outputs random numbers. + * + * If this estimate goes to zero, the routine can still generate + * random numbers; however, an attacker may (at least in theory) be + * able to infer the future output of the generator from prior + * outputs. This requires successful cryptanalysis of Fortuna, which is + * not believed to be feasible, but there is a remote possibility. + * Nonetheless, these numbers should be useful for the vast majority + * of purposes. 
+ * + * Exported interfaces ---- output + * =============================== + * + * There are three exported interfaces; the first is one designed to + * be used from within the kernel: + * + * void get_random_bytes(void *buf, int nbytes); + * + * This interface will return the requested number of random bytes, + * and place it in the requested buffer. + * + * The two other interfaces are two character devices /dev/random and + * /dev/urandom. /dev/random is suitable for use when very high + * quality randomness is desired (for example, for key generation or + * one-time pads), as it will only return a maximum of the number of + * bits of randomness (as estimated by the random number generator) + * contained in the entropy pool. + * + * The /dev/urandom device does not have this limit, and will return + * as many bytes as are requested. As more and more random bytes are + * requested without giving time for the entropy pool to recharge, + * this will result in random numbers that are merely cryptographically + * strong. For many applications, however, this is acceptable. + * + * Exported interfaces ---- input + * ============================== + * + * The current exported interfaces for gathering environmental noise + * from the devices are: + * + * void add_keyboard_randomness(unsigned char scancode); + * void add_mouse_randomness(__u32 mouse_data); + * void add_interrupt_randomness(int irq); + * + * add_keyboard_randomness() uses the inter-keypress timing, as well as the + * scancode as random inputs into the "entropy pool". + * + * add_mouse_randomness() uses the mouse interrupt timing, as well as + * the reported position of the mouse from the hardware. + * + * add_interrupt_randomness() uses the inter-interrupt timing as random + * inputs to the entropy pool. Note that not all interrupts are good + * sources of randomness! 
For example, the timer interrupts is not a + * good choice, because the periodicity of the interrupts is too + * regular, and hence predictable to an attacker. Disk interrupts are + * a better measure, since the timing of the disk interrupts are more + * unpredictable. + * + * All of these routines try to estimate how many bits of randomness a + * particular randomness source. They do this by keeping track of the + * first and second order deltas of the event timings. + * + * Ensuring unpredictability at system startup + * ============================================ + * + * When any operating system starts up, it will go through a sequence + * of actions that are fairly predictable by an adversary, especially + * if the start-up does not involve interaction with a human operator. + * This reduces the actual number of bits of unpredictability in the + * entropy pool below the value in entropy_count. In order to + * counteract this effect, it helps to carry information in the + * entropy pool across shut-downs and start-ups. To do this, put the + * following lines an appropriate script which is run during the boot + * sequence: + * + * echo "Initializing random number generator..." + * random_seed=/var/run/random-seed + * # Carry a random seed from start-up to start-up + * # Load and then save the whole entropy pool + * if [ -f $random_seed ]; then + * cat $random_seed >/dev/urandom + * else + * touch $random_seed + * fi + * chmod 600 $random_seed + * dd if=/dev/urandom of=$random_seed count=8 bs=256 + * + * and the following lines in an appropriate script which is run as + * the system is shutdown: + * + * # Carry a random seed from shut-down to start-up + * # Save the whole entropy pool + * # Fortuna resists using all of its pool matirial, so we need to + * # draw 8 seperate times (count=8) to ensure we get the entropy + * # from pool[0,1,2,3]'s entropy. count=2048 pool[0 .. 10], etc. + * echo "Saving random seed..." 
+ *	random_seed=/var/run/random-seed
+ *	touch $random_seed
+ *	chmod 600 $random_seed
+ *	dd if=/dev/urandom of=$random_seed count=8 bs=256
+ *
+ * For example, on most modern systems using the System V init
+ * scripts, such code fragments would be found in
+ * /etc/rc.d/init.d/random.  On older Linux systems, the correct script
+ * location might be in /etc/rc.d/rc.local or /etc/rc.d/rc.0.
+ *
+ * Effectively, these commands cause the contents of the entropy pool
+ * to be saved at shut-down time and reloaded into the entropy pool at
+ * start-up.  (The 'dd' in the addition to the bootup script is to
+ * make sure that /etc/random-seed is different for every start-up,
+ * even if the system crashes without executing rc.0.)  Even with
+ * complete knowledge of the start-up activities, predicting the state
+ * of the entropy pool requires knowledge of the previous history of
+ * the system.
+ *
+ * Configuring the /dev/random driver under Linux
+ * ==============================================
+ *
+ * The /dev/random driver under Linux uses minor numbers 8 and 9 of
+ * the /dev/mem major number (#1).  So if your system does not have
+ * /dev/random and /dev/urandom created already, they can be created
+ * by using the commands:
+ *
+ * 	mknod /dev/random c 1 8
+ * 	mknod /dev/urandom c 1 9
+ *
+ * Acknowledgements:
+ * =================
+ *
+ * Ideas for constructing this random number generator were derived
+ * from Pretty Good Privacy's random number generator, and from private
+ * discussions with Phil Karn.  Colin Plumb provided a faster random
+ * number generator, which sped up the mixing function of the entropy
+ * pool, taken from PGPfone.  Dale Worley has also contributed many
+ * useful ideas and suggestions to improve this driver.
+ *
+ * Any flaws in the design are solely my (jlcooke) responsibility, and
+ * should not be attributed to Phil, Colin, or any of the authors of PGP
+ * or the legacy random.c (Ted Ts'o).
+ * + * Further background information on this topic may be obtained from + * RFC 1750, "Randomness Recommendations for Security", by Donald + * Eastlake, Steve Crocker, and Jeff Schiller. And Chapter 10 of + * Practical Cryptography by Ferguson and Schneier. + */ + +#include <linux/utsname.h> +#include <linux/config.h> +#include <linux/module.h> +#include <linux/kernel.h> +#include <linux/major.h> +#include <linux/string.h> +#include <linux/fcntl.h> +#include <linux/slab.h> +#include <linux/random.h> +#include <linux/poll.h> +#include <linux/init.h> +#include <linux/fs.h> +#include <linux/workqueue.h> +#include <linux/genhd.h> +#include <linux/interrupt.h> +#include <linux/spinlock.h> +#include <linux/percpu.h> +#include <linux/crypto.h> +#include <../crypto/internal.h> + +#include <asm/scatterlist.h> +#include <asm/processor.h> +#include <asm/uaccess.h> +#include <asm/irq.h> +#include <asm/io.h> + + +/* + * Configuration information + */ +#define BATCH_ENTROPY_SIZE 256 +#define USE_SHA256 +#define USE_AES + +/* + * Compile-time checking for our desired message digest + */ +#if defined USE_SHA256 + #if !CONFIG_CRYPTO_SHA256 + #error SHA256 not a built-in module, Fortuna configured to use it. + #endif + #define RANDOM_DEFAULT_DIGEST_ALGO "sha256" +#elif defined USE_WHIRLPOOL + #if !CONFIG_CRYPTO_WHIRLPOOL + #error WHIRLPOOL not a built-in module, Fortuna configured to use it. + #endif + #define RANDOM_DEFAULT_DIGEST_ALGO "whirlpool" +#else + #error Desired message digest algorithm not found +#endif + +/* + * Compile-time checking for our desired block cipher + */ +#if defined USE_AES + #if (!CONFIG_CRYPTO_AES && !CONFIG_CRYPTO_AES_586) + #error AES not a built-in module, Fortuna configured to use it. + #endif + #define RANDOM_DEFAULT_CIPHER_ALGO "aes" +#elif defined USE_TWOFISH + #if (!CONFIG_CRYPTO_TWOFISH && !CONFIG_CRYPTO_TWOFISH_586) + #error TWOFISH not a built-in module, Fortuna configured to use it. 
+ #endif + #define RANDOM_DEFAULT_CIPHER_ALGO "twofish" +#else + #error Desired block cipher algorithm not found +#endif /* USE_AES */ + +#define DEFAULT_POOL_NUMBER 5 /* 2^{5} = 32 pools */ +#define DEFAULT_POOL_SIZE ( (1<<DEFAULT_POOL_NUMBER) * 256) +/* largest block of random data to extract at a time when in blocking-mode */ +#define TMP_BUF_SIZE 512 +/* SHA512/WHIRLPOOL have 64bytes == 512 bits */ +#define RANDOM_MAX_DIGEST_SIZE 64 +/* AES256 has 16byte blocks == 128 bits */ +#define RANDOM_MAX_BLOCK_SIZE 16 +/* AES256 has 32byte keys == 256 bits */ +#define RANDOM_MAX_KEY_SIZE 32 + +#if 0 + #define DEBUG_PRINTK printk +#else + #define DEBUG_PRINTK noop_printk +#endif +#if 0 + #define STATS_PRINTK printk +#else + #define STATS_PRINTK noop_printk +#endif +static inline void noop_printk(const char *a, ...) {} + +/* + * The minimum number of bits of entropy before we wake up a read on + * /dev/random. We also wait for reseed_count>0 and we do a + * random_reseed() once we do wake up. + */ +static int random_read_wakeup_thresh = 64; + +/* + * If the entropy count falls under this number of bits, then we + * should wake up processes which are selecting or polling on write + * access to /dev/random. + */ +static int random_write_wakeup_thresh = 128; + +/* + * When the input pool goes over trickle_thresh, start dropping most + * samples to avoid wasting CPU time and reduce lock contention. 
+ */ + +static int trickle_thresh = DEFAULT_POOL_SIZE * 7; + +static DEFINE_PER_CPU(int, trickle_count) = 0; + +#define POOLBYTES\ + ( (1<<random_state->pool_number) * random_state->digestsize ) +#define POOLBITS ( POOLBYTES * 8 ) + +/* + * Linux 2.2 compatibility + */ +#ifndef DECLARE_WAITQUEUE +#define DECLARE_WAITQUEUE(WAIT, PTR) struct wait_queue WAIT = { PTR, NULL } +#endif +#ifndef DECLARE_WAIT_QUEUE_HEAD +#define DECLARE_WAIT_QUEUE_HEAD(WAIT) struct wait_queue *WAIT +#endif + +/* + * Static global variables + */ +static struct entropy_store *random_state; /* The default global store */ +static DECLARE_WAIT_QUEUE_HEAD(random_read_wait); +static DECLARE_WAIT_QUEUE_HEAD(random_write_wait); + +/* + * Forward procedure declarations + */ +#ifdef CONFIG_SYSCTL +static void sysctl_init_random(struct entropy_store *random_state); +#endif + +/***************************************************************** + * + * Utility functions, with some ASM defined functions for speed + * purposes + * + *****************************************************************/ + +/* + * More asm magic.... + * + * For entropy estimation, we need to do an integral base 2 + * logarithm. + * + * Note the "12bits" suffix - this is used for numbers between + * 0 and 4095 only. This allows a few shortcuts. 
+ */ +#if 0 /* Slow but clear version */ +static inline __u32 int_ln_12bits(__u32 word) +{ + __u32 nbits = 0; + + while (word >>= 1) + nbits++; + return nbits; +} +#else /* Faster (more clever) version, courtesy Colin Plumb */ +static inline __u32 int_ln_12bits(__u32 word) +{ + /* Smear msbit right to make an n-bit mask */ + word |= word >> 8; + word |= word >> 4; + word |= word >> 2; + word |= word >> 1; + /* Remove one bit to make this a logarithm */ + word >>= 1; + /* Count the bits set in the word */ + word -= (word >> 1) & 0x555; + word = (word & 0x333) + ((word >> 2) & 0x333); + word += (word >> 4); + word += (word >> 8); + return word & 15; +} +#endif + +#if 0 +#define DEBUG_ENT(fmt, arg...) printk(KERN_DEBUG "random: " fmt, ## arg) +#else +#define DEBUG_ENT(fmt, arg...) do {} while (0) +#endif + +/********************************************************************** + * + * OS independent entropy store. Here are the functions which handle + * storing entropy in an entropy pool. 
+ *
+ **********************************************************************/
+
+struct entropy_store {
+	const char *digestAlgo;
+	unsigned int digestsize;
+	struct crypto_tfm *pools[1<<DEFAULT_POOL_NUMBER];
+	/* optional, handy for statistics */
+	unsigned int pools_bytes[1<<DEFAULT_POOL_NUMBER];
+
+	const char *cipherAlgo;
+	/* the key */
+	unsigned char key[RANDOM_MAX_DIGEST_SIZE];
+	unsigned int keysize;
+	/* the CTR value */
+	unsigned char iv[16];
+	unsigned int blocksize;
+	struct crypto_tfm *cipher;
+
+	/* 2^pool_number # of pools */
+	unsigned int pool_number;
+	/* current pool to add into */
+	unsigned int pool_index;
+	/* size of the first pool */
+	unsigned int pool0_len;
+	/* number of times we have reseeded */
+	unsigned int reseed_count;
+	/* digest used during random_reseed() */
+	struct crypto_tfm *reseedHash;
+	/* cipher used for network randomness */
+	struct crypto_tfm *networkCipher;
+	/* flag indicating if networkCipher has been seeded */
+	char networkCipher_ready;
+
+	/* read-write data: */
+	spinlock_t lock ____cacheline_aligned_in_smp;
+	int entropy_count;
+};
+
+/*
+ * Initialize the entropy store.  The input argument is the size of
+ * the random pool.
+ *
+ * Returns a negative error code if there is a problem.
+ */
+static int create_entropy_store(int poolnum, struct entropy_store **ret_bucket)
+{
+	struct entropy_store *r;
+	unsigned long pool_number;
+	int keysize, i, j;
+
+	pool_number = poolnum;
+
+	r = kmalloc(sizeof(struct entropy_store), GFP_KERNEL);
+	if (!r) {
+		return -ENOMEM;
+	}
+
+	memset(r, 0, sizeof(struct entropy_store));
+	r->pool_number = pool_number;
+	r->digestAlgo = RANDOM_DEFAULT_DIGEST_ALGO;
+
+DEBUG_PRINTK("create_entropy_store() pools=%u index=%u\n",
+	1<<pool_number, r->pool_index);
+	for (i=0; i<(1<<pool_number); i++) {
+DEBUG_PRINTK("create_entropy_store() i=%i index=%u\n", i, r->pool_index);
+		r->pools[i] = crypto_alloc_tfm(r->digestAlgo, 0);
+		if (r->pools[i] == NULL) {
+			for (j=0; j<i; j++) {
+				if (r->pools[j] != NULL) {
+					crypto_free_tfm(r->pools[j]);
+				}
+			}
+			kfree(r);
+			return -ENOMEM;
+		}
+		crypto_digest_init( r->pools[i] );
+	}
+	r->lock = SPIN_LOCK_UNLOCKED;
+	*ret_bucket = r;
+
+	r->cipherAlgo = RANDOM_DEFAULT_CIPHER_ALGO;
+	if ((r->cipher=crypto_alloc_tfm(r->cipherAlgo, 0)) == NULL) {
+		return -ENOMEM;
+	}
+
+	/* If the HASH's output is greater than the cipher's keysize, truncate
+	 * to the cipher's keysize */
+	keysize = crypto_tfm_alg_max_keysize(r->cipher);
+	r->digestsize = crypto_tfm_alg_digestsize(r->pools[0]);
+	r->blocksize = crypto_tfm_alg_blocksize(r->cipher);
+
+	r->keysize = (keysize < r->digestsize) ? keysize : r->digestsize;
+DEBUG_PRINTK("create_RANDOM %u %u %u\n", keysize, r->digestsize, r->keysize);
+
+	if (crypto_cipher_setkey(r->cipher, r->key, r->keysize)) {
+		return -EINVAL;
+	}
+
+	/* digest used during random_reseed() */
+	if ((r->reseedHash=crypto_alloc_tfm(r->digestAlgo, 0)) == NULL) {
+		return -ENOMEM;
+	}
+	/* cipher used for network randomness */
+	if ((r->networkCipher=crypto_alloc_tfm(r->cipherAlgo, 0)) == NULL) {
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+/*
+ * This function adds words into the entropy "pool".  It does not
+ * update the entropy estimate.
The caller should call + * credit_entropy_store if this is appropriate. + */ +static void add_entropy_words(struct entropy_store *r, const __u32 *in, + int nwords) +{ + unsigned long flags; + struct scatterlist sg[1]; + static unsigned int totalBytes=0; + + if (r == NULL) { + return; + } + + spin_lock_irqsave(&r->lock, flags); + + totalBytes += nwords * sizeof(__u32); + r->pools_bytes[r->pool_index] += nwords * sizeof(__u32); + + sg[0].page = virt_to_page(in); + sg[0].offset = offset_in_page(in); + sg[0].length = nwords*sizeof(__u32); + crypto_digest_update(r->pools[r->pool_index], sg, 1); + + if (r->pool_index == 0) { + r->pool0_len += nwords*sizeof(__u32); + } +DEBUG_PRINTK("r->pool0_len = %u\n", r->pool0_len); + + /* idx = (idx + 1) mod ( (2^N)-1 ) */ + r->pool_index = (r->pool_index + 1) & ((1<<r->pool_number)-1); + + spin_unlock_irqrestore(&r->lock, flags); +DEBUG_PRINTK("0 add_entropy_words() nwords=%u pool[i].bytes=%u total=%u\n", + nwords, r->pools_bytes[r->pool_index], totalBytes); +} + +/* + * Credit (or debit) the entropy store with n bits of entropy + */ +static void credit_entropy_store(struct entropy_store *r, int nbits) +{ + unsigned long flags; + + spin_lock_irqsave(&r->lock, flags); + + if (r->entropy_count + nbits < 0) { + DEBUG_ENT("negative entropy/overflow (%d+%d)\n", + r->entropy_count, nbits); + r->entropy_count = 0; + } else if (r->entropy_count + nbits > POOLBITS) { + r->entropy_count = POOLBITS; + } else { + r->entropy_count += nbits; + if (nbits) + DEBUG_ENT("%04d : added %d bits\n", + r->entropy_count, + nbits); + } + + spin_unlock_irqrestore(&r->lock, flags); +} + +/********************************************************************** + * + * Entropy batch input management + * + * We batch entropy to be added to avoid increasing interrupt latency + * + **********************************************************************/ + +struct sample { + __u32 data[2]; + int credit; +}; + +static struct sample *batch_entropy_pool, 
+	*batch_entropy_copy;
+static int batch_head, batch_tail;
+static spinlock_t batch_lock = SPIN_LOCK_UNLOCKED;
+
+static int batch_max;
+static void batch_entropy_process(void *private_);
+static DECLARE_WORK(batch_work, batch_entropy_process, NULL);
+
+/* note: the size must be a power of 2 */
+static int __init batch_entropy_init(int size, struct entropy_store *r)
+{
+	batch_entropy_pool = kmalloc(size*sizeof(struct sample), GFP_KERNEL);
+	if (!batch_entropy_pool)
+		return -1;
+	batch_entropy_copy = kmalloc(size*sizeof(struct sample), GFP_KERNEL);
+	if (!batch_entropy_copy) {
+		kfree(batch_entropy_pool);
+		return -1;
+	}
+	batch_head = batch_tail = 0;
+	batch_work.data = r;
+	batch_max = size;
+	return 0;
+}
+
+/*
+ * Changes to the entropy data are put into a queue rather than being added to
+ * the entropy counts directly.  This is presumably to avoid doing heavy
+ * hashing calculations during an interrupt in add_timer_randomness().
+ * Instead, the entropy is only added to the pool by keventd.
+ */
+void batch_entropy_store(u32 a, u32 b, int num)
+{
+	int new;
+	unsigned long flags;
+
+	if (!batch_max)
+		return;
+
+	spin_lock_irqsave(&batch_lock, flags);
+
+	batch_entropy_pool[batch_head].data[0] = a;
+	batch_entropy_pool[batch_head].data[1] = b;
+	batch_entropy_pool[batch_head].credit = num;
+
+	if (((batch_head - batch_tail) & (batch_max-1)) >= (batch_max / 2)) {
+		/*
+		 * Schedule it for the next timer tick:
+		 */
+		schedule_delayed_work(&batch_work, 1);
+	}
+
+	new = (batch_head+1) & (batch_max-1);
+	if (new == batch_tail) {
+		DEBUG_ENT("batch entropy buffer full\n");
+	} else {
+		batch_head = new;
+	}
+
+	spin_unlock_irqrestore(&batch_lock, flags);
+}
+
+EXPORT_SYMBOL(batch_entropy_store);
+
+/*
+ * Flush out the accumulated entropy operations, adding entropy to the passed
+ * store (normally random_state).  If that store has enough entropy, alternate
+ * between randomizing the data of the primary and secondary stores.
+ */
+static void batch_entropy_process(void *private_)
+{
+	int max_entropy = POOLBITS;
+	unsigned head, tail;
+
+	/* Mixing into the pool is expensive, so copy over the batch
+	 * data and release the batch lock.  The pool is at least half
+	 * full, so don't worry too much about copying only the used
+	 * part.
+	 */
+	spin_lock_irq(&batch_lock);
+
+	memcpy(batch_entropy_copy, batch_entropy_pool,
+	       batch_max*sizeof(struct sample));
+
+	head = batch_head;
+	tail = batch_tail;
+	batch_tail = batch_head;
+
+	spin_unlock_irq(&batch_lock);
+
+	while (head != tail) {
+		if (random_state->entropy_count >= max_entropy) {
+			max_entropy = POOLBITS;
+		}
+		/*
+		 * Only credit if we're feeding into pool[0].
+		 * Otherwise we'd be assuming entropy in pool[31] would be
+		 * usable when we read.  This is conservative, but it'll
+		 * not over-credit our entropy estimate for users of
+		 * /dev/random; /dev/urandom will not be affected.
+		 */
+		if (random_state->pool_index == 0) {
+			credit_entropy_store(random_state,
+				batch_entropy_copy[tail].credit);
+		}
+		add_entropy_words(random_state,
+			batch_entropy_copy[tail].data, 2);
+		tail = (tail+1) & (batch_max-1);
+	}
+	if (random_state->entropy_count >= random_read_wakeup_thresh
+	    || random_state->reseed_count != 0)
+		wake_up_interruptible(&random_read_wait);
+}
+
+/*********************************************************************
+ *
+ * Entropy input management
+ *
+ *********************************************************************/
+
+/* There is one of these per entropy source */
+struct timer_rand_state {
+	__u32 last_time;
+	__s32 last_delta,last_delta2;
+	int dont_count_entropy:1;
+};
+
+static struct timer_rand_state keyboard_timer_state;
+static struct timer_rand_state mouse_timer_state;
+static struct timer_rand_state extract_timer_state;
+static struct timer_rand_state *irq_timer_state[NR_IRQS];
+
+/*
+ * This function adds entropy to the entropy "pool" by using timing
+ * delays.  It uses the timer_rand_state structure to make an estimate
+ * of how many bits of entropy this call has added to the pool.
+ *
+ * The number "num" is also added to the pool - it should somehow describe
+ * the type of event which just happened.  This is currently 0-255 for
+ * keyboard scan codes, and 256 upwards for interrupts.
+ * On the i386, this is assumed to be at most 16 bits, and the high bits
+ * are used for a high-resolution timer.
+ *
+ */
+static void add_timer_randomness(struct timer_rand_state *state, unsigned num)
+{
+	__u32 time;
+	__s32 delta, delta2, delta3;
+	int entropy = 0;
+
+	/* if over the trickle threshold, use only 1 in 4096 samples */
+	if ( random_state->entropy_count > trickle_thresh &&
+	     (__get_cpu_var(trickle_count)++ & 0xfff))
+		return;
+
+#if defined (__i386__) || defined (__x86_64__)
+	if (cpu_has_tsc) {
+		__u32 high;
+		rdtsc(time, high);
+		num ^= high;
+	} else {
+		time = jiffies;
+	}
+#elif defined (__sparc_v9__)
+	unsigned long tick = tick_ops->get_tick();
+
+	time = (unsigned int) tick;
+	num ^= (tick >> 32UL);
+#else
+	time = jiffies;
+#endif
+
+	/*
+	 * Calculate number of bits of randomness we probably added.
+	 * We take into account the first, second and third-order deltas
+	 * in order to make our estimate.
+	 */
+	if (!state->dont_count_entropy) {
+		delta = time - state->last_time;
+		state->last_time = time;
+
+		delta2 = delta - state->last_delta;
+		state->last_delta = delta;
+
+		delta3 = delta2 - state->last_delta2;
+		state->last_delta2 = delta2;
+
+		if (delta < 0)
+			delta = -delta;
+		if (delta2 < 0)
+			delta2 = -delta2;
+		if (delta3 < 0)
+			delta3 = -delta3;
+		if (delta > delta2)
+			delta = delta2;
+		if (delta > delta3)
+			delta = delta3;
+
+		/*
+		 * delta is now minimum absolute delta.
+		 * Round down by 1 bit on general principles,
+		 * and limit entropy estimate to 12 bits.
+		 */
+		delta >>= 1;
+		delta &= (1 << 12) - 1;
+
+		entropy = int_ln_12bits(delta);
+	}
+	batch_entropy_store(num, time, entropy);
+}
+
+void add_keyboard_randomness(unsigned char scancode)
+{
+	static unsigned char last_scancode;
+	/* ignore autorepeat (multiple key down w/o key up) */
+	if (scancode != last_scancode) {
+		last_scancode = scancode;
+		add_timer_randomness(&keyboard_timer_state, scancode);
+	}
+}
+
+EXPORT_SYMBOL(add_keyboard_randomness);
+
+void add_mouse_randomness(__u32 mouse_data)
+{
+	add_timer_randomness(&mouse_timer_state, mouse_data);
+}
+
+EXPORT_SYMBOL(add_mouse_randomness);
+
+void add_interrupt_randomness(int irq)
+{
+	if (irq >= NR_IRQS || irq_timer_state[irq] == 0)
+		return;
+
+	add_timer_randomness(irq_timer_state[irq], 0x100+irq);
+}
+
+EXPORT_SYMBOL(add_interrupt_randomness);
+
+void add_disk_randomness(struct gendisk *disk)
+{
+	if (!disk || !disk->random)
+		return;
+	/* first major is 1, so we get >= 0x200 here */
+	add_timer_randomness(disk->random,
+		0x100+MKDEV(disk->major, disk->first_minor));
+}
+
+EXPORT_SYMBOL(add_disk_randomness);
+
+/*********************************************************************
+ *
+ * Entropy extraction routines
+ *
+ *********************************************************************/
+
+#define EXTRACT_ENTROPY_USER 1
+#define EXTRACT_ENTROPY_LIMIT 4
+
+static ssize_t extract_entropy(struct entropy_store *r, void * buf,
+			size_t nbytes, int flags);
+
+/* only carry into the next word when the lower word wraps to zero */
+static inline void increment_iv(unsigned char *iv, const unsigned int IVsize) {
+	switch (IVsize) {
+	case 8:
+		if (!++((u32*)iv)[0])
+			++((u32*)iv)[1];
+		break;
+
+	case 16:
+		if (!++((u32*)iv)[0])
+			if (!++((u32*)iv)[1])
+				if (!++((u32*)iv)[2])
+					++((u32*)iv)[3];
+		break;
+
+	default:
+		{
+			int i;
+			for (i=0; i<IVsize; i++)
+				if (++iv[i])
+					break;
+		}
+		break;
+	}
+}
+
+/*
+ * Fortuna's Reseed
+ *
+ * Key' = hash(Key || hash(pool[a0]) || hash(pool[a1]) || ...)
+ * where {a0,a1,...} are factors of r->reseed_count+1 which are of the form
+ * 2^j, 0<=j.
+ * Prevents backtracking attacks and, with fresh event inputs, provides
+ * forward secrecy.
+ */
+static void random_reseed(struct entropy_store *r, size_t nbytes, int flags) {
+	struct scatterlist sg[1];
+	unsigned int i, deduct;
+	unsigned char tmp[RANDOM_MAX_DIGEST_SIZE];
+	unsigned long cpuflags;
+
+	deduct = (r->keysize < r->digestsize) ? r->keysize : r->digestsize;
+
+	/* Hold lock while accounting */
+	spin_lock_irqsave(&r->lock, cpuflags);
+
+	DEBUG_ENT("%04d : trying to extract %d bits\n",
+		random_state->entropy_count,
+		deduct * 8);
+
+	/*
+	 * Don't extract more data than the entropy in the pooling system
+	 */
+	if (flags & EXTRACT_ENTROPY_LIMIT && nbytes >= r->entropy_count / 8) {
+		nbytes = r->entropy_count / 8;
+	}
+
+	if (deduct*8 <= r->entropy_count) {
+		r->entropy_count -= deduct*8;
+	} else {
+		r->entropy_count = 0;
+	}
+
+	if (r->entropy_count < random_write_wakeup_thresh)
+		wake_up_interruptible(&random_write_wait);
+
+	DEBUG_ENT("%04d : debiting %d bits%s\n",
+		random_state->entropy_count,
+		deduct * 8,
+		flags & EXTRACT_ENTROPY_LIMIT ? "" : " (unlimited)");
+
+	r->reseed_count++;
+	r->pool0_len = 0;
+
+	/* Entropy accounting done, release lock.
*/ + spin_unlock_irqrestore(&r->lock, cpuflags); + + DEBUG_PRINTK("random_reseed count=%u\n", r->reseed_count); + + crypto_digest_init(r->reseedHash); + + sg[0].page = virt_to_page(r->key); + sg[0].offset = offset_in_page(r->key); + sg[0].length = r->keysize; + crypto_digest_update(r->reseedHash, sg, 1); + +#define TESTBIT(VAL, N)\ + ( ((VAL) >> (N)) & 1 ) + for (i=0; i<(1<<r->pool_number); i++) { + /* using pool[i] if r->reseed_count is divisible by 2^i + * since 2^0 == 1, we always use pool[0] + */ + if ( (i==0) || TESTBIT(r->reseed_count,i)==0 ) { + crypto_digest_final(r->pools[i], tmp); + + sg[0].page = virt_to_page(tmp); + sg[0].offset = offset_in_page(tmp); + sg[0].length = r->keysize; + crypto_digest_update(r->reseedHash, sg, 1); + + crypto_digest_init(r->pools[i]); + /* Each pool carries its past state forward */ + crypto_digest_update(r->pools[i], sg, 1); + } else { + /* pool j is only used once every 2^j times */ + break; + } + } +#undef TESTBIT + + crypto_digest_final(r->reseedHash, r->key); + crypto_cipher_setkey(r->cipher, r->key, r->keysize); + increment_iv(r->iv, r->blocksize); +} + + +/* + * This function extracts randomness from the "entropy pool", and + * returns it in a buffer. This function computes how many remaining + * bits of entropy are left in the pool, but it does not restrict the + * number of bytes that are actually obtained. If the EXTRACT_ENTROPY_USER + * flag is given, then the buf pointer is assumed to be in user space. + */ +static ssize_t extract_entropy(struct entropy_store *r, void * buf, + size_t nbytes, int flags) +{ + ssize_t ret, i, deduct; + __u32 tmp[RANDOM_MAX_BLOCK_SIZE]; + struct scatterlist sgiv[1], sgtmp[1]; + + /* Redundant, but just in case... */ + if (r->entropy_count > POOLBITS) + r->entropy_count = POOLBITS; + + /* + * The size of block you read from at a go is directly related to + * the number of Fortuna-reseeds you perform. And thus, the amount + * of entropy you draw from the pooling system. 
+ * + * Reading from /dev/urandom, you can specify any block size, + * the larger the less Fortuna-reseeds, the faster the output. + * + * Reading from /dev/random however, we limit this to the amount of + * entropy to deduct from our estimate. This estimate is most + * naturally updated from inside Fortuna-reseed, so we limit our block + * size here. + * + * At most, Fortuna will use e=min(r->digestsize, r->keysize) of + * entropy to reseed. + */ + deduct = (r->keysize < r->digestsize) ? r->keysize : r->digestsize; + if (flags & EXTRACT_ENTROPY_LIMIT && deduct < nbytes) { + nbytes = deduct; + } + + random_reseed(r, nbytes, flags); + + sgiv[0].page = virt_to_page(r->iv); + sgiv[0].offset = offset_in_page(r->iv); + sgiv[0].length = r->blocksize; + sgtmp[0].page = virt_to_page(tmp); + sgtmp[0].offset = offset_in_page(tmp); + sgtmp[0].length = r->blocksize; + + ret = 0; + while (nbytes) { + /* + * Check if we need to break out or reschedule.... + */ + if ((flags & EXTRACT_ENTROPY_USER) && need_resched()) { + if (signal_pending(current)) { + if (ret == 0) + ret = -ERESTARTSYS; + break; + } + + DEBUG_ENT("%04d : extract sleeping (%d bytes left)\n", + random_state->entropy_count, + nbytes); + + schedule(); + + /* + * when we wakeup, there will be more data in our + * pooling system so we will reseed + */ + random_reseed(r, nbytes, flags); + + DEBUG_ENT("%04d : extract woke up\n", + random_state->entropy_count); + } + + crypto_cipher_encrypt(r->cipher, sgtmp, sgiv, r->blocksize); + increment_iv(r->iv, r->blocksize); + + /* Copy data to destination buffer */ + i = (nbytes < r->blocksize) ? 
nbytes : r->blocksize; + if (flags & EXTRACT_ENTROPY_USER) { + i -= copy_to_user(buf, (__u8 const *)tmp, i); + if (!i) { + ret = -EFAULT; + break; + } + } else + memcpy(buf, (__u8 const *)tmp, i); + nbytes -= i; + buf += i; + ret += i; + } + + /* generate a new key */ + /* take into account the possibility that keysize >= blocksize */ + for (i=0; i+r->blocksize<=r->keysize; i+=r->blocksize) { + sgtmp[0].page = virt_to_page( r->key+i ); + sgtmp[0].offset = offset_in_page( r->key+i ); + sgtmp[0].length = r->blocksize; + crypto_cipher_encrypt(r->cipher, sgtmp, sgiv, 1); + increment_iv(r->iv, r->blocksize); + } + sgtmp[0].page = virt_to_page( r->key+i ); + sgtmp[0].offset = offset_in_page( r->key+i ); + sgtmp[0].length = r->blocksize-i; + crypto_cipher_encrypt(r->cipher, sgtmp, sgiv, 1); + increment_iv(r->iv, r->blocksize); + + if (crypto_cipher_setkey(r->cipher, r->key, r->keysize)) { + return -EINVAL; + } + + /* Wipe data just returned from memory */ + memset(tmp, 0, sizeof(tmp)); + + return ret; +} + +/* + * This function is the exported kernel interface. It returns some + * number of good random numbers, suitable for seeding TCP sequence + * numbers, etc. + */ +void get_random_bytes(void *buf, int nbytes) +{ + if (random_state) + extract_entropy(random_state, (char *) buf, nbytes, 0); + else + printk(KERN_NOTICE "get_random_bytes called before " + "random driver initialization\n"); +} + +EXPORT_SYMBOL(get_random_bytes); + +/********************************************************************* + * + * Functions to interface with Linux + * + *********************************************************************/ + +/* + * Initialize the random pool with standard stuff. + * This is not secure random data, but it can't hurt us and people scream + * when you try to remove it. + * + * NOTE: This is an OS-dependent function. 
+ */ +static void init_std_data(struct entropy_store *r) +{ + struct timeval tv; + __u32 words[2]; + char *p; + int i; + + do_gettimeofday(&tv); + words[0] = tv.tv_sec; + words[1] = tv.tv_usec; + add_entropy_words(r, words, 2); + + /* + * This doesn't lock system.utsname. However, we are generating + * entropy so a race with a name set here is fine. + */ + p = (char *) &system_utsname; + for (i = sizeof(system_utsname) / sizeof(words); i; i--) { + memcpy(words, p, sizeof(words)); + add_entropy_words(r, words, sizeof(words)/4); + p += sizeof(words); + } +} + +static int __init rand_initialize(void) +{ + int i; + + if (create_entropy_store(DEFAULT_POOL_NUMBER, &random_state)) + goto err; + if (batch_entropy_init(BATCH_ENTROPY_SIZE, random_state)) + goto err; + init_std_data(random_state); +#ifdef CONFIG_SYSCTL + sysctl_init_random(random_state); +#endif + for (i = 0; i < NR_IRQS; i++) + irq_timer_state[i] = NULL; + memset(&keyboard_timer_state, 0, sizeof(struct timer_rand_state)); + memset(&mouse_timer_state, 0, sizeof(struct timer_rand_state)); + memset(&extract_timer_state, 0, sizeof(struct timer_rand_state)); + extract_timer_state.dont_count_entropy = 1; + return 0; +err: + return -1; +} +module_init(rand_initialize); + +void rand_initialize_irq(int irq) +{ + struct timer_rand_state *state; + + if (irq >= NR_IRQS || irq_timer_state[irq]) + return; + + /* + * If kmalloc returns null, we just won't use that entropy + * source. + */ + state = kmalloc(sizeof(struct timer_rand_state), GFP_KERNEL); + if (state) { + memset(state, 0, sizeof(struct timer_rand_state)); + irq_timer_state[irq] = state; + } +} + +void rand_initialize_disk(struct gendisk *disk) +{ + struct timer_rand_state *state; + + /* + * If kmalloc returns null, we just won't use that entropy + * source. 
+ */ + state = kmalloc(sizeof(struct timer_rand_state), GFP_KERNEL); + if (state) { + memset(state, 0, sizeof(struct timer_rand_state)); + disk->random = state; + } +} + +static ssize_t +random_read(struct file * file, char __user * buf, size_t nbytes, loff_t *ppos) +{ + DECLARE_WAITQUEUE(wait, current); + ssize_t n, retval = 0, count = 0, + max_xfer_size; + + if (nbytes == 0) + return 0; + + /* + * only read out of extract_entropy() the minimum bits of pool + * matirial we can deduce from the output if we could attack our + * block cipher and message digest functions in Fortuna + */ + max_xfer_size = (random_state->digestsize < random_state->keysize) + ? random_state->keysize + : random_state->digestsize; + + while (nbytes > 0) { + n = nbytes; + if (n > max_xfer_size) + n = max_xfer_size; + + DEBUG_ENT("%04d : reading %d bits, p: %d s: %d\n", + random_state->entropy_count, + n*8, random_state->entropy_count, + random_state->entropy_count); + + n = extract_entropy(random_state, buf, n, + EXTRACT_ENTROPY_USER | + EXTRACT_ENTROPY_LIMIT); + + DEBUG_ENT("%04d : read got %d bits (%d needed, reseeds=%d)\n", + random_state->entropy_count, + random_state->reseed_count, + n*8, (nbytes-n)*8); + + if (n == 0) { + if (file->f_flags & O_NONBLOCK) { + retval = -EAGAIN; + break; + } + if (signal_pending(current)) { + retval = -ERESTARTSYS; + break; + } + + DEBUG_ENT("%04d : sleeping?\n", + random_state->entropy_count); + + set_current_state(TASK_INTERRUPTIBLE); + add_wait_queue(&random_read_wait, &wait); + + if (random_state->entropy_count / 8 == 0 + || random_state->reseed_count == 0) + schedule(); + + set_current_state(TASK_RUNNING); + remove_wait_queue(&random_read_wait, &wait); + + DEBUG_ENT("%04d : waking up\n", + random_state->entropy_count); + + continue; + } + + if (n < 0) { + retval = n; + break; + } + count += n; + buf += n; + nbytes -= n; + break; /* This break makes the device work */ + /* like a named pipe */ + } + + /* + * If we gave the user some bytes, update the 
access time. + */ + if (count) + file_accessed(file); + + return (count ? count : retval); +} + +static ssize_t +urandom_read(struct file * file, char __user * buf, + size_t nbytes, loff_t *ppos) +{ + /* Don't return anything untill we've reseeded at least once */ + if (random_state->reseed_count == 0) + return 0; + + return extract_entropy(random_state, buf, nbytes, + EXTRACT_ENTROPY_USER); +} + +static unsigned int +random_poll(struct file *file, poll_table * wait) +{ + unsigned int mask; + + poll_wait(file, &random_read_wait, wait); + poll_wait(file, &random_write_wait, wait); + mask = 0; + if (random_state->entropy_count >= random_read_wakeup_thresh) + mask |= POLLIN | POLLRDNORM; + if (random_state->entropy_count < random_write_wakeup_thresh) + mask |= POLLOUT | POLLWRNORM; + return mask; +} + +static ssize_t +random_write(struct file * file, const char __user * buffer, + size_t count, loff_t *ppos) +{ + int ret = 0; + size_t bytes; + __u32 buf[16]; + const char __user *p = buffer; + size_t c = count; + + while (c > 0) { + bytes = min(c, sizeof(buf)); + + bytes -= copy_from_user(&buf, p, bytes); + if (!bytes) { + ret = -EFAULT; + break; + } + c -= bytes; + p += bytes; + + add_entropy_words(random_state, buf, (bytes + 3) / 4); + } + if (p == buffer) { + return (ssize_t)ret; + } else { + file->f_dentry->d_inode->i_mtime = CURRENT_TIME; + mark_inode_dirty(file->f_dentry->d_inode); + return (ssize_t)(p - buffer); + } +} + +static int +random_ioctl(struct inode * inode, struct file * file, + unsigned int cmd, unsigned long arg) +{ + int size, ent_count; + int __user *p = (int __user *)arg; + int retval; + + switch (cmd) { + case RNDGETENTCNT: + ent_count = random_state->entropy_count; + if (put_user(ent_count, p)) + return -EFAULT; + return 0; + case RNDADDTOENTCNT: + if (!capable(CAP_SYS_ADMIN)) + return -EPERM; + if (get_user(ent_count, p)) + return -EFAULT; + credit_entropy_store(random_state, ent_count); + /* + * Wake up waiting processes if we have enough + * 
entropy. + */ + if (random_state->entropy_count >= random_read_wakeup_thresh + && random_state->reseed_count != 0) + wake_up_interruptible(&random_read_wait); + return 0; + case RNDGETPOOL: + /* can't do this anymore */ + return 0; + case RNDADDENTROPY: + if (!capable(CAP_SYS_ADMIN)) + return -EPERM; + if (get_user(ent_count, p++)) + return -EFAULT; + if (ent_count < 0) + return -EINVAL; + if (get_user(size, p++)) + return -EFAULT; + retval = random_write(file, (const char __user *) p, + size, &file->f_pos); + if (retval < 0) + return retval; + credit_entropy_store(random_state, ent_count); + /* + * Wake up waiting processes if we have enough + * entropy. + */ + if (random_state->entropy_count >= random_read_wakeup_thresh + && random_state->reseed_count != 0) + wake_up_interruptible(&random_read_wait); + return 0; + case RNDZAPENTCNT: + /* Can't do this anymore */ + return 0; + case RNDCLEARPOOL: + /* Can't do this anymore */ + return 0; + default: + return -EINVAL; + } +} + +struct file_operations random_fops = { + .read = random_read, + .write = random_write, + .poll = random_poll, + .ioctl = random_ioctl, +}; + +struct file_operations urandom_fops = { + .read = urandom_read, + .write = random_write, + .ioctl = random_ioctl, +}; + +/*************************************************************** + * Random UUID interface + * + * Used here for a Boot ID, but can be useful for other kernel + * drivers. 
+ ***************************************************************/ + +/* + * Generate random UUID + */ +void generate_random_uuid(unsigned char uuid_out[16]) +{ + get_random_bytes(uuid_out, 16); + /* Set UUID version to 4 --- truly random generation */ + uuid_out[6] = (uuid_out[6] & 0x0F) | 0x40; + /* Set the UUID variant to DCE */ + uuid_out[8] = (uuid_out[8] & 0x3F) | 0x80; +} + +EXPORT_SYMBOL(generate_random_uuid); + +/******************************************************************** + * + * Sysctl interface + * + ********************************************************************/ + +#ifdef CONFIG_SYSCTL + +#include <linux/sysctl.h> + +static int sysctl_poolsize; +static int min_read_thresh, max_read_thresh; +static int min_write_thresh, max_write_thresh; +static char sysctl_bootid[16]; + +static int proc_do_poolsize(ctl_table *table, int write, struct file *filp, + void __user *buffer, size_t *lenp, loff_t *ppos) +{ + int ret; + + sysctl_poolsize = POOLBITS; + + ret = proc_dointvec(table, write, filp, buffer, lenp, ppos); + if (ret || !write || + (sysctl_poolsize == POOLBITS)) + return ret; + + return ret; /* can't change the pool size in fortuna */ +} + +static int poolsize_strategy(ctl_table *table, int __user *name, int nlen, + void __user *oldval, size_t __user *oldlenp, + void __user *newval, size_t newlen, void **context) +{ + int len; + + sysctl_poolsize = POOLBITS; + + /* + * We only handle the write case, since the read case gets + * handled by the default handler (and we don't care if the + * write case happens twice; it's harmless). + */ + if (newval && newlen) { + len = newlen; + if (len > table->maxlen) + len = table->maxlen; + if (copy_from_user(table->data, newval, len)) + return -EFAULT; + } + + return 0; +} + +/* + * This function is used to return both the boot ID UUID and a random + * UUID. The difference is in whether table->data is NULL; if it is, + * then a new UUID is generated and returned to the user. 
+ * + * If the user accesses this via the proc interface, it will be returned + * as an ASCII string in the standard UUID format. If accessed via the + * sysctl system call, it is returned as 16 bytes of binary data. + */ +static int proc_do_uuid(ctl_table *table, int write, struct file *filp, + void __user *buffer, size_t *lenp, loff_t *ppos) +{ + ctl_table fake_table; + unsigned char buf[64], tmp_uuid[16], *uuid; + + uuid = table->data; + if (!uuid) { + uuid = tmp_uuid; + uuid[8] = 0; + } + if (uuid[8] == 0) + generate_random_uuid(uuid); + + sprintf(buf, "%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x%02x-" + "%02x%02x%02x%02x%02x%02x", + uuid[0], uuid[1], uuid[2], uuid[3], + uuid[4], uuid[5], uuid[6], uuid[7], + uuid[8], uuid[9], uuid[10], uuid[11], + uuid[12], uuid[13], uuid[14], uuid[15]); + fake_table.data = buf; + fake_table.maxlen = sizeof(buf); + + return proc_dostring(&fake_table, write, filp, buffer, lenp, ppos); +} + +static int uuid_strategy(ctl_table *table, int __user *name, int nlen, + void __user *oldval, size_t __user *oldlenp, + void __user *newval, size_t newlen, void **context) +{ + unsigned char tmp_uuid[16], *uuid; + unsigned int len; + + if (!oldval || !oldlenp) + return 1; + + uuid = table->data; + if (!uuid) { + uuid = tmp_uuid; + uuid[8] = 0; + } + if (uuid[8] == 0) + generate_random_uuid(uuid); + + if (get_user(len, oldlenp)) + return -EFAULT; + if (len) { + if (len > 16) + len = 16; + if (copy_to_user(oldval, uuid, len) || + put_user(len, oldlenp)) + return -EFAULT; + } + return 1; +} + +ctl_table random_table[] = { + { + .ctl_name = RANDOM_POOLSIZE, + .procname = "poolsize", + .data = &sysctl_poolsize, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = &proc_do_poolsize, + .strategy = &poolsize_strategy, + }, + { + .ctl_name = RANDOM_ENTROPY_COUNT, + .procname = "entropy_avail", + .maxlen = sizeof(int), + .mode = 0444, + .proc_handler = &proc_dointvec, + }, + { + .ctl_name = RANDOM_READ_THRESH, + .procname = "read_wakeup_threshold", + 
.data = &random_read_wakeup_thresh, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = &proc_dointvec_minmax, + .strategy = &sysctl_intvec, + .extra1 = &min_read_thresh, + .extra2 = &max_read_thresh, + }, + { + .ctl_name = RANDOM_WRITE_THRESH, + .procname = "write_wakeup_threshold", + .data = &random_write_wakeup_thresh, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = &proc_dointvec_minmax, + .strategy = &sysctl_intvec, + .extra1 = &min_write_thresh, + .extra2 = &max_write_thresh, + }, + { + .ctl_name = RANDOM_BOOT_ID, + .procname = "boot_id", + .data = &sysctl_bootid, + .maxlen = 16, + .mode = 0444, + .proc_handler = &proc_do_uuid, + .strategy = &uuid_strategy, + }, + { + .ctl_name = RANDOM_UUID, + .procname = "uuid", + .maxlen = 16, + .mode = 0444, + .proc_handler = &proc_do_uuid, + .strategy = &uuid_strategy, + }, + { + .ctl_name = RANDOM_DIGEST_ALGO, + .procname = "digest_algo", + .maxlen = 16, + .mode = 0444, + .proc_handler = &proc_dostring, + }, + { + .ctl_name = RANDOM_CIPHER_ALGO, + .procname = "cipher_algo", + .maxlen = 16, + .mode = 0444, + .proc_handler = &proc_dostring, + }, + { .ctl_name = 0 } +}; + +static void sysctl_init_random(struct entropy_store *random_state) +{ + int i; + + /* If the sys-admin doesn't want people to know how fast + * random events are happening, he can set the read-threshold + * down to zero so /dev/random never blocks. Default is to block. + * This is for the paranoid loonies who think frequency analysis + * would lead to something. 
+ */ + min_read_thresh = 0; + min_write_thresh = 0; + max_read_thresh = max_write_thresh = POOLBITS; + for (i=0; random_table[i].ctl_name!=0; i++) { + switch (random_table[i].ctl_name) { + case RANDOM_ENTROPY_COUNT: + random_table[i].data = &random_state->entropy_count; + break; + + case RANDOM_DIGEST_ALGO: + random_table[i].data = (void*)random_state->digestAlgo; + break; + + case RANDOM_CIPHER_ALGO: + random_table[i].data = (void*)random_state->cipherAlgo; + break; + + default: + break; + } + } +} +#endif /* CONFIG_SYSCTL */ + +/******************************************************************** + * + * Random functions for networking + * + ********************************************************************/ + +/* + * TCP initial sequence number picking. This uses the random number + * generator to pick an initial secret value. This value is encrypted + * with the TCP endpoint information to provide a unique starting point + * for each pair of TCP endpoints. This defeats attacks which rely on + * guessing the initial TCP sequence number. This algorithm was + * suggested by Steve Bellovin, modified by Jean-Luc Cooke. + * + * Using a very strong hash was taking an appreciable amount of the total + * TCP connection establishment time, so this is a weaker hash, + * compensated for by changing the secret periodically. This was changed + * again by Jean-Luc Cooke to use AES256-CBC encryption which is faster + * still (see `/usr/bin/openssl speed md4 sha1 aes`) + */ + +/* This should not be decreased so low that ISNs wrap too fast. 
*/ +#define REKEY_INTERVAL 300 +/* + * Bit layout of the tcp sequence numbers (before adding current time): + * bit 24-31: increased after every key exchange + * bit 0-23: hash(source,dest) + * + * The implementation is similar to the algorithm described + * in the Appendix of RFC 1185, except that + * - it uses a 1 MHz clock instead of a 250 kHz clock + * - it performs a rekey every 5 minutes, which is equivalent + * to a (source,dest) tuple dependent forward jump of the + * clock by 0..2^(HASH_BITS+1) + * + * Thus the average ISN wraparound time is 68 minutes instead of + * 4.55 hours. + * + * SMP cleanup and lock avoidance with poor man's RCU. + * Manfred Spraul <manfred@colorfullife.com> + * + */ +#define COUNT_BITS 8 +#define COUNT_MASK ( (1<<COUNT_BITS)-1) +#define HASH_BITS 24 +#define HASH_MASK ( (1<<HASH_BITS)-1 ) + +static spinlock_t ip_lock = SPIN_LOCK_UNLOCKED; +static unsigned int ip_cnt, network_count; + +static void __check_and_rekey(time_t time) +{ + u8 tmp[RANDOM_MAX_KEY_SIZE]; + spin_lock_bh(&ip_lock); + + get_random_bytes(tmp, random_state->keysize); + crypto_cipher_setkey(random_state->networkCipher, + (const u8*)tmp, + random_state->keysize); + random_state->networkCipher_ready = 1; + network_count = (ip_cnt & COUNT_MASK) << HASH_BITS; + mb(); + ip_cnt++; + + spin_unlock_bh(&ip_lock); + return; +} + +static inline void check_and_rekey(time_t time) +{ + static time_t rekey_time=0; + + rmb(); + if (!rekey_time || (time - rekey_time) > REKEY_INTERVAL) { + __check_and_rekey(time); + rekey_time = time; + } + + return; +} + +#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE) +__u32 secure_tcpv6_sequence_number(__u32 *saddr, __u32 *daddr, + __u16 sport, __u16 dport) +{ + struct timeval tv; + __u32 seq; + u8 tmp[RANDOM_MAX_BLOCK_SIZE]; + struct scatterlist sgtmp[1]; + + /* + * The procedure is the same as for IPv4, but addresses are longer. + * Thus we must use two AES operations. + */ + + do_gettimeofday(&tv); /* We need the usecs below... 
*/ + check_and_rekey(tv.tv_sec); + + sgtmp[0].page = virt_to_page(tmp); + sgtmp[0].offset = offset_in_page(tmp); + sgtmp[0].length = random_state->blocksize; + + /* + * AES256 is 2.5 times faster than MD4 by openssl tests. + * We can afford to encrypt 2 blocks in CBC with + * an IV={(sport)<<16 | dport, 0, 0, 0} + * + * seq = ct[0], ct = Enc-CBC(Key, {ports}, {daddr, saddr}); + * = Enc(Key, saddr xor Enc(Key, daddr)) + */ + + /* PT0 = daddr */ + memcpy(tmp, daddr, random_state->blocksize); + /* IV = {ports,0,0,0} */ + tmp[0] ^= (sport<<16) | dport; + crypto_cipher_encrypt(random_state->networkCipher, sgtmp, sgtmp, 1); + /* PT1 = saddr */ + random_state->networkCipher->crt_cipher.cit_xor_block(tmp, (const u8*)saddr); + crypto_cipher_encrypt(random_state->networkCipher, sgtmp, sgtmp, 1); + + seq = tmp[0]; + seq += network_count; + seq += tv.tv_usec + tv.tv_sec*1000000; + + return seq; +} +EXPORT_SYMBOL(secure_tcpv6_sequence_number); + +__u32 secure_ipv6_id(__u32 *daddr) +{ + u8 tmp[RANDOM_MAX_BLOCK_SIZE]; + struct scatterlist sgtmp[1]; + + check_and_rekey(get_seconds()); + + memcpy(tmp, daddr, random_state->blocksize); + sgtmp[0].page = virt_to_page(tmp); + sgtmp[0].offset = offset_in_page(tmp); + sgtmp[0].length = random_state->blocksize; + + /* id = tmp[0], tmp = Enc(Key, daddr); */ + crypto_cipher_encrypt(random_state->networkCipher, sgtmp, sgtmp, 1); + + return tmp[0]; +} + +EXPORT_SYMBOL(secure_ipv6_id); +#endif + + +__u32 secure_tcp_sequence_number(__u32 saddr, __u32 daddr, + __u16 sport, __u16 dport) +{ + struct timeval tv; + __u32 seq; + __u32 tmp[4]; + struct scatterlist sgtmp[1]; + + /* + * Pick a random secret every REKEY_INTERVAL seconds. + */ + do_gettimeofday(&tv); /* We need the usecs below... */ + check_and_rekey(tv.tv_sec); + + /* + * Pick a unique starting offset for each set of TCP connection + * endpoints (saddr, daddr, sport, dport). 
+ * Note that the words are placed into the starting vector, which is + * then mixed with a partial MD4 over random data. + */ + /* + * AES256 is 2.5 times faster than MD4 by openssl tests. + * We can afford to encrypt 1 block + * + * seq = ct[0], ct = Enc(Key, {(sport<<16)|dport, daddr, saddr, 0}) + */ + + tmp[0] = (sport<<16) | dport; + tmp[1] = daddr; + tmp[2] = saddr; + tmp[3] = 0; + sgtmp[0].page = virt_to_page(tmp); + sgtmp[0].offset = offset_in_page(tmp); + sgtmp[0].length = random_state->blocksize; + crypto_cipher_encrypt(random_state->networkCipher, sgtmp, sgtmp, 1); + + seq = tmp[0]; + seq += network_count; + /* + * As close as possible to RFC 793, which + * suggests using a 250 kHz clock. + * Further reading shows this assumes 2 Mb/s networks. + * For 10 Mb/s Ethernet, a 1 MHz clock is appropriate. + * That's funny, Linux has one built in! Use it! + * (Networks are faster now - should this be increased?) + */ + seq += tv.tv_usec + tv.tv_sec*1000000; + +#if 0 + printk("init_seq(%lx, %lx, %d, %d) = %d\n", + saddr, daddr, sport, dport, seq); +#endif + return seq; +} + +EXPORT_SYMBOL(secure_tcp_sequence_number); + +/* The code below is shamelessly stolen from secure_tcp_sequence_number(). + * All blames to Andrey V. Savochkin <saw@msu.ru>. + * Changed by Jean-Luc Cooke <jlcooke@certainkey.com> to use AES & C.A.P.I. + */ +__u32 secure_ip_id(__u32 daddr) +{ + struct scatterlist sgtmp[1]; + __u32 tmp[4]; + + check_and_rekey(get_seconds()); + + /* + * Pick a unique starting offset for each IP destination. + * id = ct[0], ct = Enc(Key, {daddr,0,0,0}); + */ + tmp[0] = daddr; + tmp[1] = 0; + tmp[2] = 0; + tmp[3] = 0; + sgtmp[0].page = virt_to_page(tmp); + sgtmp[0].offset = offset_in_page(tmp); + sgtmp[0].length = random_state->blocksize; + + crypto_cipher_encrypt(random_state->networkCipher, sgtmp, sgtmp, 1); + + return tmp[0]; +} + +#ifdef CONFIG_SYN_COOKIES +/* + * Secure SYN cookie computation. This is the algorithm worked out by + * Dan Bernstein and Eric Schenk. + * + * For Linux I implement the 1 minute counter by looking at the jiffies clock. 
+ * The count is passed in as a parameter, so this code doesn't much care. + * + * SYN cookie (and seq# & id#) Changed in 2004 by Jean-Luc Cooke + * <jlcooke@certainkey.com> to use the C.A.P.I. and AES256. + */ + +#define COOKIEBITS 24 /* Upper bits store count */ +#define COOKIEMASK (((__u32)1 << COOKIEBITS) - 1) + +__u32 secure_tcp_syn_cookie(__u32 saddr, __u32 daddr, __u16 sport, + __u16 dport, __u32 sseq, __u32 count, __u32 data) +{ + struct scatterlist sg[1]; + __u32 tmp[4]; + + /* + * Compute the secure sequence number. + * + * Output is the 32bit tag of a CBC-MAC of + * PT={count,0,0,0} with IV={saddr,daddr,sport|dport,sseq} + * cookie = {<8bit count>, + * truncate_24bit( + * Encrypt(Sec, {saddr,daddr,sport|dport,sseq}) + * ) + * } + * + * DJB wrote (http://cr.yp.to/syncookies/archive) about how to do this + * with hash algorithms. + * - we can replace two SHA1s used in the previous kernel with 1 AES + * and make things 5x faster + * - I'd like to propose we replace the use of two whitenings with a + * single operation since we were only using addition modulo 2^32 of + * all these values anyways. Not to mention the hashes differ only in + * that the second processes more data... why not drop the first hash? + * We did learn that addition is commutative and associative long ago. + * - by replacing two SHA1s and addition modulo 2^32 with encryption of + * a 32bit value using CAPI we've made it 1,000,000,000 times easier + * to understand what is going on. 
+ */ + + tmp[0] = saddr; + tmp[1] = daddr; + tmp[2] = (sport << 16) + dport; + tmp[3] = sseq; + + sg[0].page = virt_to_page(tmp); + sg[0].offset = offset_in_page(tmp); + sg[0].length = random_state->blocksize; + if (!random_state->networkCipher_ready) { + check_and_rekey(get_seconds()); + } + /* tmp[]/sg[0] = Enc(Sec, {saddr,daddr,sport|dport,sseq}) */ + crypto_cipher_encrypt(random_state->networkCipher, sg, sg, 1); + + /* cookie = CTR encrypt of 8-bit-count and 24-bit-data */ + return tmp[0] ^ ( (count << COOKIEBITS) | (data & COOKIEMASK) ); +} + +/* + * This retrieves the small "data" value from the syncookie. + * If the syncookie is bad, the data returned will be out of + * range. This must be checked by the caller. + * + * The count value used to generate the cookie must be within + * "maxdiff" of the current (passed-in) "count". The return value + * is (__u32)-1 if this test fails. + */ +__u32 check_tcp_syn_cookie(__u32 cookie, __u32 saddr, __u32 daddr, __u16 sport, + __u16 dport, __u32 sseq, __u32 count, __u32 maxdiff) +{ + struct scatterlist sg[1]; + __u32 tmp[4], thiscount, diff; + + if (random_state == NULL || !random_state->networkCipher_ready) + return (__u32)-1; /* Well, duh! 
*/ + + tmp[0] = saddr; + tmp[1] = daddr; + tmp[2] = (sport << 16) + dport; + tmp[3] = sseq; + sg[0].page = virt_to_page(tmp); + sg[0].offset = offset_in_page(tmp); + sg[0].length = random_state->blocksize; + crypto_cipher_encrypt(random_state->networkCipher, sg, sg, 1); + + /* CTR decrypt the cookie */ + cookie ^= tmp[0]; + + /* top 8 bits are 'count' */ + thiscount = cookie >> COOKIEBITS; + + diff = count - thiscount; + if (diff >= maxdiff) + return (__u32)-1; + + /* bottom 24 bits are 'data' */ + return cookie & COOKIEMASK; +} +#endif diff -X exclude -Nur linux-2.6.8.1/drivers/char/random.c linux-2.6.8.1-rand2/drivers/char/random.c --- linux-2.6.8.1/drivers/char/random.c 2004-09-27 16:04:53.000000000 -0400 +++ linux-2.6.8.1-rand2/drivers/char/random.c 2004-09-28 23:25:46.000000000 -0400 @@ -261,6 +261,17 @@ #include <asm/io.h> /* + * In September 2004, Jean-Luc Cooke wrote a Fortuna RNG for Linux + * which was non-blocking and used the Cryptographic API. + * We use it now if the user wishes. + */ +#ifdef CONFIG_CRYPTO_RANDOM_FORTUNA + #warning using the Fortuna PRNG for /dev/random + #include "../crypto/random-fortuna.c" +#else /* CONFIG_CRYPTO_RANDOM_FORTUNA */ + #warning using the Linux Legacy PRNG for /dev/random + +/* * Configuration information */ #define DEFAULT_POOL_SIZE 512 @@ -2483,3 +2494,5 @@ return (cookie - tmp[17]) & COOKIEMASK; /* Leaving the data behind */ } #endif + +#endif /* CONFIG_CRYPTO_RANDOM_FORTUNA */ diff -X exclude -Nur linux-2.6.8.1/include/linux/sysctl.h linux-2.6.8.1-rand2/include/linux/sysctl.h --- linux-2.6.8.1/include/linux/sysctl.h 2004-08-14 06:55:33.000000000 -0400 +++ linux-2.6.8.1-rand2/include/linux/sysctl.h 2004-09-29 10:45:20.592695040 -0400 @@ -198,7 +198,9 @@ RANDOM_READ_THRESH=3, RANDOM_WRITE_THRESH=4, RANDOM_BOOT_ID=5, - RANDOM_UUID=6 + RANDOM_UUID=6, + RANDOM_DIGEST_ALGO=7, + RANDOM_CIPHER_ALGO=8 }; /* /proc/sys/kernel/pty */ ^ permalink raw reply [flat|nested] 28+ messages in thread
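The cookie framing used by secure_tcp_syn_cookie()/check_tcp_syn_cookie() above (an 8-bit counter in the top bits, 24 bits of data below, XORed with one cipher block over the connection 4-tuple) round-trips mechanically. A user-space sketch of just that framing, with an invented `toy_prf` standing in for the kernel's AES call (it has none of AES's security and exists only so the example is self-contained):

```c
#include <stdint.h>

#define COOKIEBITS 24
#define COOKIEMASK (((uint32_t)1 << COOKIEBITS) - 1)

/* Stand-in for the first word of Enc(Key, {saddr,daddr,ports,sseq}).
 * NOT a real PRF; the patch uses AES via the crypto API. */
static uint32_t toy_prf(uint32_t saddr, uint32_t daddr,
                        uint32_t ports, uint32_t sseq)
{
    uint32_t x = saddr ^ (daddr * 2654435761u) ^ ports ^ (sseq << 7);
    return x * 2246822519u;
}

static uint32_t make_cookie(uint32_t saddr, uint32_t daddr, uint32_t ports,
                            uint32_t sseq, uint32_t count, uint32_t data)
{
    /* 8-bit count in the top bits, 24-bit data below, masked by the PRF */
    return toy_prf(saddr, daddr, ports, sseq)
         ^ ((count << COOKIEBITS) | (data & COOKIEMASK));
}

/* Returns data, or (uint32_t)-1 when the embedded count is stale. */
static uint32_t check_cookie(uint32_t cookie, uint32_t saddr, uint32_t daddr,
                             uint32_t ports, uint32_t sseq,
                             uint32_t count, uint32_t maxdiff)
{
    cookie ^= toy_prf(saddr, daddr, ports, sseq);   /* strip the mask */
    if (count - (cookie >> COOKIEBITS) >= maxdiff)  /* unsigned wrap-safe */
        return (uint32_t)-1;
    return cookie & COOKIEMASK;
}
```

A cookie made with count=5 still verifies at count=6 when maxdiff=2, but fails at count=9, which is the "out of range" behavior the comment above describes.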
* Re: [PROPOSAL/PATCH 2] Fortuna PRNG in /dev/random 2004-09-29 17:10 ` [PROPOSAL/PATCH 2] " Jean-Luc Cooke @ 2004-09-29 19:31 ` Theodore Ts'o 2004-09-29 20:27 ` Jean-Luc Cooke 0 siblings, 1 reply; 28+ messages in thread From: Theodore Ts'o @ 2004-09-29 19:31 UTC (permalink / raw) To: Jean-Luc Cooke; +Cc: linux, linux-kernel, cryptoapi While addition of the entropy estimator helps protect the Fortuna Random number generator against a state extension attack, /dev/urandom is using the same entropy extraction routine as /dev/random, and so Fortuna is still vulnerable to state extension attacks. This is because a key aspect of the Fortuna design has been ignored in JLC's implementation. The missing piece is to assure that a rekey can only take place when there has been sufficient entropy built up in the higher order pools in order to assure a catastrophic rekey. Otherwise, the attacker can simply brute force a wide variety of entropy inputs from the hardware, and see if any of them matches output from the /dev/urandom (from which the attacker is continuously pulling output). So in the original design, the rekey from a higher order pool only takes place after k*2^n seconds, where n is the order of the pool, and k is some constant. The idea is that after some period of time hopefully one of the pools has built up at least 128 bits or so worth of entropy, and so the catastrophic reseeding will prevent an attacker from trying all possible inputs and determining the state of the pool. (Niels recommends that k be at least a tenth of a second; see pages 38-40 of http://th.informatik.uni-mannheim.de/people/lucks/papers/Ferguson/Fortuna.pdf). Unfortunately, Fortuna will call random_reseed() after every single read from /dev/urandom. This is not time-limited at all, so as long as the attacker can read /dev/urandom fast enough, it can continue to monitor the various higher-level pools. 
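The pool schedule described above can be written out as arithmetic. This is a hypothetical user-space illustration of Fortuna's published reseed rule (pool i contributes to every 2^i-th reseed), not code from the patch:

```c
/* Fortuna's pool schedule, as published: reseed number r (counting
 * from 1) draws on pool i exactly when 2^i divides r, so pool i is
 * emptied on every 2^i-th reseed. */
static int pool_used_in_reseed(unsigned int pool, unsigned int reseed_no)
{
    return (reseed_no % (1u << pool)) == 0;
}

/* With a minimum of k time units between reseeds, pool n can
 * therefore be drawn on at most once every k * 2^n time units,
 * which is where the "k*2^n seconds" figure above comes from. */
static unsigned long min_interval_between_uses(unsigned int pool,
                                               unsigned long k)
{
    return k * (1ul << pool);
}
```

So even a very fast reader can only observe pool 0 being consumed on every reseed; pool 5, say, is consumed only on reseeds 32, 64, 96, and so on.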
This can be fixed easily by simply changing the rekey function so that it only attempts a reseed after some period of time has gone by. There is of course the question of whether a state extension attack is realistic. After all, most attacks where the attacker has sufficient privileges to obtain the complete state of the RNG is also one where the attacker also has enough privileges to install a rootkit, or compromise the kernel by loading a hostile loadable kernel module, etc. Also, there is the question about whether an attacker could read sufficient amounts of output to keep track of the contents of the pool, and whether the attacker can either do the brute-forcing on the local machine, or send the large amounts of information read from /dev/urandom to an outside machine, without using enough CPU time that it would be noticed by a system administrator ---- but then again, the Crypto academics that are worried about things like state extension attacks aren't worried about practical niceties. But then again, if we decide that state extension attacks aren't practically possible, or otherwise not worthy of concern, or if JLC's Fortuna implementation is vulnerable to state extension attacks, there's no reason to use JLC's implementation in the first place. - Ted ^ permalink raw reply [flat|nested] 28+ messages in thread
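The time-limiting fix suggested at the top of that message amounts to a guard around the reseed path. A user-space sketch with hypothetical names and an abstract time unit (the real patch would compare jiffies and use roughly a 0.1 second gap):

```c
/* Refuse to reseed unless at least MIN_RESEED_GAP time units have
 * elapsed since the previous reseed, no matter how fast /dev/urandom
 * is being read. Hypothetical names; not the actual patch. */
#define MIN_RESEED_GAP 1        /* ~0.1 s in jiffies in a real kernel */

static unsigned long last_reseed_time;  /* 0 = never reseeded */

static int try_reseed(unsigned long now)
{
    if (last_reseed_time && now - last_reseed_time < MIN_RESEED_GAP)
        return 0;               /* too soon: keep the current key */
    last_reseed_time = now;
    /* ...select pools and mix them into the generator key here... */
    return 1;
}
```

Back-to-back reads in the same time unit then share one generator key instead of triggering one reseed per read, which is what denies the attacker a per-read view of the higher pools.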
* Re: [PROPOSAL/PATCH 2] Fortuna PRNG in /dev/random 2004-09-29 19:31 ` Theodore Ts'o @ 2004-09-29 20:27 ` Jean-Luc Cooke 2004-09-29 21:40 ` Theodore Ts'o 2004-09-29 21:53 ` Theodore Ts'o 0 siblings, 2 replies; 28+ messages in thread From: Jean-Luc Cooke @ 2004-09-29 20:27 UTC (permalink / raw) To: Theodore Ts'o, linux, linux-kernel, cryptoapi Why would we want to miss that when so much effort was made to meet the requirements of the traditional /dev/random? So... Here's patch v2.1.2 that waits at least 0.1 sec before reseeding for non-blocking reads to alleviate Ted's concern wrt waiting for reseeds. When reading nbytes from /dev/{u}random, Legacy /dev/random would: - Mix nbytes of data from primary pool into secondary pool - Then generate nbytes from secondary pool When reading nbytes from /dev/{u}random, Fortuna-patch /dev/random would: - Mix ??? of data from input pools into the AES key for output generation - Then generate nbytes from AES256-CTR Perhaps I miss the subtlety of the difference in terms of security. If nbytes >= size of both pools - wouldn't Legacy also be vulnerable to the same attack? JLC On Wed, Sep 29, 2004 at 03:31:17PM -0400, Theodore Ts'o wrote: > While addition of the entropy estimator helps protect the Fortuna > Random number generator against a state extension attack, /dev/urandom > is using the same entropy extraction routine as /dev/random, and so > Fortuna is still vulernable to state extension attacks. This is > because a key aspect of the Fortuna design has been ignored in JLC's > implementation. ^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PROPOSAL/PATCH 2] Fortuna PRNG in /dev/random 2004-09-29 20:27 ` Jean-Luc Cooke @ 2004-09-29 21:40 ` Theodore Ts'o 0 siblings, 0 replies; 28+ messages in thread From: Theodore Ts'o @ 2004-09-29 21:40 UTC (permalink / raw) To: Jean-Luc Cooke; +Cc: linux, linux-kernel, cryptoapi On Wed, Sep 29, 2004 at 04:27:07PM -0400, Jean-Luc Cooke wrote: > > When reading nbytes from /dev/{u}random, Legacy /dev/random would: > - Mix nbytes of data from primary pool into secondary pool > - Then generate nbytes from secondary pool > > When reading nbytes from /dev/{u}random, Fortuna-patch /dev/random would: > - Mix ??? of data from input pools into the AES key for output generation > - Then generate nbytes from AES256-CTR > > Perhaps I miss the subtlety of the difference in terms of security. If > nbytes >= size of both pools - wouldn't Legacy also be vulnerable to the > same attack? Sure, but Fortuna is supposed to be "more secure" because it resists the state extension attack. I don't think the state extension attack is at all realistic, for the reasons cited above. But if your implementation doesn't resist the state extension attack, then why bloat the kernel with an alternate random algorithm that's no better as far as security is concerned? (And is more heavy weight, and is more wasteful with its entropy, etc., etc.?) - Ted P.S. I'll also note, by the way, that in more recent versions of /dev/random, we use a separate pool for /dev/urandom and /dev/random. A further enhancement which I'm thinking might be a good one to add is to limit the rate at which we transfer randomness from the primary pool to the urandom pool. So it's not that I'm against making changes; it's just that I want the changes to make sense, and protect against realistic threats, not imaginary ones. ^ permalink raw reply [flat|nested] 28+ messages in thread
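The rate cap floated in that P.S. could look something like the following user-space sketch. The budget value, window granularity, and names are assumptions for illustration, not anything from random.c:

```c
/* Hypothetical cap on entropy moved from the primary pool to the
 * urandom pool: at most XFER_BUDGET_BITS per time window, with no
 * credit carried between windows. */
#define XFER_BUDGET_BITS 1024u  /* assumed cap per window */

static unsigned long window_start;          /* current window's tick */
static unsigned int bits_moved_this_window;

/* Returns how many of want_bits may actually be transferred now. */
static unsigned int xfer_allowed(unsigned long now, unsigned int want_bits)
{
    if (now != window_start) {              /* new window: reset budget */
        window_start = now;
        bits_moved_this_window = 0;
    }
    if (bits_moved_this_window + want_bits > XFER_BUDGET_BITS)
        want_bits = XFER_BUDGET_BITS - bits_moved_this_window;
    bits_moved_this_window += want_bits;
    return want_bits;
}
```

The effect is the same as the reseed-gap guard, seen from the other side: a reader draining /dev/urandom can no longer drain the primary pool faster than the cap allows.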
* Re: [PROPOSAL/PATCH 2] Fortuna PRNG in /dev/random 2004-09-29 20:27 ` Jean-Luc Cooke 2004-09-29 21:40 ` Theodore Ts'o @ 2004-09-29 21:53 ` Theodore Ts'o 2004-09-29 23:24 ` Jean-Luc Cooke 2004-09-30 0:21 ` Jean-Luc Cooke 1 sibling, 2 replies; 28+ messages in thread From: Theodore Ts'o @ 2004-09-29 21:53 UTC (permalink / raw) To: Jean-Luc Cooke; +Cc: linux, linux-kernel, cryptoapi On Wed, Sep 29, 2004 at 04:27:07PM -0400, Jean-Luc Cooke wrote: > > Here's patch v2.1.2 that waits at least 0.1 sec before reseeding for > non-blocking reads to alleviate Ted's concern wrt waiting for reseeds. You didn't include the patch, and in any case, you'll probably want to do it for both blocking as well as non-blocking reads. And keep in mind, it's not *my* concerns, but it's Niels Ferguson and Bruce Schneier's concerns. After all, if you're going to call it Fortuna, you might as well be faithful to their design, especially since if you don't, you're leaving it to be utterly vulnerable to this state extension attack they are so worried about. - Ted ^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PROPOSAL/PATCH 2] Fortuna PRNG in /dev/random 2004-09-29 21:53 ` Theodore Ts'o @ 2004-09-29 23:24 ` Jean-Luc Cooke 0 siblings, 0 replies; 28+ messages in thread From: Jean-Luc Cooke @ 2004-09-29 23:24 UTC (permalink / raw) To: Theodore Ts'o, linux, linux-kernel, cryptoapi Oops! my bad. Attached here along with a few other changes. I'm running a change log now at the top of the random-fortuna.c file. Being 100% faithful to their designs would require such constructions as each event source would hold its own pool index pointer; right now event collection is separated from event mixing for API and performance reasons. Trying to find solace and more articulate advice, I've sat down with Practical Cryptography again and reviewed the state compromise-extension attack recovery section. If copyright laws smile on me, I'll quote section 10.5.2 starting at chapter 6 on page 170 of the soft-cover print. -- start quote -- The speed at which the system recovers from a compromised state depends on the rate at which entropy (with respect to the attacker) flows into the pools. If we assume that this is a fixed rate R, then after t seconds we have in total R*t bits of entropy. Each pool receives about R*t/32 bits in this time period. The attacker can no longer keep track of the state if the generator is reseeded with a pool with more than 128 bits of entropy in it. There are two cases. If pool P_0 collects 128 bits of entropy before the next reseed operation, then we have recovered from the compromise. How fast this happens depends on how large we let P_0 grow before we reseed. The second case is when P_0 is reseeding too fast, due to random events unknown to (or generated by) the attacker. Let t be the time between reseeds. Then pool P_i collects 2^i*R*t/32 bits of entropy between reseeds, and is used in a reseed every 2^i*t seconds. The recovery from the compromise happens the first time we reseed with pool P_i where 128 <= 2^i*R*t/32 < 256. 
(The upper bound derives from the fact that otherwise pool P_{i-1} would contain 128 bits of entropy between reseeds.) This inequality gives us 2^i * R * t / 32 < 256 and thus 2^i * t < 8192 / R. In other words, the time between recovery points (2^i*t) is bounded by the time it takes to collect 2^13 bits of entropy (8192 / R). The number 2^13 seems a bit large, but it can be explained in the following way. We need at least 2^7 bits to recover from a compromise. We might be unlucky if the system reseeds just before we have collected 2^7 bits in a particular pool, and then we have to use the next pool, which collects close to 2^8 bits before the reseed. Finally, we divide our data over 32 pools, which accounts for another factor of 2^5. This is a very good result. The solution is within a factor of 64 of an ideal solution (it needs at most 64 times as much randomness as an ideal solution would need). This is a constant factor and it ensures that we can never do terribly badly, and will always recover eventually. Furthermore, we do not need to know how much entropy our events have or how much the attacker knows. That is the real advantage Fortuna has over Yarrow. The impossible-to-construct entropy estimators are gone for good. Everything is fully automatic; if there is a good flow of random data, the PRNG will recover quickly. If there is only a trickle of random data, it takes a long time to recover. -- end quote -- Hopefully the above quote from the book will be interpreted as free advertising and not theft. On Wed, Sep 29, 2004 at 05:53:15PM -0400, Theodore Ts'o wrote: > On Wed, Sep 29, 2004 at 04:27:07PM -0400, Jean-Luc Cooke wrote: > > > > Here's patch v2.1.2 that waits at least 0.1 sec before reseeding for > > non-blocking reads to alleviate Ted's concern wrt waiting for reseeds. > > You didn't include the patch, and in any case, you'll probably want to > probably want to do it for both blocking as well as non-blocking > reads. 
And keep in mind, it's not *my* concerns, but it's Neil > Ferguson and Bruce Schneier's concerns. After all, if you're going to > call it Fortuna, you might as well be faithful to their design, > especially since if you don't, you're leaving it to be utterly > vulnerable to this state extension attack they are so worried about. > > - Ted ^ permalink raw reply [flat|nested] 28+ messages in thread
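The arithmetic in the passage quoted from Practical Cryptography can be checked directly. A sketch (hypothetical helper, not from the patch) that finds the first pool whose per-use entropy 2^i*R*t/32 reaches 128 bits, given a rate R in bits per second and a reseed interval t in seconds:

```c
/* First pool index i with (2^i * R * t) / 32 >= 128 bits between its
 * uses, i.e. the pool that forces recovery from a state compromise.
 * R is the entropy rate in bits/sec, t the reseed interval in sec.
 * Hypothetical illustration of the quoted analysis, 32 pools. */
static int first_recovering_pool(unsigned long R, unsigned long t)
{
    int i;
    for (i = 0; i < 32; i++)
        if (((1ul << i) * R * t) / 32 >= 128)
            return i;
    return -1;  /* rate too low for any pool to reach 128 bits */
}
```

With R = 32 bits/sec and t = 1 sec, pool i holds 2^i bits per use, so pool 7 is the first to reach 128; its use interval 2^7*t = 128 sec sits under the quoted bound 8192/R = 256 sec, matching the factor-of-64-from-ideal claim.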
* Re: [PROPOSAL/PATCH 2] Fortuna PRNG in /dev/random 2004-09-29 21:53 ` Theodore Ts'o 2004-09-29 23:24 ` Jean-Luc Cooke @ 2004-09-30 0:21 ` Jean-Luc Cooke 2004-09-30 4:23 ` Jean-Luc Cooke 1 sibling, 1 reply; 28+ messages in thread From: Jean-Luc Cooke @ 2004-09-30 0:21 UTC (permalink / raw) To: Theodore Ts'o, linux, linux-kernel, cryptoapi [-- Attachment #1: Type: text/plain, Size: 1004 bytes --] Damn, Need to eat me some brain-food. JLC On Wed, Sep 29, 2004 at 05:53:15PM -0400, Theodore Ts'o wrote: > On Wed, Sep 29, 2004 at 04:27:07PM -0400, Jean-Luc Cooke wrote: > > > > Here's patch v2.1.2 that waits at least 0.1 sec before reseeding for > > non-blocking reads to alleviate Ted's concern wrt waiting for reseeds. > > You didn't include the patch, and in any case, you'll probably want to > probably want to do it for both blocking as well as non-blocking > reads. And keep in mind, it's not *my* concerns, but it's Neil > Ferguson and Bruce Schneier's concerns. After all, if you're going to > call it Fortuna, you might as well be faithful to their design, > especially since if you don't, you're leaving it to be utterly > vulnerable to this state extension attack they are so worried about. > > - Ted > _______________________________________________ > > Subscription: http://lists.logix.cz/mailman/listinfo/cryptoapi > List archive: http://lists.logix.cz/pipermail/cryptoapi [-- Attachment #2: fortuna-2.6.8.1.patch --] [-- Type: text/plain, Size: 65145 bytes --] diff -X exclude -Nur linux-2.6.8.1/crypto/Kconfig linux-2.6.8.1-rand2/crypto/Kconfig --- linux-2.6.8.1/crypto/Kconfig 2004-08-14 06:56:22.000000000 -0400 +++ linux-2.6.8.1-rand2/crypto/Kconfig 2004-09-28 23:30:04.000000000 -0400 @@ -9,6 +9,15 @@ help This option provides the core Cryptographic API. +config CRYPTO_RANDOM_FORTUNA + bool "The Fortuna RNG" + help + Replaces the legacy Linux RNG with one using the crypto API + and Fortuna by Ferguson and Schneier. Entropy estimation, and + a throttled /dev/random remain. 
+	  Improvements include faster /dev/urandom output and event
+	  input mixing.
+	  Note: Requires AES and SHA256 to be built-in.
+
 config CRYPTO_HMAC
 	bool "HMAC support"
 	depends on CRYPTO
diff -X exclude -Nur linux-2.6.8.1/crypto/random-fortuna.c linux-2.6.8.1-rand2/crypto/random-fortuna.c
--- linux-2.6.8.1/crypto/random-fortuna.c	1969-12-31 19:00:00.000000000 -0500
+++ linux-2.6.8.1-rand2/crypto/random-fortuna.c	2004-09-29 20:21:49.686353536 -0400
@@ -0,0 +1,2100 @@
+/*
+ * random-fortuna.c -- A cryptographically strong random number generator
+ * using Fortuna.
+ *
+ * Version 2.1.2, last modified 28-Sep-2004
+ * Change log:
+ * v2.1.3:
+ *  - Added a separate round-robin index for user inputs.  Prevents a
+ *    super-clever user from forcing all system (unknown) random
+ *    events to be fed into, say, pool-31.
+ *  - Added a "can only extract RANDOM_MAX_EXTRACT_SIZE bytes at a time"
+ *    limit to extract_entropy()
+ * v2.1.2:
+ *  - Ts'o's (I love writing that!) recommendation to force reseeds
+ *    to be at least 0.1 sec apart.
+ * v2.1.1:
+ *  - Re-worked to keep the blocking /dev/random.  Yes, I finally gave
+ *    in to what everyone's been telling me.
+ *  - Entropy accounting is *only* done on events going into pool-0,
+ *    since it's used for every reseed.  For those who expect /dev/random
+ *    to only output data when the system is confident it has
+ *    info-theoretic entropy to justify this output, this is the only
+ *    sensible method to count entropy.
+ * v2.0:
+ *  - Initial version
+ *
+ * Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999.  All
+ * rights reserved.
+ * Copyright Jean-Luc Cooke, 2004.  All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, and the entire permission notice in its entirety,
+ *    including the disclaimer of warranties.
+ * 2.
+ *    Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. The name of the author may not be used to endorse or promote
+ *    products derived from this software without specific prior
+ *    written permission.
+ *
+ * ALTERNATIVELY, this product may be distributed under the terms of
+ * the GNU General Public License, in which case the provisions of the GPL are
+ * required INSTEAD OF the above restrictions.  (This clause is
+ * necessary due to a potential bad interaction between the GPL and
+ * the restrictions contained in a BSD-style copyright.)
+ *
+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+ * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, ALL OF
+ * WHICH ARE HEREBY DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
+ * OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+ * USE OF THIS SOFTWARE, EVEN IF NOT ADVISED OF THE POSSIBILITY OF SUCH
+ * DAMAGE.
+ */
+
+/*
+ * Taken from random.c, updated by Jean-Luc Cooke <jlcooke@certainkey.com>
+ * (now, with legal B.S. out of the way.....)
+ *
+ * This routine gathers environmental noise from device drivers, etc.,
+ * and returns good random numbers, suitable for cryptographic use.
+ * Besides the obvious cryptographic uses, these numbers are also good
+ * for seeding TCP sequence numbers, and other places where it is
+ * desirable to have numbers which are not only random, but hard to
+ * predict by an attacker.
+ *
+ * Theory of operation
+ * ===================
+ *
+ * Computers are very predictable devices.  Hence it is extremely hard
+ * to produce truly random numbers on a computer --- as opposed to
+ * pseudo-random numbers, which can easily be generated by using an
+ * algorithm.  Unfortunately, it is very easy for attackers to guess
+ * the sequence of pseudo-random number generators, and for some
+ * applications this is not acceptable.  So instead, we must try to
+ * gather "environmental noise" from the computer's environment, which
+ * must be hard for outside attackers to observe, and use that to
+ * generate random numbers.  In a Unix environment, this is best done
+ * from inside the kernel.
+ *
+ * Sources of randomness from the environment include inter-keyboard
+ * timings, inter-interrupt timings from some interrupts, and other
+ * events which are both (a) non-deterministic and (b) hard for an
+ * outside observer to measure.  Randomness from these sources is
+ * added to an "entropy pool", which is mixed.
+ * As random bytes are mixed into the entropy pool, the routines keep
+ * an *estimate* of how many bits of randomness have been stored into
+ * the random number generator's internal state.
+ *
+ * Even if it is possible to analyze Fortuna in some clever way, as
+ * long as the amount of data returned from the generator is less than
+ * the inherent entropy we've estimated in the pool, the output data
+ * is totally unpredictable.  For this reason, the routine decreases
+ * its internal estimate of how many bits of "true randomness" are
+ * contained in the entropy pool as it outputs random numbers.
+ *
+ * If this estimate goes to zero, the routine can still generate
+ * random numbers; however, an attacker may (at least in theory) be
+ * able to infer the future output of the generator from prior
+ * outputs.  This requires successful cryptanalysis of Fortuna, which is
+ * not believed to be feasible, but there is a remote possibility.
+ * Nonetheless, these numbers should be useful for the vast majority
+ * of purposes.
+ *
+ * Exported interfaces ---- output
+ * ===============================
+ *
+ * There are three exported interfaces; the first is one designed to
+ * be used from within the kernel:
+ *
+ *	void get_random_bytes(void *buf, int nbytes);
+ *
+ * This interface will return the requested number of random bytes,
+ * and place it in the requested buffer.
+ *
+ * The two other interfaces are two character devices /dev/random and
+ * /dev/urandom.  /dev/random is suitable for use when very high
+ * quality randomness is desired (for example, for key generation or
+ * one-time pads), as it will only return a maximum of the number of
+ * bits of randomness (as estimated by the random number generator)
+ * contained in the entropy pool.
+ *
+ * The /dev/urandom device does not have this limit, and will return
+ * as many bytes as are requested.  As more and more random bytes are
+ * requested without giving time for the entropy pool to recharge,
+ * this will result in random numbers that are merely cryptographically
+ * strong.  For many applications, however, this is acceptable.
+ *
+ * Exported interfaces ---- input
+ * ==============================
+ *
+ * The current exported interfaces for gathering environmental noise
+ * from the devices are:
+ *
+ *	void add_keyboard_randomness(unsigned char scancode);
+ *	void add_mouse_randomness(__u32 mouse_data);
+ *	void add_interrupt_randomness(int irq);
+ *
+ * add_keyboard_randomness() uses the inter-keypress timing, as well as the
+ * scancode as random inputs into the "entropy pool".
+ *
+ * add_mouse_randomness() uses the mouse interrupt timing, as well as
+ * the reported position of the mouse from the hardware.
+ *
+ * add_interrupt_randomness() uses the inter-interrupt timing as random
+ * inputs to the entropy pool.  Note that not all interrupts are good
+ * sources of randomness!
+ * For example, the timer interrupt is not a
+ * good choice, because the periodicity of the interrupts is too
+ * regular, and hence predictable to an attacker.  Disk interrupts are
+ * a better measure, since the timing of the disk interrupts is more
+ * unpredictable.
+ *
+ * All of these routines try to estimate how many bits of randomness a
+ * particular randomness source has.  They do this by keeping track of
+ * the first and second order deltas of the event timings.
+ *
+ * Ensuring unpredictability at system startup
+ * ============================================
+ *
+ * When any operating system starts up, it will go through a sequence
+ * of actions that are fairly predictable by an adversary, especially
+ * if the start-up does not involve interaction with a human operator.
+ * This reduces the actual number of bits of unpredictability in the
+ * entropy pool below the value in entropy_count.  In order to
+ * counteract this effect, it helps to carry information in the
+ * entropy pool across shut-downs and start-ups.  To do this, put the
+ * following lines in an appropriate script which is run during the boot
+ * sequence:
+ *
+ *	echo "Initializing random number generator..."
+ *	random_seed=/var/run/random-seed
+ *	# Carry a random seed from start-up to start-up
+ *	# Load and then save the whole entropy pool
+ *	if [ -f $random_seed ]; then
+ *		cat $random_seed >/dev/urandom
+ *	else
+ *		touch $random_seed
+ *	fi
+ *	chmod 600 $random_seed
+ *	dd if=/dev/urandom of=$random_seed count=8 bs=256
+ *
+ * and the following lines in an appropriate script which is run as
+ * the system is shutdown:
+ *
+ *	# Carry a random seed from shut-down to start-up
+ *	# Save the whole entropy pool
+ *	# Fortuna resists using all of its pool material, so we need to
+ *	# draw 8 separate times (count=8) to ensure we get the entropy
+ *	# from pool[0..3].  count=2048 covers pool[0..10], etc.
+ *	echo "Saving random seed..."
+ *	random_seed=/var/run/random-seed
+ *	touch $random_seed
+ *	chmod 600 $random_seed
+ *	dd if=/dev/urandom of=$random_seed count=8 bs=256
+ *
+ * For example, on most modern systems using the System V init
+ * scripts, such code fragments would be found in
+ * /etc/rc.d/init.d/random.  On older Linux systems, the correct script
+ * location might be in /etc/rcb.d/rc.local or /etc/rc.d/rc.0.
+ *
+ * Effectively, these commands cause the contents of the entropy pool
+ * to be saved at shut-down time and reloaded into the entropy pool at
+ * start-up.  (The 'dd' in the addition to the bootup script is to
+ * make sure that /etc/random-seed is different for every start-up,
+ * even if the system crashes without executing rc.0.)  Even with
+ * complete knowledge of the start-up activities, predicting the state
+ * of the entropy pool requires knowledge of the previous history of
+ * the system.
+ *
+ * Configuring the /dev/random driver under Linux
+ * ==============================================
+ *
+ * The /dev/random driver under Linux uses minor numbers 8 and 9 of
+ * the /dev/mem major number (#1).  So if your system does not have
+ * /dev/random and /dev/urandom created already, they can be created
+ * by using the commands:
+ *
+ *	mknod /dev/random c 1 8
+ *	mknod /dev/urandom c 1 9
+ *
+ * Acknowledgements:
+ * =================
+ *
+ * Ideas for constructing this random number generator were derived
+ * from Pretty Good Privacy's random number generator, and from private
+ * discussions with Phil Karn.  Colin Plumb provided a faster random
+ * number generator, which sped up the mixing function of the entropy
+ * pool, taken from PGPfone.  Dale Worley has also contributed many
+ * useful ideas and suggestions to improve this driver.
+ *
+ * Any flaws in the design are solely my (jlcooke) responsibility, and
+ * should not be attributed to Phil, Colin, or any of the authors of PGP
+ * or the legacy random.c (Ted Ts'o).
+ *
+ * Further background information on this topic may be obtained from
+ * RFC 1750, "Randomness Recommendations for Security", by Donald
+ * Eastlake, Steve Crocker, and Jeff Schiller.  And Chapter 10 of
+ * Practical Cryptography by Ferguson and Schneier.
+ */
+
+#include <linux/utsname.h>
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/major.h>
+#include <linux/string.h>
+#include <linux/fcntl.h>
+#include <linux/slab.h>
+#include <linux/random.h>
+#include <linux/poll.h>
+#include <linux/init.h>
+#include <linux/fs.h>
+#include <linux/workqueue.h>
+#include <linux/genhd.h>
+#include <linux/interrupt.h>
+#include <linux/spinlock.h>
+#include <linux/percpu.h>
+#include <linux/crypto.h>
+#include <../crypto/internal.h>
+
+#include <asm/scatterlist.h>
+#include <asm/processor.h>
+#include <asm/uaccess.h>
+#include <asm/irq.h>
+#include <asm/io.h>
+
+
+/*
+ * Configuration information
+ */
+#define BATCH_ENTROPY_SIZE	256
+/* milliseconds between random_reseeds for non-blocking reads */
+#define RANDOM_RESEED_INTERVAL	100
+/*
+ * Number of bytes you can extract at a time; 1MB is recommended in
+ * Practical Cryptography rev-0
+ */
+#define RANDOM_MAX_EXTRACT_SIZE	(1<<20)
+#define USE_SHA256
+#define USE_AES
+
+/*
+ * Compile-time checking for our desired message digest
+ */
+#if defined USE_SHA256
+  #if !CONFIG_CRYPTO_SHA256
+    #error SHA256 not a built-in module, Fortuna configured to use it.
+  #endif
+  #define RANDOM_DEFAULT_DIGEST_ALGO "sha256"
+#elif defined USE_WHIRLPOOL
+  #if !CONFIG_CRYPTO_WHIRLPOOL
+    #error WHIRLPOOL not a built-in module, Fortuna configured to use it.
+  #endif
+  #define RANDOM_DEFAULT_DIGEST_ALGO "whirlpool"
+#else
+  #error Desired message digest algorithm not found
+#endif
+
+/*
+ * Compile-time checking for our desired block cipher
+ */
+#if defined USE_AES
+  #if (!CONFIG_CRYPTO_AES && !CONFIG_CRYPTO_AES_586)
+    #error AES not a built-in module, Fortuna configured to use it.
+  #endif
+  #define RANDOM_DEFAULT_CIPHER_ALGO "aes"
+#elif defined USE_TWOFISH
+  #if (!CONFIG_CRYPTO_TWOFISH && !CONFIG_CRYPTO_TWOFISH_586)
+    #error TWOFISH not a built-in module, Fortuna configured to use it.
+  #endif
+  #define RANDOM_DEFAULT_CIPHER_ALGO "twofish"
+#else
+  #error Desired block cipher algorithm not found
+#endif /* USE_AES */
+
+#define DEFAULT_POOL_NUMBER	5	/* 2^{5} = 32 pools */
+#define DEFAULT_POOL_SIZE	( (1<<DEFAULT_POOL_NUMBER) * 256)
+/* largest block of random data to extract at a time when in blocking-mode */
+#define TMP_BUF_SIZE		512
+/* SHA512/WHIRLPOOL have 64-byte digests == 512 bits */
+#define RANDOM_MAX_DIGEST_SIZE	64
+/* AES256 has 16-byte blocks == 128 bits */
+#define RANDOM_MAX_BLOCK_SIZE	16
+/* AES256 has 32-byte keys == 256 bits */
+#define RANDOM_MAX_KEY_SIZE	32
+
+#if 0
+  #define DEBUG_PRINTK printk
+#else
+  #define DEBUG_PRINTK noop_printk
+#endif
+#if 0
+  #define STATS_PRINTK printk
+#else
+  #define STATS_PRINTK noop_printk
+#endif
+static inline void noop_printk(const char *a, ...) {}
+
+/*
+ * The minimum number of bits of entropy before we wake up a read on
+ * /dev/random.  We also wait for reseed_count>0 and we do a
+ * random_reseed() once we do wake up.
+ */
+static int random_read_wakeup_thresh = 64;
+
+/*
+ * If the entropy count falls under this number of bits, then we
+ * should wake up processes which are selecting or polling on write
+ * access to /dev/random.
+ */
+static int random_write_wakeup_thresh = 128;
+
+/*
+ * When the input pool goes over trickle_thresh, start dropping most
+ * samples to avoid wasting CPU time and reduce lock contention.
+ */
+
+static int trickle_thresh = DEFAULT_POOL_SIZE * 7;
+
+static DEFINE_PER_CPU(int, trickle_count) = 0;
+
+#define POOLBYTES\
+	( (1<<random_state->pool_number) * random_state->digestsize )
+#define POOLBITS ( POOLBYTES * 8 )
+
+/*
+ * Linux 2.2 compatibility
+ */
+#ifndef DECLARE_WAITQUEUE
+#define DECLARE_WAITQUEUE(WAIT, PTR) struct wait_queue WAIT = { PTR, NULL }
+#endif
+#ifndef DECLARE_WAIT_QUEUE_HEAD
+#define DECLARE_WAIT_QUEUE_HEAD(WAIT) struct wait_queue *WAIT
+#endif
+
+/*
+ * Static global variables
+ */
+static struct entropy_store *random_state; /* The default global store */
+static DECLARE_WAIT_QUEUE_HEAD(random_read_wait);
+static DECLARE_WAIT_QUEUE_HEAD(random_write_wait);
+
+/*
+ * Forward procedure declarations
+ */
+#ifdef CONFIG_SYSCTL
+static void sysctl_init_random(struct entropy_store *random_state);
+#endif
+
+/*****************************************************************
+ *
+ * Utility functions, with some ASM defined functions for speed
+ * purposes
+ *
+ *****************************************************************/
+
+/*
+ * More asm magic....
+ *
+ * For entropy estimation, we need to do an integral base 2
+ * logarithm.
+ *
+ * Note the "12bits" suffix - this is used for numbers between
+ * 0 and 4095 only.  This allows a few shortcuts.
+ */
+#if 0	/* Slow but clear version */
+static inline __u32 int_ln_12bits(__u32 word)
+{
+	__u32 nbits = 0;
+
+	while (word >>= 1)
+		nbits++;
+	return nbits;
+}
+#else	/* Faster (more clever) version, courtesy Colin Plumb */
+static inline __u32 int_ln_12bits(__u32 word)
+{
+	/* Smear msbit right to make an n-bit mask */
+	word |= word >> 8;
+	word |= word >> 4;
+	word |= word >> 2;
+	word |= word >> 1;
+	/* Remove one bit to make this a logarithm */
+	word >>= 1;
+	/* Count the bits set in the word */
+	word -= (word >> 1) & 0x555;
+	word = (word & 0x333) + ((word >> 2) & 0x333);
+	word += (word >> 4);
+	word += (word >> 8);
+	return word & 15;
+}
+#endif
+
+#if 0
+#define DEBUG_ENT(fmt, arg...) printk(KERN_DEBUG "random: " fmt, ## arg)
+#else
+#define DEBUG_ENT(fmt, arg...) do {} while (0)
+#endif
+
+/**********************************************************************
+ *
+ * OS independent entropy store.  Here are the functions which handle
+ * storing entropy in an entropy pool.
+ *
+ **********************************************************************/
+
+struct entropy_store {
+	const char *digestAlgo;
+	unsigned int digestsize;
+	struct crypto_tfm *pools[1<<DEFAULT_POOL_NUMBER];
+	/* optional, handy for statistics */
+	unsigned int pools_bytes[1<<DEFAULT_POOL_NUMBER];
+
+	const char *cipherAlgo;
+	/* the key */
+	unsigned char key[RANDOM_MAX_DIGEST_SIZE];
+	unsigned int keysize;
+	/* the CTR value */
+	unsigned char iv[16];
+	unsigned int blocksize;
+	struct crypto_tfm *cipher;
+
+	/* 2^pool_number # of pools */
+	unsigned int pool_number;
+	/* current pool to add into */
+	unsigned int pool_index;
+	/* size of the first pool */
+	unsigned int pool0_len;
+	/* number of times we have reseeded */
+	unsigned int reseed_count;
+	/* time in msec of the last reseed */
+	time_t reseed_time;
+	/* digest used during random_reseed() */
+	struct crypto_tfm *reseedHash;
+	/* cipher used for network randomness */
+	struct crypto_tfm *networkCipher;
+	/* flag indicating if networkCipher has been seeded */
+	char networkCipher_ready;
+
+	/* read-write data: */
+	spinlock_t lock ____cacheline_aligned_in_smp;
+	int entropy_count;
+};
+
+/*
+ * Initialize the entropy store.  The input argument is the size of
+ * the random pool.
+ *
+ * Returns a negative error if there is a problem.
+ */
+static int create_entropy_store(int poolnum, struct entropy_store **ret_bucket)
+{
+	struct entropy_store *r;
+	unsigned long pool_number;
+	int keysize, i, j;
+
+	pool_number = poolnum;
+
+	r = kmalloc(sizeof(struct entropy_store), GFP_KERNEL);
+	if (!r) {
+		return -ENOMEM;
+	}
+
+	memset(r, 0, sizeof(struct entropy_store));
+	r->pool_number = pool_number;
+	r->digestAlgo = RANDOM_DEFAULT_DIGEST_ALGO;
+
+DEBUG_PRINTK("create_entropy_store() pools=%u index=%u\n",
+	1<<pool_number, r->pool_index);
+	for (i=0; i<(1<<pool_number); i++) {
+DEBUG_PRINTK("create_entropy_store() i=%i index=%u\n", i, r->pool_index);
+		r->pools[i] = crypto_alloc_tfm(r->digestAlgo, 0);
+		if (r->pools[i] == NULL) {
+			for (j=0; j<i; j++) {
+				if (r->pools[j] != NULL) {
+					crypto_free_tfm(r->pools[j]);
+				}
+			}
+			kfree(r);
+			return -ENOMEM;
+		}
+		crypto_digest_init( r->pools[i] );
+	}
+	r->lock = SPIN_LOCK_UNLOCKED;
+	*ret_bucket = r;
+
+	r->cipherAlgo = RANDOM_DEFAULT_CIPHER_ALGO;
+	if ((r->cipher=crypto_alloc_tfm(r->cipherAlgo, 0)) == NULL) {
+		return -ENOMEM;
+	}
+
+	/* If the HASH's output is greater than the cipher's keysize, truncate
+	 * to the cipher's keysize */
+	keysize = crypto_tfm_alg_max_keysize(r->cipher);
+	r->digestsize = crypto_tfm_alg_digestsize(r->pools[0]);
+	r->blocksize = crypto_tfm_alg_blocksize(r->cipher);
+
+	r->keysize = (keysize < r->digestsize) ? keysize : r->digestsize;
+DEBUG_PRINTK("create_RANDOM %u %u %u\n", keysize, r->digestsize, r->keysize);
+
+	if (crypto_cipher_setkey(r->cipher, r->key, r->keysize)) {
+		return -EINVAL;
+	}
+
+	/* digest used during random_reseed() */
+	if ((r->reseedHash=crypto_alloc_tfm(r->digestAlgo, 0)) == NULL) {
+		return -ENOMEM;
+	}
+	/* cipher used for network randomness */
+	if ((r->networkCipher=crypto_alloc_tfm(r->cipherAlgo, 0)) == NULL) {
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+/*
+ * This function adds words into the entropy "pool".  It does not
+ * update the entropy estimate.
+ * The caller should call credit_entropy_store if this is appropriate.
+ */
+static void add_entropy_words(struct entropy_store *r, const __u32 *in,
+			      int nwords, int dst_pool)
+{
+	unsigned long flags;
+	struct scatterlist sg[1];
+	static unsigned int totalBytes=0;
+
+	if (r == NULL) {
+		return;
+	}
+
+	spin_lock_irqsave(&r->lock, flags);
+
+	totalBytes += nwords * sizeof(__u32);
+
+	sg[0].page = virt_to_page(in);
+	sg[0].offset = offset_in_page(in);
+	sg[0].length = nwords*sizeof(__u32);
+
+	if (dst_pool == -1) {
+		r->pools_bytes[r->pool_index] += nwords * sizeof(__u32);
+		crypto_digest_update(r->pools[r->pool_index], sg, 1);
+		if (r->pool_index == 0) {
+			r->pool0_len += nwords*sizeof(__u32);
+		}
+		/* idx = (idx + 1) mod 2^N */
+		r->pool_index = (r->pool_index + 1)
+				& ((1<<random_state->pool_number)-1);
+	} else {
+		/* Let's make sure nothing mean is happening... */
+		dst_pool &= (1<<random_state->pool_number) - 1;
+		r->pools_bytes[dst_pool] += nwords * sizeof(__u32);
+		crypto_digest_update(r->pools[dst_pool], sg, 1);
+	}
+DEBUG_PRINTK("r->pool0_len = %u\n", r->pool0_len);
+
+	spin_unlock_irqrestore(&r->lock, flags);
+DEBUG_PRINTK("0 add_entropy_words() nwords=%u pool[i].bytes=%u total=%u\n",
+	nwords, r->pools_bytes[r->pool_index], totalBytes);
+}
+
+/*
+ * Credit (or debit) the entropy store with n bits of entropy
+ */
+static void credit_entropy_store(struct entropy_store *r, int nbits)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&r->lock, flags);
+
+	if (r->entropy_count + nbits < 0) {
+		DEBUG_ENT("negative entropy/overflow (%d+%d)\n",
+			  r->entropy_count, nbits);
+		r->entropy_count = 0;
+	} else if (r->entropy_count + nbits > POOLBITS) {
+		r->entropy_count = POOLBITS;
+	} else {
+		r->entropy_count += nbits;
+		if (nbits)
+			DEBUG_ENT("%04d : added %d bits\n",
+				  r->entropy_count,
+				  nbits);
+	}
+
+	spin_unlock_irqrestore(&r->lock, flags);
+}
+
+/**********************************************************************
+ *
+ * Entropy batch input
+ * management
+ *
+ * We batch entropy to be added to avoid increasing interrupt latency
+ *
+ **********************************************************************/
+
+struct sample {
+	__u32 data[2];
+	int credit;
+};
+
+static struct sample *batch_entropy_pool, *batch_entropy_copy;
+static int batch_head, batch_tail;
+static spinlock_t batch_lock = SPIN_LOCK_UNLOCKED;
+
+static int batch_max;
+static void batch_entropy_process(void *private_);
+static DECLARE_WORK(batch_work, batch_entropy_process, NULL);
+
+/* note: the size must be a power of 2 */
+static int __init batch_entropy_init(int size, struct entropy_store *r)
+{
+	batch_entropy_pool = kmalloc(size*sizeof(struct sample), GFP_KERNEL);
+	if (!batch_entropy_pool)
+		return -1;
+	batch_entropy_copy = kmalloc(size*sizeof(struct sample), GFP_KERNEL);
+	if (!batch_entropy_copy) {
+		kfree(batch_entropy_pool);
+		return -1;
+	}
+	batch_head = batch_tail = 0;
+	batch_work.data = r;
+	batch_max = size;
+	return 0;
+}
+
+/*
+ * Changes to the entropy data are put into a queue rather than being added to
+ * the entropy counts directly.  This is presumably to avoid doing heavy
+ * hashing calculations during an interrupt in add_timer_randomness().
+ * Instead, the entropy is only added to the pool by keventd.
+ */
+void batch_entropy_store(u32 a, u32 b, int num)
+{
+	int new;
+	unsigned long flags;
+
+	if (!batch_max)
+		return;
+
+	spin_lock_irqsave(&batch_lock, flags);
+
+	batch_entropy_pool[batch_head].data[0] = a;
+	batch_entropy_pool[batch_head].data[1] = b;
+	batch_entropy_pool[batch_head].credit = num;
+
+	if (((batch_head - batch_tail) & (batch_max-1)) >= (batch_max / 2)) {
+		/*
+		 * Schedule it for the next timer tick:
+		 */
+		schedule_delayed_work(&batch_work, 1);
+	}
+
+	new = (batch_head+1) & (batch_max-1);
+	if (new == batch_tail) {
+		DEBUG_ENT("batch entropy buffer full\n");
+	} else {
+		batch_head = new;
+	}
+
+	spin_unlock_irqrestore(&batch_lock, flags);
+}
+
+EXPORT_SYMBOL(batch_entropy_store);
+
+/*
+ * Flush out the accumulated entropy operations, adding entropy to the passed
+ * store (normally random_state).  If that store has enough entropy, alternate
+ * between randomizing the data of the primary and secondary stores.
+ */
+static void batch_entropy_process(void *private_)
+{
+	int max_entropy = POOLBITS;
+	unsigned head, tail;
+
+	/* Mixing into the pool is expensive, so copy over the batch
+	 * data and release the batch lock.  The pool is at least half
+	 * full, so don't worry too much about copying only the used
+	 * part.
+	 */
+	spin_lock_irq(&batch_lock);
+
+	memcpy(batch_entropy_copy, batch_entropy_pool,
+	       batch_max*sizeof(struct sample));
+
+	head = batch_head;
+	tail = batch_tail;
+	batch_tail = batch_head;
+
+	spin_unlock_irq(&batch_lock);
+
+	while (head != tail) {
+		if (random_state->entropy_count >= max_entropy) {
+			max_entropy = POOLBITS;
+		}
+		/*
+		 * Only credit if we're feeding into pool[0].
+		 * Otherwise we'd be assuming entropy in pool[31] would be
+		 * usable when we read.  This is conservative, but it'll
+		 * not over-credit our entropy estimate for users of
+		 * /dev/random; /dev/urandom will not be affected.
+		 */
+		if (random_state->pool_index == 0) {
+			credit_entropy_store(random_state,
+					     batch_entropy_copy[tail].credit);
+		}
+		add_entropy_words(random_state,
+				  batch_entropy_copy[tail].data, 2, -1);
+
+		tail = (tail+1) & (batch_max-1);
+	}
+	if (random_state->entropy_count >= random_read_wakeup_thresh
+	    || random_state->reseed_count != 0)
+		wake_up_interruptible(&random_read_wait);
+}
+
+/*********************************************************************
+ *
+ * Entropy input management
+ *
+ *********************************************************************/
+
+/* There is one of these per entropy source */
+struct timer_rand_state {
+	__u32 last_time;
+	__s32 last_delta, last_delta2;
+	int dont_count_entropy:1;
+};
+
+static struct timer_rand_state keyboard_timer_state;
+static struct timer_rand_state mouse_timer_state;
+static struct timer_rand_state extract_timer_state;
+static struct timer_rand_state *irq_timer_state[NR_IRQS];
+
+/*
+ * This function adds entropy to the entropy "pool" by using timing
+ * delays.  It uses the timer_rand_state structure to make an estimate
+ * of how many bits of entropy this call has added to the pool.
+ *
+ * The number "num" is also added to the pool - it should somehow describe
+ * the type of event which just happened.  This is currently 0-255 for
+ * keyboard scan codes, and 256 upwards for interrupts.
+ * On the i386, this is assumed to be at most 16 bits, and the high bits
+ * are used for a high-resolution timer.
+ *
+ */
+static void add_timer_randomness(struct timer_rand_state *state, unsigned num)
+{
+	__u32 time;
+	__s32 delta, delta2, delta3;
+	int entropy = 0;
+
+	/* if over the trickle threshold, use only 1 in 4096 samples */
+	if ( random_state->entropy_count > trickle_thresh &&
+	     (__get_cpu_var(trickle_count)++ & 0xfff))
+		return;
+
+#if defined (__i386__) || defined (__x86_64__)
+	if (cpu_has_tsc) {
+		__u32 high;
+		rdtsc(time, high);
+		num ^= high;
+	} else {
+		time = jiffies;
+	}
+#elif defined (__sparc_v9__)
+	unsigned long tick = tick_ops->get_tick();
+
+	time = (unsigned int) tick;
+	num ^= (tick >> 32UL);
+#else
+	time = jiffies;
+#endif
+
+	/*
+	 * Calculate number of bits of randomness we probably added.
+	 * We take into account the first, second and third-order deltas
+	 * in order to make our estimate.
+	 */
+	if (!state->dont_count_entropy) {
+		delta = time - state->last_time;
+		state->last_time = time;
+
+		delta2 = delta - state->last_delta;
+		state->last_delta = delta;
+
+		delta3 = delta2 - state->last_delta2;
+		state->last_delta2 = delta2;
+
+		if (delta < 0)
+			delta = -delta;
+		if (delta2 < 0)
+			delta2 = -delta2;
+		if (delta3 < 0)
+			delta3 = -delta3;
+		if (delta > delta2)
+			delta = delta2;
+		if (delta > delta3)
+			delta = delta3;
+
+		/*
+		 * delta is now minimum absolute delta.
+		 * Round down by 1 bit on general principles,
+		 * and limit entropy estimate to 12 bits.
+		 */
+		delta >>= 1;
+		delta &= (1 << 12) - 1;
+
+		entropy = int_ln_12bits(delta);
+	}
+	batch_entropy_store(num, time, entropy);
+}
+
+void add_keyboard_randomness(unsigned char scancode)
+{
+	static unsigned char last_scancode;
+	/* ignore autorepeat (multiple key down w/o key up) */
+	if (scancode != last_scancode) {
+		last_scancode = scancode;
+		add_timer_randomness(&keyboard_timer_state, scancode);
+	}
+}
+
+EXPORT_SYMBOL(add_keyboard_randomness);
+
+void add_mouse_randomness(__u32 mouse_data)
+{
+	add_timer_randomness(&mouse_timer_state, mouse_data);
+}
+
+EXPORT_SYMBOL(add_mouse_randomness);
+
+void add_interrupt_randomness(int irq)
+{
+	if (irq >= NR_IRQS || irq_timer_state[irq] == 0)
+		return;
+
+	add_timer_randomness(irq_timer_state[irq], 0x100+irq);
+}
+
+EXPORT_SYMBOL(add_interrupt_randomness);
+
+void add_disk_randomness(struct gendisk *disk)
+{
+	if (!disk || !disk->random)
+		return;
+	/* first major is 1, so we get >= 0x200 here */
+	add_timer_randomness(disk->random,
+			     0x100+MKDEV(disk->major, disk->first_minor));
+}
+
+EXPORT_SYMBOL(add_disk_randomness);
+
+/*********************************************************************
+ *
+ * Entropy extraction routines
+ *
+ *********************************************************************/
+
+#define EXTRACT_ENTROPY_USER	1
+#define EXTRACT_ENTROPY_LIMIT	4
+
+static ssize_t extract_entropy(struct entropy_store *r, void * buf,
+			       size_t nbytes, int flags);
+
+static inline void increment_iv(unsigned char *iv, const unsigned int IVsize) {
+	switch (IVsize) {
+	case 8:
+		if (!++((u32*)iv)[0])
+			++((u32*)iv)[1];
+		break;
+
+	case 16:
+		if (!++((u32*)iv)[0])
+			if (!++((u32*)iv)[1])
+				if (!++((u32*)iv)[2])
+					++((u32*)iv)[3];
+		break;
+
+	default:
+		{
+			int i;
+			for (i=0; i<IVsize; i++)
+				if (++iv[i])
+					break;
+		}
+		break;
+	}
+}
+
+/*
+ * Fortuna's Reseed
+ *
+ * Key' = hash(Key || hash(pool[a0]) || hash(pool[a1]) || ...)
+ * where {a0,a1,...} are factors of r->reseed_count+1 which are of the form
+ * 2^j, 0<=j.
+ * Prevents backtracking attacks and, with event inputs, supports forward
+ * secrecy
+ */
+static void random_reseed(struct entropy_store *r, size_t nbytes, int flags) {
+	struct scatterlist sg[1];
+	unsigned int i, deduct;
+	unsigned char tmp[RANDOM_MAX_DIGEST_SIZE];
+	unsigned long cpuflags;
+
+	deduct = (r->keysize < r->digestsize) ? r->keysize : r->digestsize;
+
+	/* Hold lock while accounting */
+	spin_lock_irqsave(&r->lock, cpuflags);
+
+	DEBUG_ENT("%04d : trying to extract %d bits\n",
+		  random_state->entropy_count,
+		  deduct * 8);
+
+	/*
+	 * Don't extract more data than the entropy in the pooling system
+	 */
+	if (flags & EXTRACT_ENTROPY_LIMIT && nbytes >= r->entropy_count / 8) {
+		nbytes = r->entropy_count / 8;
+	}
+
+	if (deduct*8 <= r->entropy_count) {
+		r->entropy_count -= deduct*8;
+	} else {
+		r->entropy_count = 0;
+	}
+
+	if (r->entropy_count < random_write_wakeup_thresh)
+		wake_up_interruptible(&random_write_wait);
+
+	DEBUG_ENT("%04d : debiting %d bits%s\n",
+		  random_state->entropy_count,
+		  deduct * 8,
+		  flags & EXTRACT_ENTROPY_LIMIT ? "" : " (unlimited)");
+
+	r->reseed_count++;
+	r->pool0_len = 0;
+
+	/* Entropy accounting done, release lock.
*/ + spin_unlock_irqrestore(&r->lock, cpuflags); + + DEBUG_PRINTK("random_reseed count=%u\n", r->reseed_count); + + crypto_digest_init(r->reseedHash); + + sg[0].page = virt_to_page(r->key); + sg[0].offset = offset_in_page(r->key); + sg[0].length = r->keysize; + crypto_digest_update(r->reseedHash, sg, 1); + +#define TESTBIT(VAL, N)\ + ( ((VAL) >> (N)) & 1 ) + for (i=0; i<(1<<r->pool_number); i++) { + /* using pool[i] if r->reseed_count is divisible by 2^i + * since 2^0 == 1, we always use pool[0] + */ + if ( (i==0) || TESTBIT(r->reseed_count,i)==0 ) { + crypto_digest_final(r->pools[i], tmp); + + sg[0].page = virt_to_page(tmp); + sg[0].offset = offset_in_page(tmp); + sg[0].length = r->keysize; + crypto_digest_update(r->reseedHash, sg, 1); + + crypto_digest_init(r->pools[i]); + /* Each pool carries its past state forward */ + crypto_digest_update(r->pools[i], sg, 1); + } else { + /* pool j is only used once every 2^j times */ + break; + } + } +#undef TESTBIT + + crypto_digest_final(r->reseedHash, r->key); + crypto_cipher_setkey(r->cipher, r->key, r->keysize); + increment_iv(r->iv, r->blocksize); +} + + +/* + * This function extracts randomness from the "entropy pool", and + * returns it in a buffer. This function computes how many remaining + * bits of entropy are left in the pool, but it does not restrict the + * number of bytes that are actually obtained. If the EXTRACT_ENTROPY_USER + * flag is given, then the buf pointer is assumed to be in user space. + */ +static ssize_t extract_entropy(struct entropy_store *r, void * buf, + size_t nbytes, int flags) +{ + ssize_t ret, i, deduct; + __u32 tmp[RANDOM_MAX_BLOCK_SIZE]; + struct scatterlist sgiv[1], sgtmp[1]; + struct timeval tv; + time_t nowtime; + + /* Redundant, but just in case... */ + if (r->entropy_count > POOLBITS) + r->entropy_count = POOLBITS; + + /* + * To keep the possibility of collisions down, limit the number of + * output bytes per block cipher key. 
+ */ + if (RANDOM_MAX_EXTRACT_SIZE < nbytes) + nbytes = RANDOM_MAX_EXTRACT_SIZE; + + /* + * The size of the block you read at a go is directly related to + * the number of Fortuna reseeds you perform, and thus to the amount + * of entropy you draw from the pooling system. + * + * Reading from /dev/urandom, you can specify any block size; + * the larger the block, the fewer Fortuna reseeds and the faster + * the output. + * + * Reading from /dev/random however, we limit this to the amount of + * entropy to deduct from our estimate. This estimate is most + * naturally updated from inside Fortuna-reseed, so we limit our block + * size here. + * + * At most, Fortuna will use e=min(r->digestsize, r->keysize) of + * entropy to reseed. + */ + deduct = (r->keysize < r->digestsize) ? r->keysize : r->digestsize; + if (flags & EXTRACT_ENTROPY_LIMIT && deduct < nbytes) { + nbytes = deduct; + } + + /* + * If reading in non-blocking mode, pace ourselves in using up the pool + * system's entropy. + */ + if (flags & EXTRACT_ENTROPY_LIMIT) { + do_gettimeofday(&tv); + nowtime = (tv.tv_sec * 1000) + (tv.tv_usec / 1000); + random_reseed(r, nbytes, flags); + r->reseed_time = nowtime; + } else { + do_gettimeofday(&tv); + nowtime = (tv.tv_sec * 1000) + (tv.tv_usec / 1000); + if (r->pool0_len > 64 + && (nowtime - r->reseed_time) > RANDOM_RESEED_INTERVAL) { + random_reseed(r, nbytes, flags); + r->reseed_time = nowtime; + } + } + + sgiv[0].page = virt_to_page(r->iv); + sgiv[0].offset = offset_in_page(r->iv); + sgiv[0].length = r->blocksize; + sgtmp[0].page = virt_to_page(tmp); + sgtmp[0].offset = offset_in_page(tmp); + sgtmp[0].length = r->blocksize; + + ret = 0; + while (nbytes) { + /* + * Check if we need to break out or reschedule....
+ */ + if ((flags & EXTRACT_ENTROPY_USER) && need_resched()) { + if (signal_pending(current)) { + if (ret == 0) + ret = -ERESTARTSYS; + break; + } + + DEBUG_ENT("%04d : extract sleeping (%d bytes left)\n", + random_state->entropy_count, + nbytes); + + schedule(); + + /* + * when we wakeup, there will be more data in our + * pooling system so we will reseed + */ + random_reseed(r, nbytes, flags); + + DEBUG_ENT("%04d : extract woke up\n", + random_state->entropy_count); + } + + crypto_cipher_encrypt(r->cipher, sgtmp, sgiv, r->blocksize); + increment_iv(r->iv, r->blocksize); + + /* Copy data to destination buffer */ + i = (nbytes < r->blocksize) ? nbytes : r->blocksize; + if (flags & EXTRACT_ENTROPY_USER) { + i -= copy_to_user(buf, (__u8 const *)tmp, i); + if (!i) { + ret = -EFAULT; + break; + } + } else + memcpy(buf, (__u8 const *)tmp, i); + nbytes -= i; + buf += i; + ret += i; + } + + /* generate a new key */ + /* take into account the possibility that keysize >= blocksize */ + for (i=0; i+r->blocksize<=r->keysize; i+=r->blocksize) { + sgtmp[0].page = virt_to_page( r->key+i ); + sgtmp[0].offset = offset_in_page( r->key+i ); + sgtmp[0].length = r->blocksize; + crypto_cipher_encrypt(r->cipher, sgtmp, sgiv, 1); + increment_iv(r->iv, r->blocksize); + } + sgtmp[0].page = virt_to_page( r->key+i ); + sgtmp[0].offset = offset_in_page( r->key+i ); + sgtmp[0].length = r->blocksize-i; + crypto_cipher_encrypt(r->cipher, sgtmp, sgiv, 1); + increment_iv(r->iv, r->blocksize); + + if (crypto_cipher_setkey(r->cipher, r->key, r->keysize)) { + return -EINVAL; + } + + /* Wipe data just returned from memory */ + memset(tmp, 0, sizeof(tmp)); + + return ret; +} + +/* + * This function is the exported kernel interface. It returns some + * number of good random numbers, suitable for seeding TCP sequence + * numbers, etc. 
+ */ +void get_random_bytes(void *buf, int nbytes) +{ + if (random_state) + extract_entropy(random_state, (char *) buf, nbytes, 0); + else + printk(KERN_NOTICE "get_random_bytes called before " + "random driver initialization\n"); +} + +EXPORT_SYMBOL(get_random_bytes); + +/********************************************************************* + * + * Functions to interface with Linux + * + *********************************************************************/ + +/* + * Initialize the random pool with standard stuff. + * This is not secure random data, but it can't hurt us and people scream + * when you try to remove it. + * + * NOTE: This is an OS-dependent function. + */ +static void init_std_data(struct entropy_store *r) +{ + struct timeval tv; + __u32 words[2]; + char *p; + int i; + + do_gettimeofday(&tv); + words[0] = tv.tv_sec; + words[1] = tv.tv_usec; + add_entropy_words(r, words, 2, -1); + + /* + * This doesn't lock system.utsname. However, we are generating + * entropy so a race with a name set here is fine. 
+ */ + p = (char *) &system_utsname; + for (i = sizeof(system_utsname) / sizeof(words); i; i--) { + memcpy(words, p, sizeof(words)); + add_entropy_words(r, words, sizeof(words)/4, -1); + p += sizeof(words); + } +} + +static int __init rand_initialize(void) +{ + int i; + + if (create_entropy_store(DEFAULT_POOL_NUMBER, &random_state)) + goto err; + if (batch_entropy_init(BATCH_ENTROPY_SIZE, random_state)) + goto err; + init_std_data(random_state); +#ifdef CONFIG_SYSCTL + sysctl_init_random(random_state); +#endif + for (i = 0; i < NR_IRQS; i++) + irq_timer_state[i] = NULL; + memset(&keyboard_timer_state, 0, sizeof(struct timer_rand_state)); + memset(&mouse_timer_state, 0, sizeof(struct timer_rand_state)); + memset(&extract_timer_state, 0, sizeof(struct timer_rand_state)); + extract_timer_state.dont_count_entropy = 1; + return 0; +err: + return -1; +} +module_init(rand_initialize); + +void rand_initialize_irq(int irq) +{ + struct timer_rand_state *state; + + if (irq >= NR_IRQS || irq_timer_state[irq]) + return; + + /* + * If kmalloc returns null, we just won't use that entropy + * source. + */ + state = kmalloc(sizeof(struct timer_rand_state), GFP_KERNEL); + if (state) { + memset(state, 0, sizeof(struct timer_rand_state)); + irq_timer_state[irq] = state; + } +} + +void rand_initialize_disk(struct gendisk *disk) +{ + struct timer_rand_state *state; + + /* + * If kmalloc returns null, we just won't use that entropy + * source. 
+ */ + state = kmalloc(sizeof(struct timer_rand_state), GFP_KERNEL); + if (state) { + memset(state, 0, sizeof(struct timer_rand_state)); + disk->random = state; + } +} + +static ssize_t +random_read(struct file * file, char __user * buf, size_t nbytes, loff_t *ppos) +{ + DECLARE_WAITQUEUE(wait, current); + ssize_t n, retval = 0, count = 0, + max_xfer_size; + + if (nbytes == 0) + return 0; + + /* + * only read out of extract_entropy() the minimum number of pool + * material bits we could deduce from the output if we could attack + * our block cipher and message digest functions in Fortuna + */ + max_xfer_size = (random_state->digestsize < random_state->keysize) + ? random_state->keysize + : random_state->digestsize; + + while (nbytes > 0) { + n = nbytes; + if (n > max_xfer_size) + n = max_xfer_size; + + DEBUG_ENT("%04d : reading %d bits, p: %d s: %d\n", + random_state->entropy_count, + n*8, random_state->entropy_count, + random_state->entropy_count); + + n = extract_entropy(random_state, buf, n, + EXTRACT_ENTROPY_USER | + EXTRACT_ENTROPY_LIMIT); + + DEBUG_ENT("%04d : read got %d bits (%d needed, reseeds=%d)\n", + random_state->entropy_count, + n*8, (nbytes-n)*8, + random_state->reseed_count); + + if (n == 0) { + if (file->f_flags & O_NONBLOCK) { + retval = -EAGAIN; + break; + } + if (signal_pending(current)) { + retval = -ERESTARTSYS; + break; + } + + DEBUG_ENT("%04d : sleeping?\n", + random_state->entropy_count); + + set_current_state(TASK_INTERRUPTIBLE); + add_wait_queue(&random_read_wait, &wait); + + if (random_state->entropy_count / 8 == 0 + || random_state->reseed_count == 0) + schedule(); + + set_current_state(TASK_RUNNING); + remove_wait_queue(&random_read_wait, &wait); + + DEBUG_ENT("%04d : waking up\n", + random_state->entropy_count); + + continue; + } + + if (n < 0) { + retval = n; + break; + } + count += n; + buf += n; + nbytes -= n; + break; /* This break makes the device work */ + /* like a named pipe */ + } + + /* + * If we gave the user some bytes, update the
access time. + */ + if (count) + file_accessed(file); + + return (count ? count : retval); +} + +static ssize_t +urandom_read(struct file * file, char __user * buf, + size_t nbytes, loff_t *ppos) +{ + /* Don't return anything until we've reseeded at least once */ + if (random_state->reseed_count == 0) + return 0; + + return extract_entropy(random_state, buf, nbytes, + EXTRACT_ENTROPY_USER); +} + +static unsigned int +random_poll(struct file *file, poll_table * wait) +{ + unsigned int mask; + + poll_wait(file, &random_read_wait, wait); + poll_wait(file, &random_write_wait, wait); + mask = 0; + if (random_state->entropy_count >= random_read_wakeup_thresh) + mask |= POLLIN | POLLRDNORM; + if (random_state->entropy_count < random_write_wakeup_thresh) + mask |= POLLOUT | POLLWRNORM; + return mask; +} + +static ssize_t +random_write(struct file * file, const char __user * buffer, + size_t count, loff_t *ppos) +{ + static int idx = 0; + int ret = 0; + size_t bytes; + __u32 buf[16]; + const char __user *p = buffer; + size_t c = count; + + while (c > 0) { + bytes = min(c, sizeof(buf)); + + bytes -= copy_from_user(&buf, p, bytes); + if (!bytes) { + ret = -EFAULT; + break; + } + c -= bytes; + p += bytes; + + /* + * User input data rotates through the pools independently of + * system events.
+ * + * idx = (idx + 1) mod 2^N + */ + idx = (idx + 1) & ((1<<random_state->pool_number)-1); + add_entropy_words(random_state, buf, bytes, idx); + } + if (p == buffer) { + return (ssize_t)ret; + } else { + file->f_dentry->d_inode->i_mtime = CURRENT_TIME; + mark_inode_dirty(file->f_dentry->d_inode); + return (ssize_t)(p - buffer); + } +} + +static int +random_ioctl(struct inode * inode, struct file * file, + unsigned int cmd, unsigned long arg) +{ + int size, ent_count; + int __user *p = (int __user *)arg; + int retval; + + switch (cmd) { + case RNDGETENTCNT: + ent_count = random_state->entropy_count; + if (put_user(ent_count, p)) + return -EFAULT; + return 0; + case RNDADDTOENTCNT: + if (!capable(CAP_SYS_ADMIN)) + return -EPERM; + if (get_user(ent_count, p)) + return -EFAULT; + credit_entropy_store(random_state, ent_count); + /* + * Wake up waiting processes if we have enough + * entropy. + */ + if (random_state->entropy_count >= random_read_wakeup_thresh + && random_state->reseed_count != 0) + wake_up_interruptible(&random_read_wait); + return 0; + case RNDGETPOOL: + /* can't do this anymore */ + return 0; + case RNDADDENTROPY: + if (!capable(CAP_SYS_ADMIN)) + return -EPERM; + if (get_user(ent_count, p++)) + return -EFAULT; + if (ent_count < 0) + return -EINVAL; + if (get_user(size, p++)) + return -EFAULT; + retval = random_write(file, (const char __user *) p, + size, &file->f_pos); + if (retval < 0) + return retval; + credit_entropy_store(random_state, ent_count); + /* + * Wake up waiting processes if we have enough + * entropy.
+ */ + if (random_state->entropy_count >= random_read_wakeup_thresh + && random_state->reseed_count != 0) + wake_up_interruptible(&random_read_wait); + return 0; + case RNDZAPENTCNT: + /* Can't do this anymore */ + return 0; + case RNDCLEARPOOL: + /* Can't do this anymore */ + return 0; + default: + return -EINVAL; + } +} + +struct file_operations random_fops = { + .read = random_read, + .write = random_write, + .poll = random_poll, + .ioctl = random_ioctl, +}; + +struct file_operations urandom_fops = { + .read = urandom_read, + .write = random_write, + .ioctl = random_ioctl, +}; + +/*************************************************************** + * Random UUID interface + * + * Used here for a Boot ID, but can be useful for other kernel + * drivers. + ***************************************************************/ + +/* + * Generate random UUID + */ +void generate_random_uuid(unsigned char uuid_out[16]) +{ + get_random_bytes(uuid_out, 16); + /* Set UUID version to 4 --- truly random generation */ + uuid_out[6] = (uuid_out[6] & 0x0F) | 0x40; + /* Set the UUID variant to DCE */ + uuid_out[8] = (uuid_out[8] & 0x3F) | 0x80; +} + +EXPORT_SYMBOL(generate_random_uuid); + +/******************************************************************** + * + * Sysctl interface + * + ********************************************************************/ + +#ifdef CONFIG_SYSCTL + +#include <linux/sysctl.h> + +static int sysctl_poolsize; +static int min_read_thresh, max_read_thresh; +static int min_write_thresh, max_write_thresh; +static char sysctl_bootid[16]; + +static int proc_do_poolsize(ctl_table *table, int write, struct file *filp, + void __user *buffer, size_t *lenp, loff_t *ppos) +{ + int ret; + + sysctl_poolsize = POOLBITS; + + ret = proc_dointvec(table, write, filp, buffer, lenp, ppos); + if (ret || !write || + (sysctl_poolsize == POOLBITS)) + return ret; + + return ret; /* can't change the pool size in fortuna */ +} + +static int poolsize_strategy(ctl_table *table, int
__user *name, int nlen, + void __user *oldval, size_t __user *oldlenp, + void __user *newval, size_t newlen, void **context) +{ + int len; + + sysctl_poolsize = POOLBITS; + + /* + * We only handle the write case, since the read case gets + * handled by the default handler (and we don't care if the + * write case happens twice; it's harmless). + */ + if (newval && newlen) { + len = newlen; + if (len > table->maxlen) + len = table->maxlen; + if (copy_from_user(table->data, newval, len)) + return -EFAULT; + } + + return 0; +} + +/* + * This function is used to return both the bootid UUID and a random + * UUID. The difference is in whether table->data is NULL; if it is, + * then a new UUID is generated and returned to the user. + * + * If the user accesses this via the proc interface, it will be returned + * as an ASCII string in the standard UUID format. If accessed via the + * sysctl system call, it is returned as 16 bytes of binary data. + */ +static int proc_do_uuid(ctl_table *table, int write, struct file *filp, + void __user *buffer, size_t *lenp, loff_t *ppos) +{ + ctl_table fake_table; + unsigned char buf[64], tmp_uuid[16], *uuid; + + uuid = table->data; + if (!uuid) { + uuid = tmp_uuid; + uuid[8] = 0; + } + if (uuid[8] == 0) + generate_random_uuid(uuid); + + sprintf(buf, "%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x%02x-" + "%02x%02x%02x%02x%02x%02x", + uuid[0], uuid[1], uuid[2], uuid[3], + uuid[4], uuid[5], uuid[6], uuid[7], + uuid[8], uuid[9], uuid[10], uuid[11], + uuid[12], uuid[13], uuid[14], uuid[15]); + fake_table.data = buf; + fake_table.maxlen = sizeof(buf); + + return proc_dostring(&fake_table, write, filp, buffer, lenp, ppos); +} + +static int uuid_strategy(ctl_table *table, int __user *name, int nlen, + void __user *oldval, size_t __user *oldlenp, + void __user *newval, size_t newlen, void **context) +{ + unsigned char tmp_uuid[16], *uuid; + unsigned int len; + + if (!oldval || !oldlenp) + return 1; + + uuid = table->data; + if (!uuid) { + uuid =
tmp_uuid; + uuid[8] = 0; + } + if (uuid[8] == 0) + generate_random_uuid(uuid); + + if (get_user(len, oldlenp)) + return -EFAULT; + if (len) { + if (len > 16) + len = 16; + if (copy_to_user(oldval, uuid, len) || + put_user(len, oldlenp)) + return -EFAULT; + } + return 1; +} + +ctl_table random_table[] = { + { + .ctl_name = RANDOM_POOLSIZE, + .procname = "poolsize", + .data = &sysctl_poolsize, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = &proc_do_poolsize, + .strategy = &poolsize_strategy, + }, + { + .ctl_name = RANDOM_ENTROPY_COUNT, + .procname = "entropy_avail", + .maxlen = sizeof(int), + .mode = 0444, + .proc_handler = &proc_dointvec, + }, + { + .ctl_name = RANDOM_READ_THRESH, + .procname = "read_wakeup_threshold", + .data = &random_read_wakeup_thresh, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = &proc_dointvec_minmax, + .strategy = &sysctl_intvec, + .extra1 = &min_read_thresh, + .extra2 = &max_read_thresh, + }, + { + .ctl_name = RANDOM_WRITE_THRESH, + .procname = "write_wakeup_threshold", + .data = &random_write_wakeup_thresh, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = &proc_dointvec_minmax, + .strategy = &sysctl_intvec, + .extra1 = &min_write_thresh, + .extra2 = &max_write_thresh, + }, + { + .ctl_name = RANDOM_BOOT_ID, + .procname = "boot_id", + .data = &sysctl_bootid, + .maxlen = 16, + .mode = 0444, + .proc_handler = &proc_do_uuid, + .strategy = &uuid_strategy, + }, + { + .ctl_name = RANDOM_UUID, + .procname = "uuid", + .maxlen = 16, + .mode = 0444, + .proc_handler = &proc_do_uuid, + .strategy = &uuid_strategy, + }, + { + .ctl_name = RANDOM_DIGEST_ALGO, + .procname = "digest_algo", + .maxlen = 16, + .mode = 0444, + .proc_handler = &proc_dostring, + }, + { + .ctl_name = RANDOM_CIPHER_ALGO, + .procname = "cipher_algo", + .maxlen = 16, + .mode = 0444, + .proc_handler = &proc_dostring, + }, + { .ctl_name = 0 } +}; + +static void sysctl_init_random(struct entropy_store *random_state) +{ + int i; + + /* If the sys-admin 
doesn't want people to know how fast + * random events are happening, he can set the read-threshold + * down to zero so /dev/random never blocks. Default is to block. + * This is for the paranoid loonies who think frequency analysis + * would lead to something. + */ + min_read_thresh = 0; + min_write_thresh = 0; + max_read_thresh = max_write_thresh = POOLBITS; + for (i=0; random_table[i].ctl_name!=0; i++) { + switch (random_table[i].ctl_name) { + case RANDOM_ENTROPY_COUNT: + random_table[i].data = &random_state->entropy_count; + break; + + case RANDOM_DIGEST_ALGO: + random_table[i].data = (void*)random_state->digestAlgo; + break; + + case RANDOM_CIPHER_ALGO: + random_table[i].data = (void*)random_state->cipherAlgo; + break; + + default: + break; + } + } +} +#endif /* CONFIG_SYSCTL */ + +/******************************************************************** + * + * Random functions for networking + * + ********************************************************************/ + +/* + * TCP initial sequence number picking. This uses the random number + * generator to pick an initial secret value. This value is encrypted + * with the TCP endpoint information to provide a unique starting point + * for each pair of TCP endpoints. This defeats attacks which rely on + * guessing the initial TCP sequence number. This algorithm was + * suggested by Steve Bellovin, modified by Jean-Luc Cooke. + * + * Using a very strong hash was taking an appreciable amount of the total + * TCP connection establishment time, so this is a weaker hash, + * compensated for by changing the secret periodically. This was changed + * again by Jean-Luc Cooke to use AES256-CBC encryption which is faster + * still (see `/usr/bin/openssl speed md4 sha1 aes`) + */ + +/* This should not be decreased so low that ISNs wrap too fast.
*/ +#define REKEY_INTERVAL 300 +/* + * Bit layout of the tcp sequence numbers (before adding current time): + * bit 24-31: increased after every key exchange + * bit 0-23: hash(source,dest) + * + * The implementation is similar to the algorithm described + * in the Appendix of RFC 1185, except that + * - it uses a 1 MHz clock instead of a 250 kHz clock + * - it performs a rekey every 5 minutes, which is equivalent + * to a (source,dest) tuple dependent forward jump of the + * clock by 0..2^(HASH_BITS+1) + * + * Thus the average ISN wraparound time is 68 minutes instead of + * 4.55 hours. + * + * SMP cleanup and lock avoidance with poor man's RCU. + * Manfred Spraul <manfred@colorfullife.com> + * + */ +#define COUNT_BITS 8 +#define COUNT_MASK ( (1<<COUNT_BITS)-1) +#define HASH_BITS 24 +#define HASH_MASK ( (1<<HASH_BITS)-1 ) + +static spinlock_t ip_lock = SPIN_LOCK_UNLOCKED; +static unsigned int ip_cnt, network_count; + +static void __check_and_rekey(time_t time) +{ + u8 tmp[RANDOM_MAX_KEY_SIZE]; + spin_lock_bh(&ip_lock); + + get_random_bytes(tmp, random_state->keysize); + crypto_cipher_setkey(random_state->networkCipher, + (const u8*)tmp, + random_state->keysize); + random_state->networkCipher_ready = 1; + network_count = (ip_cnt & COUNT_MASK) << HASH_BITS; + mb(); + ip_cnt++; + + spin_unlock_bh(&ip_lock); + return; +} + +static inline void check_and_rekey(time_t time) +{ + static time_t rekey_time=0; + + rmb(); + if (!rekey_time || (time - rekey_time) > REKEY_INTERVAL) { + __check_and_rekey(time); + rekey_time = time; + } + + return; +} + +#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE) +__u32 secure_tcpv6_sequence_number(__u32 *saddr, __u32 *daddr, + __u16 sport, __u16 dport) +{ + struct timeval tv; + __u32 seq; + u8 tmp[RANDOM_MAX_BLOCK_SIZE]; + struct scatterlist sgtmp[1]; + + /* + * The procedure is the same as for IPv4, but addresses are longer. + * Thus we must use two AES operations. + */ + + do_gettimeofday(&tv); /* We need the usecs below...
*/ + check_and_rekey(tv.tv_sec); + + sgtmp[0].page = virt_to_page(tmp); + sgtmp[0].offset = offset_in_page(tmp); + sgtmp[0].length = random_state->blocksize; + + /* + * AES256 is 2.5 times faster than MD4 by openssl tests. + * We can afford to encrypt 2 blocks in CBC with + * an IV={(sport)<<16 | dport, 0, 0, 0} + * + * seq = ct[0], ct = Enc-CBC(Key, {ports}, {daddr, saddr}); + * = Enc(Key, saddr xor Enc(Key, daddr)) + */ + + /* PT0 = daddr */ + memcpy(tmp, daddr, random_state->blocksize); + /* IV = {ports,0,0,0} */ + tmp[0] ^= (sport<<16) | dport; + crypto_cipher_encrypt(random_state->networkCipher, sgtmp, sgtmp, 1); + /* PT1 = saddr */ + random_state->networkCipher->crt_cipher.cit_xor_block(tmp, (const u8*)saddr); + crypto_cipher_encrypt(random_state->networkCipher, sgtmp, sgtmp, 1); + + seq = tmp[0]; + seq += network_count; + seq += tv.tv_usec + tv.tv_sec*1000000; + + return seq; +} +EXPORT_SYMBOL(secure_tcpv6_sequence_number); + +__u32 secure_ipv6_id(__u32 *daddr) +{ + __u32 tmp[RANDOM_MAX_BLOCK_SIZE]; + struct scatterlist sgtmp[1]; + + check_and_rekey(get_seconds()); + + memcpy(tmp, daddr, random_state->blocksize); + sgtmp[0].page = virt_to_page(tmp); + sgtmp[0].offset = offset_in_page(tmp); + sgtmp[0].length = random_state->blocksize; + + /* id = tmp[0], tmp = Enc(Key, daddr); */ + crypto_cipher_encrypt(random_state->networkCipher, sgtmp, sgtmp, 1); + + return tmp[0]; +} + +EXPORT_SYMBOL(secure_ipv6_id); +#endif + + +__u32 secure_tcp_sequence_number(__u32 saddr, __u32 daddr, + __u16 sport, __u16 dport) +{ + struct timeval tv; + __u32 seq; + __u32 tmp[RANDOM_MAX_BLOCK_SIZE]; + struct scatterlist sgtmp[1]; + + /* + * Pick a random secret every REKEY_INTERVAL seconds. + */ + do_gettimeofday(&tv); /* We need the usecs below... */ + check_and_rekey(tv.tv_sec); + + /* + * Pick a unique starting offset for each TCP connection's endpoints + * (saddr, daddr, sport, dport).
+ * The endpoint words below are encrypted with the periodically + * rekeyed network cipher. + */ + /* + * AES256 is 2.5 times faster than MD4 by openssl tests. + * We can afford to encrypt 1 block + * + * seq = ct[0], ct = Enc(Key, {(sport<<16)|dport, daddr, saddr, 0}) + */ + + tmp[0] = (sport<<16) | dport; + tmp[1] = daddr; + tmp[2] = saddr; + tmp[3] = 0; + sgtmp[0].page = virt_to_page(tmp); + sgtmp[0].offset = offset_in_page(tmp); + sgtmp[0].length = random_state->blocksize; + crypto_cipher_encrypt(random_state->networkCipher, sgtmp, sgtmp, 1); + + seq = tmp[0]; + seq += network_count; + /* + * As close as possible to RFC 793, which + * suggests using a 250 kHz clock. + * Further reading shows this assumes 2 Mb/s networks. + * For 10 Mb/s Ethernet, a 1 MHz clock is appropriate. + * That's funny, Linux has one built in! Use it! + * (Networks are faster now - should this be increased?) + */ + seq += tv.tv_usec + tv.tv_sec*1000000; + +#if 0 + printk("init_seq(%lx, %lx, %d, %d) = %d\n", + saddr, daddr, sport, dport, seq); +#endif + return seq; +} + +EXPORT_SYMBOL(secure_tcp_sequence_number); + +/* The code below is shamelessly stolen from secure_tcp_sequence_number(). + * All blames to Andrey V. Savochkin <saw@msu.ru>. + * Changed by Jean-Luc Cooke <jlcooke@certainkey.com> to use AES & C.A.P.I. + */ +__u32 secure_ip_id(__u32 daddr) +{ + struct scatterlist sgtmp[1]; + __u32 tmp[RANDOM_MAX_BLOCK_SIZE]; + + check_and_rekey(get_seconds()); + + /* + * Pick a unique starting offset for each IP destination. + * id = ct[0], ct = Enc(Key, {daddr,0,0,0}); + */ + tmp[0] = daddr; + tmp[1] = 0; + tmp[2] = 0; + tmp[3] = 0; + sgtmp[0].page = virt_to_page(tmp); + sgtmp[0].offset = offset_in_page(tmp); + sgtmp[0].length = random_state->blocksize; + + crypto_cipher_encrypt(random_state->networkCipher, sgtmp, sgtmp, 1); + + return tmp[0]; +} + +#ifdef CONFIG_SYN_COOKIES +/* + * Secure SYN cookie computation. This is the algorithm worked out by + * Dan Bernstein and Eric Schenk. + * + * For linux I implement the 1 minute counter by looking at the jiffies clock.
+ * The count is passed in as a parameter, so this code doesn't much care. + * + * SYN cookie (and seq# & id#) Changed in 2004 by Jean-Luc Cooke + * <jlcooke@certainkey.com> to use the C.A.P.I. and AES256. + */ + +#define COOKIEBITS 24 /* Upper bits store count */ +#define COOKIEMASK (((__u32)1 << COOKIEBITS) - 1) + +__u32 secure_tcp_syn_cookie(__u32 saddr, __u32 daddr, __u16 sport, + __u16 dport, __u32 sseq, __u32 count, __u32 data) +{ + struct scatterlist sg[1]; + __u32 tmp[4]; + + /* + * Compute the secure sequence number. + * + * Output is the 32bit tag of a CBC-MAC of + * PT={count,0,0,0} with IV={saddr,daddr,sport|dport,sseq} + * cookie = {<8bit count>, + * truncate_24bit( + * Encrypt(Sec, {saddr,daddr,sport|dport,sseq}) + * ) + * } + * + * DJB wrote (http://cr.yp.to/syncookies/archive) about how to do this + * with hash algorithms. + * - we can replace the two SHA1s used in the previous kernel with 1 AES + * and make things 5x faster + * - I'd like to propose we replace the two whitenings with a + * single operation, since we were only using addition modulo 2^32 of + * all these values anyway. Not to mention the hashes differ only in + * that the second processes more data... so why not drop the first hash? + * We did learn that addition is commutative and associative long ago. + * - by replacing two SHA1s and addition modulo 2^32 with encryption of + * a 32bit value using CAPI we've made it 1,000,000,000 times easier + * to understand what is going on.
+ */ + + tmp[0] = saddr; + tmp[1] = daddr; + tmp[2] = (sport << 16) + dport; + tmp[3] = sseq; + + sg[0].page = virt_to_page(tmp); + sg[0].offset = offset_in_page(tmp); + sg[0].length = random_state->blocksize; + if (!random_state->networkCipher_ready) { + check_and_rekey(get_seconds()); + } + /* tmp[]/sg[0] = Enc(Sec, {saddr,daddr,sport|dport,sseq}) */ + crypto_cipher_encrypt(random_state->networkCipher, sg, sg, 1); + + /* cookie = CTR encrypt of 8-bit-count and 24-bit-data */ + return tmp[0] ^ ( (count << COOKIEBITS) | (data & COOKIEMASK) ); +} + +/* + * This retrieves the small "data" value from the syncookie. + * If the syncookie is bad, the data returned will be out of + * range. This must be checked by the caller. + * + * The count value used to generate the cookie must be within + * "maxdiff" of the current (passed-in) "count". The return value + * is (__u32)-1 if this test fails. + */ +__u32 check_tcp_syn_cookie(__u32 cookie, __u32 saddr, __u32 daddr, __u16 sport, + __u16 dport, __u32 sseq, __u32 count, __u32 maxdiff) +{ + struct scatterlist sg[1]; + __u32 tmp[4], thiscount, diff; + + if (random_state == NULL || !random_state->networkCipher_ready) + return (__u32)-1; /* Well, duh!
*/ + + tmp[0] = saddr; + tmp[1] = daddr; + tmp[2] = (sport << 16) + dport; + tmp[3] = sseq; + sg[0].page = virt_to_page(tmp); + sg[0].offset = offset_in_page(tmp); + sg[0].length = random_state->blocksize; + crypto_cipher_encrypt(random_state->networkCipher, sg, sg, 1); + + /* CTR decrypt the cookie */ + cookie ^= tmp[0]; + + /* top 8 bits are 'count' */ + thiscount = cookie >> COOKIEBITS; + + diff = count - thiscount; + if (diff >= maxdiff) + return (__u32)-1; + + /* bottom 24 bits are 'data' */ + return cookie & COOKIEMASK; +} +#endif diff -X exclude -Nur linux-2.6.8.1/drivers/char/random.c linux-2.6.8.1-rand2/drivers/char/random.c --- linux-2.6.8.1/drivers/char/random.c 2004-09-27 16:04:53.000000000 -0400 +++ linux-2.6.8.1-rand2/drivers/char/random.c 2004-09-28 23:25:46.000000000 -0400 @@ -261,6 +261,17 @@ #include <asm/io.h> /* + * In September 2004, Jean-Luc Cooke wrote a Fortuna RNG for Linux + * which was non-blocking and used the Cryptographic API. + * We use it now if the user wishes. + */ +#ifdef CONFIG_CRYPTO_RANDOM_FORTUNA + #warning using the Fortuna PRNG for /dev/random + #include "../crypto/random-fortuna.c" +#else /* CONFIG_CRYPTO_RANDOM_FORTUNA */ + #warning using the Linux Legacy PRNG for /dev/random + +/* * Configuration information */ #define DEFAULT_POOL_SIZE 512 @@ -2483,3 +2494,5 @@ return (cookie - tmp[17]) & COOKIEMASK; /* Leaving the data behind */ } #endif + +#endif /* CONFIG_CRYPTO_RANDOM_FORTUNA */ diff -X exclude -Nur linux-2.6.8.1/include/linux/sysctl.h linux-2.6.8.1-rand2/include/linux/sysctl.h --- linux-2.6.8.1/include/linux/sysctl.h 2004-08-14 06:55:33.000000000 -0400 +++ linux-2.6.8.1-rand2/include/linux/sysctl.h 2004-09-29 10:45:20.592695040 -0400 @@ -198,7 +198,9 @@ RANDOM_READ_THRESH=3, RANDOM_WRITE_THRESH=4, RANDOM_BOOT_ID=5, - RANDOM_UUID=6 + RANDOM_UUID=6, + RANDOM_DIGEST_ALGO=7, + RANDOM_CIPHER_ALGO=8 }; /* /proc/sys/kernel/pty */ ^ permalink raw reply [flat|nested] 28+ messages in thread
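The reseed schedule underlying the pools in the patch above can be seen in isolation. In Ferguson and Schneier's Fortuna, pool i contributes to reseed number r exactly when 2^i divides r, so pool i is drained only on every 2^i-th reseed; the higher pools hoard entropy long enough that an attacker who compromises the state cannot keep up with the eventual catastrophic reseed. The loop in random_reseed() approximates this with TESTBIT() on reseed_count. A minimal userspace sketch of the textbook selection rule (illustrative only; `fortuna_pool_mask` is not part of the patch):

```c
#include <stdint.h>

/* Bitmask of the pools drained on reseed number r, following the
 * Fortuna rule: pool i participates iff 2^i divides r.  Equivalently,
 * pools 0..tz(r) are drained, where tz(r) is the number of trailing
 * zero bits of r. */
uint32_t fortuna_pool_mask(uint32_t reseed_count)
{
    uint32_t mask = 1;        /* pool 0 is drained on every reseed */
    unsigned int i = 1;

    while (reseed_count && (reseed_count & 1u) == 0) {
        mask |= 1u << i++;    /* 2^i still divides the reseed number */
        reseed_count >>= 1;
    }
    return mask;
}
```

For example, reseed 1 drains only pool 0 (mask 0x1), reseed 2 drains pools 0-1 (0x3), reseed 4 drains pools 0-2 (0x7), and pool 10 first participates at reseed 1024. This geometric schedule is what lets Fortuna dispense with entropy estimation for its higher pools.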
* Re: [PROPOSAL/PATCH 2] Fortuna PRNG in /dev/random 2004-09-30 0:21 ` Jean-Luc Cooke @ 2004-09-30 4:23 ` Jean-Luc Cooke 2004-09-30 6:50 ` James Morris ` (2 more replies) 0 siblings, 3 replies; 28+ messages in thread From: Jean-Luc Cooke @ 2004-09-30 4:23 UTC (permalink / raw) To: Theodore Ts'o, linux, linux-kernel, cryptoapi; +Cc: jmorris [-- Attachment #1: Type: text/plain, Size: 351 bytes --] This should be the last one for a while. v2.1.4 crypto/random-fortuna.c Ted, since this is a crypto-API feature as well as an optional replacement to /dev/random, should I be passing this through James or both of you? Cheers, JLC On Wed, Sep 29, 2004 at 08:21:00PM -0400, Jean-Luc Cooke wrote: > Damn, > > Need to eat me some brain-food. > > JLC [-- Attachment #2: fortuna-2.6.8.1.patch --] [-- Type: text/plain, Size: 64718 bytes --] diff -X exclude -Nur linux-2.6.8.1/crypto/Kconfig linux-2.6.8.1-rand2/crypto/Kconfig --- linux-2.6.8.1/crypto/Kconfig 2004-08-14 06:56:22.000000000 -0400 +++ linux-2.6.8.1-rand2/crypto/Kconfig 2004-09-28 23:30:04.000000000 -0400 @@ -9,6 +9,15 @@ help This option provides the core Cryptographic API. +config CRYPTO_RANDOM_FORTUNA + bool "The Fortuna RNG" + help + Replaces the legacy Linux RNG with one using the crypto API + and Fortuna by Ferguson and Schneier. Entropy estimation and + a throttled /dev/random remain. Improvements include faster + /dev/urandom output and event input mixing. + Note: Requires AES and SHA256 to be built-in. + config CRYPTO_HMAC bool "HMAC support" depends on CRYPTO diff -X exclude -Nur linux-2.6.8.1/crypto/random-fortuna.c linux-2.6.8.1-rand2/crypto/random-fortuna.c --- linux-2.6.8.1/crypto/random-fortuna.c 1969-12-31 19:00:00.000000000 -0500 +++ linux-2.6.8.1-rand2/crypto/random-fortuna.c 2004-09-30 00:16:14.753826744 -0400 @@ -0,0 +1,2092 @@ +/* + * random-fortuna.c -- A cryptographically strong random number generator + * using Fortuna.
 + * + * Version 2.1.4, last modified 30-Sep-2004 + * Change log: + * v2.1.4: + * - Fixed a flaw where, in some situations, /dev/random would not block. + * v2.1.3: + * - Added a separate round-robin index for user inputs. Prevents a + * super-clever user from forcing all system (unknown) random + * events from being fed into, say, pool-31. + * - Added a "can only extract RANDOM_MAX_EXTRACT_SIZE bytes at a time" + * limit to extract_entropy() + * v2.1.2: + * - Ts'o's (I love writing that!) recommendation to force reseeds + * to be at least 0.1 ms apart. + * v2.1.1: + * - Re-worked to keep the blocking /dev/random. Yes, I finally gave + * in to what everyone's been telling me. + * - Entropy accounting is *only* done on events going into pool-0 + * since it's used for every reseed. For those who expect /dev/random + * to only output data when the system is confident it has + * info-theoretic entropy to justify this output, this is the only + * sensible method to count entropy. + * v2.0: + * - Initial version + * + * Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999. All + * rights reserved. + * Copyright Jean-Luc Cooke, 2004. All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, and the entire permission notice in its entirety, + * including the disclaimer of warranties. + * 2. Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * 3. The name of the author may not be used to endorse or promote + * products derived from this software without specific prior + * written permission.
 + * + * ALTERNATIVELY, this product may be distributed under the terms of + * the GNU General Public License, in which case the provisions of the GPL are + * required INSTEAD OF the above restrictions. (This clause is + * necessary due to a potential bad interaction between the GPL and + * the restrictions contained in a BSD-style copyright.) + * + * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED + * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES + * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, ALL OF + * WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE + * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT + * OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR + * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE + * USE OF THIS SOFTWARE, EVEN IF NOT ADVISED OF THE POSSIBILITY OF SUCH + * DAMAGE. + */ + +/* + * Taken from random.c, updated by Jean-Luc Cooke <jlcooke@certainkey.com> + * (now, with legal B.S. out of the way.....) + * + * This routine gathers environmental noise from device drivers, etc., + * and returns good random numbers, suitable for cryptographic use. + * Besides the obvious cryptographic uses, these numbers are also good + * for seeding TCP sequence numbers, and other places where it is + * desirable to have numbers which are not only random, but hard to + * predict by an attacker. + * + * Theory of operation + * =================== + * + * Computers are very predictable devices. Hence it is extremely hard + * to produce truly random numbers on a computer --- as opposed to + * pseudo-random numbers, which can easily be generated by using an + * algorithm.
Unfortunately, it is very easy for attackers to guess + * the sequence of pseudo-random number generators, and for some + * applications this is not acceptable. So instead, we must try to + * gather "environmental noise" from the computer's environment, which + * must be hard for outside attackers to observe, and use that to + * generate random numbers. In a Unix environment, this is best done + * from inside the kernel. + * + * Sources of randomness from the environment include inter-keyboard + * timings, inter-interrupt timings from some interrupts, and other + * events which are both (a) non-deterministic and (b) hard for an + * outside observer to measure. Randomness from these sources is + * added to an "entropy pool", which is mixed. + * As random bytes are mixed into the entropy pool, the routines keep + * an *estimate* of how many bits of randomness have been stored into + * the random number generator's internal state. + * + * Even if it is possible to analyze Fortuna in some clever way, as + * long as the amount of data returned from the generator is less than + * the inherent entropy we've estimated in the pool, the output data + * is totally unpredictable. For this reason, the routine decreases + * its internal estimate of how many bits of "true randomness" are + * contained in the entropy pool as it outputs random numbers. + * + * If this estimate goes to zero, the routine can still generate + * random numbers; however, an attacker may (at least in theory) be + * able to infer the future output of the generator from prior + * outputs. This requires successful cryptanalysis of Fortuna, which is + * not believed to be feasible, but there is a remote possibility. + * Nonetheless, these numbers should be useful for the vast majority + * of purposes.
+ * + * Exported interfaces ---- output + * =============================== + * + * There are three exported interfaces; the first is one designed to + * be used from within the kernel: + * + * void get_random_bytes(void *buf, int nbytes); + * + * This interface will return the requested number of random bytes, + * and place it in the requested buffer. + * + * The two other interfaces are two character devices /dev/random and + * /dev/urandom. /dev/random is suitable for use when very high + * quality randomness is desired (for example, for key generation or + * one-time pads), as it will only return a maximum of the number of + * bits of randomness (as estimated by the random number generator) + * contained in the entropy pool. + * + * The /dev/urandom device does not have this limit, and will return + * as many bytes as are requested. As more and more random bytes are + * requested without giving time for the entropy pool to recharge, + * this will result in random numbers that are merely cryptographically + * strong. For many applications, however, this is acceptable. + * + * Exported interfaces ---- input + * ============================== + * + * The current exported interfaces for gathering environmental noise + * from the devices are: + * + * void add_keyboard_randomness(unsigned char scancode); + * void add_mouse_randomness(__u32 mouse_data); + * void add_interrupt_randomness(int irq); + * + * add_keyboard_randomness() uses the inter-keypress timing, as well as the + * scancode as random inputs into the "entropy pool". + * + * add_mouse_randomness() uses the mouse interrupt timing, as well as + * the reported position of the mouse from the hardware. + * + * add_interrupt_randomness() uses the inter-interrupt timing as random + * inputs to the entropy pool. Note that not all interrupts are good + * sources of randomness! 
For example, the timer interrupt is not a + * good choice, because the periodicity of the interrupts is too + * regular, and hence predictable to an attacker. Disk interrupts are + * a better measure, since the timing of the disk interrupts is more + * unpredictable. + * + * All of these routines try to estimate how many bits of randomness a + * particular randomness source provides. They do this by keeping track of the + * first and second order deltas of the event timings. + * + * Ensuring unpredictability at system startup + * ============================================ + * + * When any operating system starts up, it will go through a sequence + * of actions that are fairly predictable by an adversary, especially + * if the start-up does not involve interaction with a human operator. + * This reduces the actual number of bits of unpredictability in the + * entropy pool below the value in entropy_count. In order to + * counteract this effect, it helps to carry information in the + * entropy pool across shut-downs and start-ups. To do this, put the + * following lines in an appropriate script which is run during the boot + * sequence: + * + * echo "Initializing random number generator..." + * random_seed=/var/run/random-seed + * # Carry a random seed from start-up to start-up + * # Load and then save the whole entropy pool + * if [ -f $random_seed ]; then + * cat $random_seed >/dev/urandom + * else + * touch $random_seed + * fi + * chmod 600 $random_seed + * dd if=/dev/urandom of=$random_seed count=8 bs=256 + * + * and the following lines in an appropriate script which is run as + * the system is shut down: + * + * # Carry a random seed from shut-down to start-up + * # Save the whole entropy pool + * # Fortuna resists using all of its pool material, so we need to + * # draw 8 separate times (count=8) to ensure we collect the entropy + * # from pools 0-3. count=2048 would cover pools 0-10, etc. + * echo "Saving random seed..."
 + * random_seed=/var/run/random-seed + * touch $random_seed + * chmod 600 $random_seed + * dd if=/dev/urandom of=$random_seed count=8 bs=256 + * + * For example, on most modern systems using the System V init + * scripts, such code fragments would be found in + * /etc/rc.d/init.d/random. On older Linux systems, the correct script + * location might be in /etc/rcb.d/rc.local or /etc/rc.d/rc.0. + * + * Effectively, these commands cause the contents of the entropy pool + * to be saved at shut-down time and reloaded into the entropy pool at + * start-up. (The 'dd' in the addition to the bootup script is to + * make sure that /etc/random-seed is different for every start-up, + * even if the system crashes without executing rc.0.) Even with + * complete knowledge of the start-up activities, predicting the state + * of the entropy pool requires knowledge of the previous history of + * the system. + * + * Configuring the /dev/random driver under Linux + * ============================================== + * + * The /dev/random driver under Linux uses minor numbers 8 and 9 of + * the /dev/mem major number (#1). So if your system does not have + * /dev/random and /dev/urandom created already, they can be created + * by using the commands: + * + * mknod /dev/random c 1 8 + * mknod /dev/urandom c 1 9 + * + * Acknowledgements: + * ================= + * + * Ideas for constructing this random number generator were derived + * from Pretty Good Privacy's random number generator, and from private + * discussions with Phil Karn. Colin Plumb provided a faster random + * number generator, which sped up the mixing function of the entropy + * pool, taken from PGPfone. Dale Worley has also contributed many + * useful ideas and suggestions to improve this driver. + * + * Any flaws in the design are solely my (jlcooke) responsibility, and + * should not be attributed to Phil, Colin, or any of the authors of PGP + * or the legacy random.c (Ted Ts'o).
 + * + * Further background information on this topic may be obtained from + * RFC 1750, "Randomness Recommendations for Security", by Donald + * Eastlake, Steve Crocker, and Jeff Schiller, and from Chapter 10 of + * Practical Cryptography by Ferguson and Schneier. + */ + +#include <linux/utsname.h> +#include <linux/config.h> +#include <linux/module.h> +#include <linux/kernel.h> +#include <linux/major.h> +#include <linux/string.h> +#include <linux/fcntl.h> +#include <linux/slab.h> +#include <linux/random.h> +#include <linux/poll.h> +#include <linux/init.h> +#include <linux/fs.h> +#include <linux/workqueue.h> +#include <linux/genhd.h> +#include <linux/interrupt.h> +#include <linux/spinlock.h> +#include <linux/percpu.h> +#include <linux/crypto.h> +#include <../crypto/internal.h> + +#include <asm/scatterlist.h> +#include <asm/processor.h> +#include <asm/uaccess.h> +#include <asm/irq.h> +#include <asm/io.h> + + +/* + * Configuration information + */ +#define BATCH_ENTROPY_SIZE 256 +/* milliseconds between random_reseeds for non-blocking reads */ +#define RANDOM_RESEED_INTERVAL 100 +/* + * Number of bytes you can extract at a time, 1MB is recommended in + * Practical Cryptography rev-0 + */ +#define RANDOM_MAX_EXTRACT_SIZE (1<<20) +#define USE_SHA256 +#define USE_AES + +/* + * Compile-time checking for our desired message digest + */ +#if defined USE_SHA256 + #if !CONFIG_CRYPTO_SHA256 + #error SHA256 not a built-in module, Fortuna configured to use it. + #endif + #define RANDOM_DEFAULT_DIGEST_ALGO "sha256" +#elif defined USE_WHIRLPOOL + #if !CONFIG_CRYPTO_WHIRLPOOL + #error WHIRLPOOL not a built-in module, Fortuna configured to use it. + #endif + #define RANDOM_DEFAULT_DIGEST_ALGO "whirlpool" +#else + #error Desired message digest algorithm not found +#endif + +/* + * Compile-time checking for our desired block cipher + */ +#if defined USE_AES + #if (!CONFIG_CRYPTO_AES && !CONFIG_CRYPTO_AES_586) + #error AES not a built-in module, Fortuna configured to use it.
+ #endif + #define RANDOM_DEFAULT_CIPHER_ALGO "aes" +#elif defined USE_TWOFISH + #if (!CONFIG_CRYPTO_TWOFISH && !CONFIG_CRYPTO_TWOFISH_586) + #error TWOFISH not a built-in module, Fortuna configured to use it. + #endif + #define RANDOM_DEFAULT_CIPHER_ALGO "twofish" +#else + #error Desired block cipher algorithm not found +#endif /* USE_AES */ + +#define DEFAULT_POOL_NUMBER 5 /* 2^{5} = 32 pools */ +#define DEFAULT_POOL_SIZE ( (1<<DEFAULT_POOL_NUMBER) * 256) +/* largest block of random data to extract at a time when in blocking-mode */ +#define TMP_BUF_SIZE 512 +/* SHA512/WHIRLPOOL have 64bytes == 512 bits */ +#define RANDOM_MAX_DIGEST_SIZE 64 +/* AES256 has 16byte blocks == 128 bits */ +#define RANDOM_MAX_BLOCK_SIZE 16 +/* AES256 has 32byte keys == 256 bits */ +#define RANDOM_MAX_KEY_SIZE 32 + +/* + * The minimum number of bits of entropy before we wake up a read on + * /dev/random. We also wait for reseed_count>0 and we do a + * random_reseed() once we do wake up. + */ +static int random_read_wakeup_thresh = 64; + +/* + * If the entropy count falls under this number of bits, then we + * should wake up processes which are selecting or polling on write + * access to /dev/random. + */ +static int random_write_wakeup_thresh = 128; + +/* + * When the input pool goes over trickle_thresh, start dropping most + * samples to avoid wasting CPU time and reduce lock contention. 
+ */ + +static int trickle_thresh = DEFAULT_POOL_SIZE * 7; + +static DEFINE_PER_CPU(int, trickle_count) = 0; + +#define POOLBYTES\ + ( (1<<random_state->pool_number) * random_state->digestsize ) +#define POOLBITS ( POOLBYTES * 8 ) + +/* + * Linux 2.2 compatibility + */ +#ifndef DECLARE_WAITQUEUE +#define DECLARE_WAITQUEUE(WAIT, PTR) struct wait_queue WAIT = { PTR, NULL } +#endif +#ifndef DECLARE_WAIT_QUEUE_HEAD +#define DECLARE_WAIT_QUEUE_HEAD(WAIT) struct wait_queue *WAIT +#endif + +/* + * Static global variables + */ +static struct entropy_store *random_state; /* The default global store */ +static DECLARE_WAIT_QUEUE_HEAD(random_read_wait); +static DECLARE_WAIT_QUEUE_HEAD(random_write_wait); + +/* + * Forward procedure declarations + */ +#ifdef CONFIG_SYSCTL +static void sysctl_init_random(struct entropy_store *random_state); +#endif + +/***************************************************************** + * + * Utility functions, with some ASM defined functions for speed + * purposes + * + *****************************************************************/ + +/* + * More asm magic.... + * + * For entropy estimation, we need to do an integral base 2 + * logarithm. + * + * Note the "12bits" suffix - this is used for numbers between + * 0 and 4095 only. This allows a few shortcuts. 
+ */ +#if 0 /* Slow but clear version */ +static inline __u32 int_ln_12bits(__u32 word) +{ + __u32 nbits = 0; + + while (word >>= 1) + nbits++; + return nbits; +} +#else /* Faster (more clever) version, courtesy Colin Plumb */ +static inline __u32 int_ln_12bits(__u32 word) +{ + /* Smear msbit right to make an n-bit mask */ + word |= word >> 8; + word |= word >> 4; + word |= word >> 2; + word |= word >> 1; + /* Remove one bit to make this a logarithm */ + word >>= 1; + /* Count the bits set in the word */ + word -= (word >> 1) & 0x555; + word = (word & 0x333) + ((word >> 2) & 0x333); + word += (word >> 4); + word += (word >> 8); + return word & 15; +} +#endif + +#if 0 + #define DEBUG_ENT(fmt, arg...) printk("random: " fmt, ## arg) +#else + #define DEBUG_ENT(fmt, arg...) do {} while (0) +#endif +#if 0 + #define STATS_ENT(fmt, arg...) printk("random-stats: " fmt, ## arg) +#else + #define STATS_ENT(fmt, arg...) do {} while (0) +#endif + + +/********************************************************************** + * + * OS independent entropy store. Here are the functions which handle + * storing entropy in an entropy pool. 
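The fast integer logarithm above is easy to cross-check in user space. Below is a small standalone harness (hypothetical test code, not part of the patch) comparing the bit-smearing version against the slow loop over the full 12-bit input range:

```c
#include <stdint.h>

/* Slow but clear reference: integral base-2 logarithm */
static uint32_t int_ln_slow(uint32_t word)
{
	uint32_t nbits = 0;

	while (word >>= 1)
		nbits++;
	return nbits;
}

/* Fast version from the patch: smear the MSB right to build a mask,
 * drop one bit, then count the set bits (12-bit SWAR popcount). */
static uint32_t int_ln_12bits(uint32_t word)
{
	word |= word >> 8;
	word |= word >> 4;
	word |= word >> 2;
	word |= word >> 1;
	word >>= 1;
	word -= (word >> 1) & 0x555;
	word = (word & 0x333) + ((word >> 2) & 0x333);
	word += (word >> 4);
	word += (word >> 8);
	return word & 15;
}

/* Returns 1 if the two versions agree on all 4096 valid inputs */
static int int_ln_check(void)
{
	uint32_t w;

	for (w = 0; w < 4096; w++)
		if (int_ln_12bits(w) != int_ln_slow(w))
			return 0;
	return 1;
}
```

The smear turns the input into a mask of 2^(n+1)-1, the shift drops it to 2^n-1, and the popcount then yields n, so both functions return floor(log2(word)) for nonzero inputs and 0 for zero.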
 + * + **********************************************************************/ + +struct entropy_store { + const char *digestAlgo; + unsigned int digestsize; + struct crypto_tfm *pools[1<<DEFAULT_POOL_NUMBER]; + /* optional, handy for statistics */ + unsigned int pools_bytes[1<<DEFAULT_POOL_NUMBER]; + + const char *cipherAlgo; + /* the key */ + unsigned char key[RANDOM_MAX_DIGEST_SIZE]; + unsigned int keysize; + /* the CTR value */ + unsigned char iv[16]; + unsigned int blocksize; + struct crypto_tfm *cipher; + + /* 2^pool_number # of pools */ + unsigned int pool_number; + /* current pool to add into */ + unsigned int pool_index; + /* size of the first pool */ + unsigned int pool0_len; + /* number of times we have reseeded */ + unsigned int reseed_count; + /* time in msec of the last reseed */ + time_t reseed_time; + /* digest used during random_reseed() */ + struct crypto_tfm *reseedHash; + /* cipher used for network randomness */ + struct crypto_tfm *networkCipher; + /* flag indicating if networkCipher has been seeded */ + char networkCipher_ready; + + /* read-write data: */ + spinlock_t lock ____cacheline_aligned_in_smp; + int entropy_count; +}; + +/* + * Initialize the entropy store. The input argument is the size of + * the random pool. + * + * Returns a negative error if there is a problem.
 */ +static int create_entropy_store(int poolnum, struct entropy_store **ret_bucket) +{ + struct entropy_store *r; + unsigned long pool_number; + int keysize, i, j; + + pool_number = poolnum; + + r = kmalloc(sizeof(struct entropy_store), GFP_KERNEL); + if (!r) { + return -ENOMEM; + } + + memset (r, 0, sizeof(struct entropy_store)); + r->pool_number = pool_number; + r->digestAlgo = RANDOM_DEFAULT_DIGEST_ALGO; + +DEBUG_ENT("create_entropy_store() pools=%u index=%u\n", + 1<<pool_number, r->pool_index); + for (i=0; i<(1<<pool_number); i++) { +DEBUG_ENT("create_entropy_store() i=%i index=%u\n", i, r->pool_index); + r->pools[i] = crypto_alloc_tfm(r->digestAlgo, 0); + if (r->pools[i] == NULL) { + for (j=0; j<i; j++) { + if (r->pools[j] != NULL) { + kfree(r->pools[j]); + } + } + kfree(r); + return -ENOMEM; + } + crypto_digest_init( r->pools[i] ); + } + r->lock = SPIN_LOCK_UNLOCKED; + *ret_bucket = r; + + r->cipherAlgo = RANDOM_DEFAULT_CIPHER_ALGO; + if ((r->cipher=crypto_alloc_tfm(r->cipherAlgo, 0)) == NULL) { + return -ENOMEM; + } + + /* If the HASH's output is greater than the cipher's keysize, truncate + * to the cipher's keysize */ + keysize = crypto_tfm_alg_max_keysize(r->cipher); + r->digestsize = crypto_tfm_alg_digestsize(r->pools[0]); + r->blocksize = crypto_tfm_alg_blocksize(r->cipher); + + r->keysize = (keysize < r->digestsize) ? keysize : r->digestsize; +DEBUG_ENT("create_RANDOM %u %u %u\n", keysize, r->digestsize, r->keysize); + + if (crypto_cipher_setkey(r->cipher, r->key, r->keysize)) { + return -EINVAL; + } + + /* digest used during random_reseed() */ + if ((r->reseedHash=crypto_alloc_tfm(r->digestAlgo, 0)) == NULL) { + return -ENOMEM; + } + /* cipher used for network randomness */ + if ((r->networkCipher=crypto_alloc_tfm(r->cipherAlgo, 0)) == NULL) { + return -ENOMEM; + } + + return 0; +} + +/* + * This function adds words into the entropy "pool". It does not + * update the entropy estimate.
The caller should call + * credit_entropy_store if this is appropriate. + */ +static void add_entropy_words(struct entropy_store *r, const __u32 *in, + int nwords, int dst_pool) +{ + unsigned long flags; + struct scatterlist sg[1]; + static unsigned int totalBytes=0; + + if (r == NULL) { + return; + } + + spin_lock_irqsave(&r->lock, flags); + + totalBytes += nwords * sizeof(__u32); + + sg[0].page = virt_to_page(in); + sg[0].offset = offset_in_page(in); + sg[0].length = nwords*sizeof(__u32); + + if (dst_pool == -1) { + r->pools_bytes[r->pool_index] += nwords * sizeof(__u32); + crypto_digest_update(r->pools[r->pool_index], sg, 1); + if (r->pool_index == 0) { + r->pool0_len += nwords*sizeof(__u32); + } + /* idx = (idx + 1) mod ( (2^N)-1 ) */ + r->pool_index = (r->pool_index + 1) + & ((1<<random_state->pool_number)-1); + } else { + /* Let's make sure nothing mean is happening... */ + dst_pool &= (1<<random_state->pool_number) - 1; + r->pools_bytes[dst_pool] += nwords * sizeof(__u32); + crypto_digest_update(r->pools[dst_pool], sg, 1); + } +DEBUG_ENT("r->pool0_len = %u\n", r->pool0_len); + + + spin_unlock_irqrestore(&r->lock, flags); +DEBUG_ENT("0 add_entropy_words() nwords=%u pool[i].bytes=%u total=%u\n", + nwords, r->pools_bytes[r->pool_index], totalBytes); +} + +/* + * Credit (or debit) the entropy store with n bits of entropy + */ +static void credit_entropy_store(struct entropy_store *r, int nbits) +{ + unsigned long flags; + + spin_lock_irqsave(&r->lock, flags); + + if (r->entropy_count + nbits < 0) { + DEBUG_ENT("negative entropy/overflow (%d+%d)\n", + r->entropy_count, nbits); + r->entropy_count = 0; + } else if (r->entropy_count + nbits > POOLBITS) { + r->entropy_count = POOLBITS; + } else { + r->entropy_count += nbits; + if (nbits) + DEBUG_ENT("%04d : added %d bits\n", + r->entropy_count, + nbits); + } + + spin_unlock_irqrestore(&r->lock, flags); +} + +/********************************************************************** + * + * Entropy batch input management 
 + * + * We batch entropy to be added to avoid increasing interrupt latency + * + **********************************************************************/ + +struct sample { + __u32 data[2]; + int credit; +}; + +static struct sample *batch_entropy_pool, *batch_entropy_copy; +static int batch_head, batch_tail; +static spinlock_t batch_lock = SPIN_LOCK_UNLOCKED; + +static int batch_max; +static void batch_entropy_process(void *private_); +static DECLARE_WORK(batch_work, batch_entropy_process, NULL); + +/* note: the size must be a power of 2 */ +static int __init batch_entropy_init(int size, struct entropy_store *r) +{ + batch_entropy_pool = kmalloc(size*sizeof(struct sample), GFP_KERNEL); + if (!batch_entropy_pool) + return -1; + batch_entropy_copy = kmalloc(size*sizeof(struct sample), GFP_KERNEL); + if (!batch_entropy_copy) { + kfree(batch_entropy_pool); + return -1; + } + batch_head = batch_tail = 0; + batch_work.data = r; + batch_max = size; + return 0; +} + +/* + * Changes to the entropy data are put into a queue rather than being added to + * the entropy counts directly. This is presumably to avoid doing heavy + * hashing calculations during an interrupt in add_timer_randomness(). + * Instead, the entropy is only added to the pool by keventd.
 */ +void batch_entropy_store(u32 a, u32 b, int num) +{ + int new; + unsigned long flags; + + if (!batch_max) + return; + + spin_lock_irqsave(&batch_lock, flags); + + batch_entropy_pool[batch_head].data[0] = a; + batch_entropy_pool[batch_head].data[1] = b; + batch_entropy_pool[batch_head].credit = num; + + if (((batch_head - batch_tail) & (batch_max-1)) >= (batch_max / 2)) { + /* + * Schedule it for the next timer tick: + */ + schedule_delayed_work(&batch_work, 1); + } + + new = (batch_head+1) & (batch_max-1); + if (new == batch_tail) { + DEBUG_ENT("batch entropy buffer full\n"); + } else { + batch_head = new; + } + + spin_unlock_irqrestore(&batch_lock, flags); +} + +EXPORT_SYMBOL(batch_entropy_store); + +/* + * Flush out the accumulated entropy operations, adding entropy to the passed + * store (normally random_state). If that store has enough entropy, alternate + * between randomizing the data of the primary and secondary stores. + */ +static void batch_entropy_process(void *private_) +{ + int max_entropy = POOLBITS; + unsigned head, tail; + + /* Mixing into the pool is expensive, so copy over the batch + * data and release the batch lock. The pool is at least half + * full, so don't worry too much about copying only the used + * part. + */ + spin_lock_irq(&batch_lock); + + memcpy(batch_entropy_copy, batch_entropy_pool, + batch_max*sizeof(struct sample)); + + head = batch_head; + tail = batch_tail; + batch_tail = batch_head; + + spin_unlock_irq(&batch_lock); + + while (head != tail) { + if (random_state->entropy_count >= max_entropy) { + max_entropy = POOLBITS; + } + /* + * Only credit if we're feeding into pool[0] + * Otherwise we'd be assuming entropy in pool[31] would be + * usable when we read. This is conservative, but it'll + * not over-credit our entropy estimate for users of + * /dev/random; /dev/urandom will not be affected.
+ */ + if (random_state->pool_index == 0) { + credit_entropy_store(random_state, + batch_entropy_copy[tail].credit); + } + add_entropy_words(random_state, + batch_entropy_copy[tail].data, 2, -1); +; + + tail = (tail+1) & (batch_max-1); + } + if (random_state->entropy_count >= random_read_wakeup_thresh + && random_state->reseed_count != 0) + wake_up_interruptible(&random_read_wait); +} + +/********************************************************************* + * + * Entropy input management + * + *********************************************************************/ + +/* There is one of these per entropy source */ +struct timer_rand_state { + __u32 last_time; + __s32 last_delta,last_delta2; + int dont_count_entropy:1; +}; + +static struct timer_rand_state keyboard_timer_state; +static struct timer_rand_state mouse_timer_state; +static struct timer_rand_state extract_timer_state; +static struct timer_rand_state *irq_timer_state[NR_IRQS]; + +/* + * This function adds entropy to the entropy "pool" by using timing + * delays. It uses the timer_rand_state structure to make an estimate + * of how many bits of entropy this call has added to the pool. + * + * The number "num" is also added to the pool - it should somehow describe + * the type of event which just happened. This is currently 0-255 for + * keyboard scan codes, and 256 upwards for interrupts. + * On the i386, this is assumed to be at most 16 bits, and the high bits + * are used for a high-resolution timer. 
 + * + */ +static void add_timer_randomness(struct timer_rand_state *state, unsigned num) +{ + __u32 time; + __s32 delta, delta2, delta3; + int entropy = 0; + + /* if over the trickle threshold, use only 1 in 4096 samples */ + if ( random_state->entropy_count > trickle_thresh && + (__get_cpu_var(trickle_count)++ & 0xfff)) + return; + +#if defined (__i386__) || defined (__x86_64__) + if (cpu_has_tsc) { + __u32 high; + rdtsc(time, high); + num ^= high; + } else { + time = jiffies; + } +#elif defined (__sparc_v9__) + unsigned long tick = tick_ops->get_tick(); + + time = (unsigned int) tick; + num ^= (tick >> 32UL); +#else + time = jiffies; +#endif + + /* + * Calculate number of bits of randomness we probably added. + * We take into account the first, second and third-order deltas + * in order to make our estimate. + */ + if (!state->dont_count_entropy) { + delta = time - state->last_time; + state->last_time = time; + + delta2 = delta - state->last_delta; + state->last_delta = delta; + + delta3 = delta2 - state->last_delta2; + state->last_delta2 = delta2; + + if (delta < 0) + delta = -delta; + if (delta2 < 0) + delta2 = -delta2; + if (delta3 < 0) + delta3 = -delta3; + if (delta > delta2) + delta = delta2; + if (delta > delta3) + delta = delta3; + + /* + * delta is now minimum absolute delta. + * Round down by 1 bit on general principles, + * and limit entropy estimate to 12 bits.
 + */ + delta >>= 1; + delta &= (1 << 12) - 1; + + entropy = int_ln_12bits(delta); + } + batch_entropy_store(num, time, entropy); +} + +void add_keyboard_randomness(unsigned char scancode) +{ + static unsigned char last_scancode; + /* ignore autorepeat (multiple key down w/o key up) */ + if (scancode != last_scancode) { + last_scancode = scancode; + add_timer_randomness(&keyboard_timer_state, scancode); + } +} + +EXPORT_SYMBOL(add_keyboard_randomness); + +void add_mouse_randomness(__u32 mouse_data) +{ + add_timer_randomness(&mouse_timer_state, mouse_data); +} + +EXPORT_SYMBOL(add_mouse_randomness); + +void add_interrupt_randomness(int irq) +{ + if (irq >= NR_IRQS || irq_timer_state[irq] == 0) + return; + + add_timer_randomness(irq_timer_state[irq], 0x100+irq); +} + +EXPORT_SYMBOL(add_interrupt_randomness); + +void add_disk_randomness(struct gendisk *disk) +{ + if (!disk || !disk->random) + return; + /* first major is 1, so we get >= 0x200 here */ + add_timer_randomness(disk->random, + 0x100+MKDEV(disk->major, disk->first_minor)); +} + +EXPORT_SYMBOL(add_disk_randomness); + +/********************************************************************* + * + * Entropy extraction routines + * + *********************************************************************/ + +#define EXTRACT_ENTROPY_USER 1 +#define EXTRACT_ENTROPY_LIMIT 4 + +static ssize_t extract_entropy(struct entropy_store *r, void * buf, + size_t nbytes, int flags); + +static inline void increment_iv(unsigned char *iv, const unsigned int IVsize) { + switch (IVsize) { + case 8: + if (!++((u32*)iv)[0]) + ++((u32*)iv)[1]; + break; + + case 16: + if (!++((u32*)iv)[0]) + if (!++((u32*)iv)[1]) + if (!++((u32*)iv)[2]) + ++((u32*)iv)[3]; + break; + + default: + { + int i; + for (i=0; i<IVsize; i++) + if (++iv[i]) + break; + } + break; + } +} + +/* + * Fortuna's Reseed + * + * Key' = hash(Key || hash(pool[a0]) || hash(pool[a1]) || ...) + * where {a0,a1,...} are factors of r->reseed_count+1 which are of the form + * 2^j, 0<=j.
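The reseed schedule just described can be sketched in a few lines of user-space C (a hypothetical illustration, not code from the patch): pool 0 is folded into every reseed, and pool i only into reseeds whose count is divisible by 2^i, so higher-numbered pools accumulate entropy longer before being consumed:

```c
/* Fortuna reseed schedule: return a bitmask with bit i set if pool i
 * takes part in reseed number reseed_count. Pool 0 participates in
 * every reseed; pool i participates only when 2^i divides the count,
 * i.e. once every 2^i reseeds. */
static unsigned int pools_used(unsigned int reseed_count)
{
	unsigned int mask = 1;	/* pool 0 always participates */
	unsigned int i = 1;

	while (i < 32 && reseed_count % (1u << i) == 0) {
		mask |= 1u << i;
		i++;
	}
	return mask;
}
```

So reseed 1 draws from pool 0 only, reseed 2 from pools 0-1, reseed 4 from pools 0-2, and so on; an attacker who can predict the fast pools still faces the slower pools' accumulated entropy.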
 + * Prevents backtracking attacks and, with event inputs, supports forward + * secrecy + */ +static void random_reseed(struct entropy_store *r, size_t nbytes, int flags) { + struct scatterlist sg[1]; + unsigned int i, deduct; + unsigned char tmp[RANDOM_MAX_DIGEST_SIZE]; + unsigned long cpuflags; + + deduct = (r->keysize < r->digestsize) ? r->keysize : r->digestsize; + + /* Hold lock while accounting */ + spin_lock_irqsave(&r->lock, cpuflags); + + DEBUG_ENT("%04d : trying to extract %d bits\n", + random_state->entropy_count, + deduct * 8); + + /* + * Don't extract more data than the entropy in the pooling system + */ + if (flags & EXTRACT_ENTROPY_LIMIT && nbytes >= r->entropy_count / 8) { + nbytes = r->entropy_count / 8; + } + + if (deduct*8 <= r->entropy_count) { + r->entropy_count -= deduct*8; + } else { + r->entropy_count = 0; + } + + if (r->entropy_count < random_write_wakeup_thresh) + wake_up_interruptible(&random_write_wait); + + DEBUG_ENT("%04d : debiting %d bits%s\n", + random_state->entropy_count, + deduct * 8, + flags & EXTRACT_ENTROPY_LIMIT ? "" : " (unlimited)"); + + r->reseed_count++; + r->pool0_len = 0; + + /* Entropy accounting done, release lock.
*/ + spin_unlock_irqrestore(&r->lock, cpuflags); + + DEBUG_ENT("random_reseed count=%u\n", r->reseed_count); + + crypto_digest_init(r->reseedHash); + + sg[0].page = virt_to_page(r->key); + sg[0].offset = offset_in_page(r->key); + sg[0].length = r->keysize; + crypto_digest_update(r->reseedHash, sg, 1); + +#define TESTBIT(VAL, N)\ + ( ((VAL) >> (N)) & 1 ) + for (i=0; i<(1<<r->pool_number); i++) { + /* using pool[i] if r->reseed_count is divisible by 2^i + * since 2^0 == 1, we always use pool[0] + */ + if ( (i==0) || TESTBIT(r->reseed_count,i)==0 ) { + crypto_digest_final(r->pools[i], tmp); + + sg[0].page = virt_to_page(tmp); + sg[0].offset = offset_in_page(tmp); + sg[0].length = r->keysize; + crypto_digest_update(r->reseedHash, sg, 1); + + crypto_digest_init(r->pools[i]); + /* Each pool carries its past state forward */ + crypto_digest_update(r->pools[i], sg, 1); + } else { + /* pool j is only used once every 2^j times */ + break; + } + } +#undef TESTBIT + + crypto_digest_final(r->reseedHash, r->key); + crypto_cipher_setkey(r->cipher, r->key, r->keysize); + increment_iv(r->iv, r->blocksize); +} + +static inline time_t get_msectime(void) { + struct timeval tv; + do_gettimeofday(&tv); + return (tv.tv_sec * 1000) + (tv.tv_usec / 1000); +} + +/* + * This function extracts randomness from the "entropy pool", and + * returns it in a buffer. This function computes how many remaining + * bits of entropy are left in the pool, but it does not restrict the + * number of bytes that are actually obtained. If the EXTRACT_ENTROPY_USER + * flag is given, then the buf pointer is assumed to be in user space. + */ +static ssize_t extract_entropy(struct entropy_store *r, void * buf, + size_t nbytes, int flags) +{ + ssize_t ret, i; + __u32 tmp[RANDOM_MAX_BLOCK_SIZE]; + struct scatterlist sgiv[1], sgtmp[1]; + time_t nowtime; + + /* Redundant, but just in case... 
*/ + if (r->entropy_count > POOLBITS) + r->entropy_count = POOLBITS; + + /* + * To keep the possibility of collisions down, limit the number of + * output bytes per block cipher key. + */ + if (RANDOM_MAX_EXTRACT_SIZE < nbytes) + nbytes = RANDOM_MAX_EXTRACT_SIZE; + + if (flags & EXTRACT_ENTROPY_LIMIT) { + /* if blocking, only output up to the entropy estimate */ + if (r->entropy_count/8 < nbytes) + nbytes = r->entropy_count/8; + /* + * if blocking and there is no entropy by our estimate, + * break out now. + */ + if (nbytes == 0) + return 0; + } + + /* + * If reading in non-blocking mode, pace ourselves in using up the pool + * system's entropy. + */ + if (! (flags & EXTRACT_ENTROPY_LIMIT) ) { + nowtime = get_msectime(); + if (r->pool0_len > 64 + && (nowtime - r->reseed_time) > RANDOM_RESEED_INTERVAL) { + random_reseed(r, nbytes, flags); + r->reseed_time = nowtime; + } + } + + sgiv[0].page = virt_to_page(r->iv); + sgiv[0].offset = offset_in_page(r->iv); + sgiv[0].length = r->blocksize; + sgtmp[0].page = virt_to_page(tmp); + sgtmp[0].offset = offset_in_page(tmp); + sgtmp[0].length = r->blocksize; + + ret = 0; + while (nbytes) { + /* + * Check if we need to break out or reschedule.... + */ + if ((flags & EXTRACT_ENTROPY_USER) && need_resched()) { + if (signal_pending(current)) { + if (ret == 0) + ret = -ERESTARTSYS; + break; + } + + DEBUG_ENT("%04d : extract sleeping (%d bytes left)\n", + random_state->entropy_count, + nbytes); + + schedule(); + + /* + * when we wake up, there will be more data in our + * pooling system so we will reseed + */ + nowtime = get_msectime(); + if (r->pool0_len > 64 + && (nowtime-r->reseed_time) > RANDOM_RESEED_INTERVAL) { + random_reseed(r, nbytes, flags); + r->reseed_time = nowtime; + } + + DEBUG_ENT("%04d : extract woke up\n", + random_state->entropy_count); + } + + /* + * Reading from /dev/random, we limit this to the amount + * of entropy to deduct from our estimate. 
This estimate is + * most naturally updated from inside Fortuna-reseed, so we + * limit our block size here. + * + * At most, Fortuna will use e=min(r->digestsize, r->keysize) of + * entropy to reseed. + */ + if (flags & EXTRACT_ENTROPY_LIMIT) { + r->reseed_time = get_msectime(); + random_reseed(r, nbytes, flags); + } + + crypto_cipher_encrypt(r->cipher, sgtmp, sgiv, r->blocksize); + increment_iv(r->iv, r->blocksize); + + /* Copy data to destination buffer */ + i = (nbytes < r->blocksize) ? nbytes : r->blocksize; + if (flags & EXTRACT_ENTROPY_USER) { + i -= copy_to_user(buf, (__u8 const *)tmp, i); + if (!i) { + ret = -EFAULT; + break; + } + } else + memcpy(buf, (__u8 const *)tmp, i); + nbytes -= i; + buf += i; + ret += i; + } + + /* generate a new key */ + /* take into account the possibility that keysize >= blocksize */ + for (i=0; i+r->blocksize<=r->keysize; i+=r->blocksize) { + sgtmp[0].page = virt_to_page( r->key+i ); + sgtmp[0].offset = offset_in_page( r->key+i ); + sgtmp[0].length = r->blocksize; + crypto_cipher_encrypt(r->cipher, sgtmp, sgiv, 1); + increment_iv(r->iv, r->blocksize); + } + sgtmp[0].page = virt_to_page( r->key+i ); + sgtmp[0].offset = offset_in_page( r->key+i ); + sgtmp[0].length = r->blocksize-i; + crypto_cipher_encrypt(r->cipher, sgtmp, sgiv, 1); + increment_iv(r->iv, r->blocksize); + + if (crypto_cipher_setkey(r->cipher, r->key, r->keysize)) { + return -EINVAL; + } + + /* Wipe data just returned from memory */ + memset(tmp, 0, sizeof(tmp)); + + return ret; +} + +/* + * This function is the exported kernel interface. It returns some + * number of good random numbers, suitable for seeding TCP sequence + * numbers, etc. 
+ */ +void get_random_bytes(void *buf, int nbytes) +{ + if (random_state) + extract_entropy(random_state, (char *) buf, nbytes, 0); + else + printk(KERN_NOTICE "get_random_bytes called before " + "random driver initialization\n"); +} + +EXPORT_SYMBOL(get_random_bytes); + +/********************************************************************* + * + * Functions to interface with Linux + * + *********************************************************************/ + +/* + * Initialize the random pool with standard stuff. + * This is not secure random data, but it can't hurt us and people scream + * when you try to remove it. + * + * NOTE: This is an OS-dependent function. + */ +static void init_std_data(struct entropy_store *r) +{ + struct timeval tv; + __u32 words[2]; + char *p; + int i; + + do_gettimeofday(&tv); + words[0] = tv.tv_sec; + words[1] = tv.tv_usec; + add_entropy_words(r, words, 2, -1); + + /* + * This doesn't lock system.utsname. However, we are generating + * entropy so a race with a name set here is fine. 
+ */ + p = (char *) &system_utsname; + for (i = sizeof(system_utsname) / sizeof(words); i; i--) { + memcpy(words, p, sizeof(words)); + add_entropy_words(r, words, sizeof(words)/4, -1); + p += sizeof(words); + } +} + +static int __init rand_initialize(void) +{ + int i; + + if (create_entropy_store(DEFAULT_POOL_NUMBER, &random_state)) + goto err; + if (batch_entropy_init(BATCH_ENTROPY_SIZE, random_state)) + goto err; + init_std_data(random_state); +#ifdef CONFIG_SYSCTL + sysctl_init_random(random_state); +#endif + for (i = 0; i < NR_IRQS; i++) + irq_timer_state[i] = NULL; + memset(&keyboard_timer_state, 0, sizeof(struct timer_rand_state)); + memset(&mouse_timer_state, 0, sizeof(struct timer_rand_state)); + memset(&extract_timer_state, 0, sizeof(struct timer_rand_state)); + extract_timer_state.dont_count_entropy = 1; + return 0; +err: + return -1; +} +module_init(rand_initialize); + +void rand_initialize_irq(int irq) +{ + struct timer_rand_state *state; + + if (irq >= NR_IRQS || irq_timer_state[irq]) + return; + + /* + * If kmalloc returns null, we just won't use that entropy + * source. + */ + state = kmalloc(sizeof(struct timer_rand_state), GFP_KERNEL); + if (state) { + memset(state, 0, sizeof(struct timer_rand_state)); + irq_timer_state[irq] = state; + } +} + +void rand_initialize_disk(struct gendisk *disk) +{ + struct timer_rand_state *state; + + /* + * If kmalloc returns null, we just won't use that entropy + * source. 
+ */ + state = kmalloc(sizeof(struct timer_rand_state), GFP_KERNEL); + if (state) { + memset(state, 0, sizeof(struct timer_rand_state)); + disk->random = state; + } +} + +static ssize_t +random_read(struct file * file, char __user * buf, size_t nbytes, loff_t *ppos) +{ + DECLARE_WAITQUEUE(wait, current); + ssize_t n, retval = 0, count = 0; + + if (nbytes == 0) + return 0; + + while (nbytes > 0) { + n = nbytes; + + DEBUG_ENT("%04d : reading %d bits, p: %d s: %d\n", + random_state->entropy_count, + n*8, random_state->entropy_count, + random_state->entropy_count); + + n = extract_entropy(random_state, buf, n, + EXTRACT_ENTROPY_USER | + EXTRACT_ENTROPY_LIMIT); + + DEBUG_ENT("%04d : read got %d bits (%d needed, reseeds=%d)\n", + random_state->entropy_count, + random_state->reseed_count, + n*8, (nbytes-n)*8); + + if (n == 0) { + if (file->f_flags & O_NONBLOCK) { + retval = -EAGAIN; + break; + } + if (signal_pending(current)) { + retval = -ERESTARTSYS; + break; + } + + DEBUG_ENT("%04d : sleeping?\n", + random_state->entropy_count); + + set_current_state(TASK_INTERRUPTIBLE); + add_wait_queue(&random_read_wait, &wait); + + if (random_state->entropy_count / 8 == 0 + || random_state->reseed_count == 0) + schedule(); + + set_current_state(TASK_RUNNING); + remove_wait_queue(&random_read_wait, &wait); + + DEBUG_ENT("%04d : waking up\n", + random_state->entropy_count); + + continue; + } + + if (n < 0) { + retval = n; + break; + } + count += n; + buf += n; + nbytes -= n; + break; /* This break makes the device work */ + /* like a named pipe */ + } + + /* + * If we gave the user some bytes, update the access time. + */ + if (count) + file_accessed(file); + + return (count ? 
count : retval); +} + +static ssize_t +urandom_read(struct file * file, char __user * buf, + size_t nbytes, loff_t *ppos) +{ + /* Don't return anything until we've reseeded at least once */ + if (random_state->reseed_count == 0) + return 0; + + return extract_entropy(random_state, buf, nbytes, + EXTRACT_ENTROPY_USER); +} + +static unsigned int +random_poll(struct file *file, poll_table * wait) +{ + unsigned int mask; + + poll_wait(file, &random_read_wait, wait); + poll_wait(file, &random_write_wait, wait); + mask = 0; + if (random_state->entropy_count >= random_read_wakeup_thresh) + mask |= POLLIN | POLLRDNORM; + if (random_state->entropy_count < random_write_wakeup_thresh) + mask |= POLLOUT | POLLWRNORM; + return mask; +} + +static ssize_t +random_write(struct file * file, const char __user * buffer, + size_t count, loff_t *ppos) +{ + static int idx = 0; + int ret = 0; + size_t bytes; + __u32 buf[16]; + const char __user *p = buffer; + size_t c = count; + + while (c > 0) { + bytes = min(c, sizeof(buf)); + + bytes -= copy_from_user(&buf, p, bytes); + if (!bytes) { + ret = -EFAULT; + break; + } + c -= bytes; + p += bytes; + + /* + * User input data rotates through the pools independently of + * system events. 
+ * + * idx = (idx + 1) mod 2^N + */ + idx = (idx + 1) & ((1<<random_state->pool_number)-1); + add_entropy_words(random_state, buf, bytes, idx); + } + if (p == buffer) { + return (ssize_t)ret; + } else { + file->f_dentry->d_inode->i_mtime = CURRENT_TIME; + mark_inode_dirty(file->f_dentry->d_inode); + return (ssize_t)(p - buffer); + } +} + +static int +random_ioctl(struct inode * inode, struct file * file, + unsigned int cmd, unsigned long arg) +{ + int size, ent_count; + int __user *p = (int __user *)arg; + int retval; + + switch (cmd) { + case RNDGETENTCNT: + ent_count = random_state->entropy_count; + if (put_user(ent_count, p)) + return -EFAULT; + return 0; + case RNDADDTOENTCNT: + if (!capable(CAP_SYS_ADMIN)) + return -EPERM; + if (get_user(ent_count, p)) + return -EFAULT; + credit_entropy_store(random_state, ent_count); + /* + * Wake up waiting processes if we have enough + * entropy. + */ + if (random_state->entropy_count >= random_read_wakeup_thresh + && random_state->reseed_count != 0) + wake_up_interruptible(&random_read_wait); + return 0; + case RNDGETPOOL: + /* can't do this anymore */ + return 0; + case RNDADDENTROPY: + if (!capable(CAP_SYS_ADMIN)) + return -EPERM; + if (get_user(ent_count, p++)) + return -EFAULT; + if (ent_count < 0) + return -EINVAL; + if (get_user(size, p++)) + return -EFAULT; + retval = random_write(file, (const char __user *) p, + size, &file->f_pos); + if (retval < 0) + return retval; + credit_entropy_store(random_state, ent_count); + /* + * Wake up waiting processes if we have enough + * entropy. 
*/ + if (random_state->entropy_count >= random_read_wakeup_thresh + && random_state->reseed_count != 0) + wake_up_interruptible(&random_read_wait); + return 0; + case RNDZAPENTCNT: + /* Can't do this anymore */ + return 0; + case RNDCLEARPOOL: + /* Can't do this anymore */ + return 0; + default: + return -EINVAL; + } +} + +struct file_operations random_fops = { + .read = random_read, + .write = random_write, + .poll = random_poll, + .ioctl = random_ioctl, +}; + +struct file_operations urandom_fops = { + .read = urandom_read, + .write = random_write, + .ioctl = random_ioctl, +}; + +/*************************************************************** + * Random UUID interface + * + * Used here for a Boot ID, but can be useful for other kernel + * drivers. + ***************************************************************/ + +/* + * Generate random UUID + */ +void generate_random_uuid(unsigned char uuid_out[16]) +{ + get_random_bytes(uuid_out, 16); + /* Set UUID version to 4 --- truly random generation */ + uuid_out[6] = (uuid_out[6] & 0x0F) | 0x40; + /* Set the UUID variant to DCE */ + uuid_out[8] = (uuid_out[8] & 0x3F) | 0x80; +} + +EXPORT_SYMBOL(generate_random_uuid); + +/******************************************************************** + * + * Sysctl interface + * + ********************************************************************/ + +#ifdef CONFIG_SYSCTL + +#include <linux/sysctl.h> + +static int sysctl_poolsize; +static int min_read_thresh, max_read_thresh; +static int min_write_thresh, max_write_thresh; +static char sysctl_bootid[16]; + +static int proc_do_poolsize(ctl_table *table, int write, struct file *filp, + void __user *buffer, size_t *lenp, loff_t *ppos) +{ + int ret; + + sysctl_poolsize = POOLBITS; + + ret = proc_dointvec(table, write, filp, buffer, lenp, ppos); + if (ret || !write || + (sysctl_poolsize == POOLBITS)) + return ret; + + return ret; /* can't change the pool size in fortuna */ +} + +static int poolsize_strategy(ctl_table *table, int 
__user *name, int nlen, + void __user *oldval, size_t __user *oldlenp, + void __user *newval, size_t newlen, void **context) +{ + int len; + + sysctl_poolsize = POOLBITS; + + /* + * We only handle the write case, since the read case gets + * handled by the default handler (and we don't care if the + * write case happens twice; it's harmless). + */ + if (newval && newlen) { + len = newlen; + if (len > table->maxlen) + len = table->maxlen; + if (copy_from_user(table->data, newval, len)) + return -EFAULT; + } + + return 0; +} + +/* + * This function is used to return both the boot ID UUID and a random + * UUID. The difference is in whether table->data is NULL; if it is, + * then a new UUID is generated and returned to the user. + * + * If the user accesses this via the proc interface, it will be returned + * as an ASCII string in the standard UUID format. If accessed via the + * sysctl system call, it is returned as 16 bytes of binary data. + */ +static int proc_do_uuid(ctl_table *table, int write, struct file *filp, + void __user *buffer, size_t *lenp, loff_t *ppos) +{ + ctl_table fake_table; + unsigned char buf[64], tmp_uuid[16], *uuid; + + uuid = table->data; + if (!uuid) { + uuid = tmp_uuid; + uuid[8] = 0; + } + if (uuid[8] == 0) + generate_random_uuid(uuid); + + sprintf(buf, "%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x%02x-" + "%02x%02x%02x%02x%02x%02x", + uuid[0], uuid[1], uuid[2], uuid[3], + uuid[4], uuid[5], uuid[6], uuid[7], + uuid[8], uuid[9], uuid[10], uuid[11], + uuid[12], uuid[13], uuid[14], uuid[15]); + fake_table.data = buf; + fake_table.maxlen = sizeof(buf); + + return proc_dostring(&fake_table, write, filp, buffer, lenp, ppos); +} + +static int uuid_strategy(ctl_table *table, int __user *name, int nlen, + void __user *oldval, size_t __user *oldlenp, + void __user *newval, size_t newlen, void **context) +{ + unsigned char tmp_uuid[16], *uuid; + unsigned int len; + + if (!oldval || !oldlenp) + return 1; + + uuid = table->data; + if (!uuid) { + uuid = 
tmp_uuid; + uuid[8] = 0; + } + if (uuid[8] == 0) + generate_random_uuid(uuid); + + if (get_user(len, oldlenp)) + return -EFAULT; + if (len) { + if (len > 16) + len = 16; + if (copy_to_user(oldval, uuid, len) || + put_user(len, oldlenp)) + return -EFAULT; + } + return 1; +} + +ctl_table random_table[] = { + { + .ctl_name = RANDOM_POOLSIZE, + .procname = "poolsize", + .data = &sysctl_poolsize, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = &proc_do_poolsize, + .strategy = &poolsize_strategy, + }, + { + .ctl_name = RANDOM_ENTROPY_COUNT, + .procname = "entropy_avail", + .maxlen = sizeof(int), + .mode = 0444, + .proc_handler = &proc_dointvec, + }, + { + .ctl_name = RANDOM_READ_THRESH, + .procname = "read_wakeup_threshold", + .data = &random_read_wakeup_thresh, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = &proc_dointvec_minmax, + .strategy = &sysctl_intvec, + .extra1 = &min_read_thresh, + .extra2 = &max_read_thresh, + }, + { + .ctl_name = RANDOM_WRITE_THRESH, + .procname = "write_wakeup_threshold", + .data = &random_write_wakeup_thresh, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = &proc_dointvec_minmax, + .strategy = &sysctl_intvec, + .extra1 = &min_write_thresh, + .extra2 = &max_write_thresh, + }, + { + .ctl_name = RANDOM_BOOT_ID, + .procname = "boot_id", + .data = &sysctl_bootid, + .maxlen = 16, + .mode = 0444, + .proc_handler = &proc_do_uuid, + .strategy = &uuid_strategy, + }, + { + .ctl_name = RANDOM_UUID, + .procname = "uuid", + .maxlen = 16, + .mode = 0444, + .proc_handler = &proc_do_uuid, + .strategy = &uuid_strategy, + }, + { + .ctl_name = RANDOM_DIGEST_ALGO, + .procname = "digest_algo", + .maxlen = 16, + .mode = 0444, + .proc_handler = &proc_dostring, + }, + { + .ctl_name = RANDOM_CIPHER_ALGO, + .procname = "cipher_algo", + .maxlen = 16, + .mode = 0444, + .proc_handler = &proc_dostring, + }, + { .ctl_name = 0 } +}; + +static void sysctl_init_random(struct entropy_store *random_state) +{ + int i; + + /* If the sys-admin 
doesn't want people to know how fast + * random events are happening, he can set the read-threshold + * down to zero so /dev/random never blocks. Default is to block. + * This is for the paranoid loonies who think frequency analysis + * would lead to something. + */ + min_read_thresh = 0; + min_write_thresh = 0; + max_read_thresh = max_write_thresh = POOLBITS; + for (i=0; random_table[i].ctl_name!=0; i++) { + switch (random_table[i].ctl_name) { + case RANDOM_ENTROPY_COUNT: + random_table[i].data = &random_state->entropy_count; + break; + + case RANDOM_DIGEST_ALGO: + random_table[i].data = (void*)random_state->digestAlgo; + break; + + case RANDOM_CIPHER_ALGO: + random_table[i].data = (void*)random_state->cipherAlgo; + break; + + default: + break; + } + } +} +#endif /* CONFIG_SYSCTL */ + +/******************************************************************** + * + * Random functions for networking + * + ********************************************************************/ + +/* + * TCP initial sequence number picking. This uses the random number + * generator to pick an initial secret value. This value is encrypted + * with the TCP endpoint information to provide a unique starting point + * for each pair of TCP endpoints. This defeats attacks which rely on + * guessing the initial TCP sequence number. This algorithm was + * suggested by Steve Bellovin, modified by Jean-Luc Cooke. + * + * Using a very strong hash was taking an appreciable amount of the total + * TCP connection establishment time, so this is a weaker hash, + * compensated for by changing the secret periodically. This was changed + * again by Jean-Luc Cooke to use AES256-CBC encryption which is faster + * still (see `/usr/bin/openssl speed md4 sha1 aes`) + */ + +/* This should not be decreased so low that ISNs wrap too fast. 
*/ +#define REKEY_INTERVAL 300 +/* + * Bit layout of the tcp sequence numbers (before adding current time): + * bit 24-31: increased after every key exchange + * bit 0-23: hash(source,dest) + * + * The implementation is similar to the algorithm described + * in the Appendix of RFC 1185, except that + * - it uses a 1 MHz clock instead of a 250 kHz clock + * - it performs a rekey every 5 minutes, which is equivalent + * to a (source,dest) tuple dependent forward jump of the + * clock by 0..2^(HASH_BITS+1) + * + * Thus the average ISN wraparound time is 68 minutes instead of + * 4.55 hours. + * + * SMP cleanup and lock avoidance with poor man's RCU. + * Manfred Spraul <manfred@colorfullife.com> + * + */ +#define COUNT_BITS 8 +#define COUNT_MASK ( (1<<COUNT_BITS)-1) +#define HASH_BITS 24 +#define HASH_MASK ( (1<<HASH_BITS)-1 ) + +static spinlock_t ip_lock = SPIN_LOCK_UNLOCKED; +static unsigned int ip_cnt, network_count; + +static void __check_and_rekey(time_t time) +{ + u8 tmp[RANDOM_MAX_KEY_SIZE]; + spin_lock_bh(&ip_lock); + + get_random_bytes(tmp, random_state->keysize); + crypto_cipher_setkey(random_state->networkCipher, + (const u8*)tmp, + random_state->keysize); + random_state->networkCipher_ready = 1; + network_count = (ip_cnt & COUNT_MASK) << HASH_BITS; + mb(); + ip_cnt++; + + spin_unlock_bh(&ip_lock); + return; +} + +static inline void check_and_rekey(time_t time) +{ + static time_t rekey_time=0; + + rmb(); + if (!rekey_time || (time - rekey_time) > REKEY_INTERVAL) { + __check_and_rekey(time); + rekey_time = time; + } + + return; +} + +#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE) +__u32 secure_tcpv6_sequence_number(__u32 *saddr, __u32 *daddr, + __u16 sport, __u16 dport) +{ + struct timeval tv; + __u32 seq; + u8 tmp[RANDOM_MAX_BLOCK_SIZE]; + struct scatterlist sgtmp[1]; + + /* + * The procedure is the same as for IPv4, but addresses are longer. + * Thus we must use two AES operations. + */ + + do_gettimeofday(&tv); /* We need the usecs below... 
*/ + check_and_rekey(tv.tv_sec); + + sgtmp[0].page = virt_to_page(tmp); + sgtmp[0].offset = offset_in_page(tmp); + sgtmp[0].length = random_state->blocksize; + + /* + * AES256 is 2.5 times faster than MD4 by openssl tests. + * We can afford to encrypt 2 blocks in CBC with + * an IV={(sport)<<16 | dport, 0, 0, 0} + * + * seq = ct[0], ct = Enc-CBC(Key, {ports}, {daddr, saddr}); + * = Enc(Key, saddr xor Enc(Key, daddr)) + */ + + /* PT0 = daddr */ + memcpy(tmp, daddr, random_state->blocksize); + /* IV = {ports,0,0,0} */ + tmp[0] ^= (sport<<16) | dport; + crypto_cipher_encrypt(random_state->networkCipher, sgtmp, sgtmp, 1); + /* PT1 = saddr */ + random_state->networkCipher->crt_cipher.cit_xor_block(tmp, (const u8*)saddr); + crypto_cipher_encrypt(random_state->networkCipher, sgtmp, sgtmp, 1); + + seq = tmp[0]; + seq += network_count; + seq += tv.tv_usec + tv.tv_sec*1000000; + + return seq; +} +EXPORT_SYMBOL(secure_tcpv6_sequence_number); + +__u32 secure_ipv6_id(__u32 *daddr) +{ + u8 tmp[RANDOM_MAX_BLOCK_SIZE]; + struct scatterlist sgtmp[1]; + + check_and_rekey(get_seconds()); + + memcpy(tmp, daddr, random_state->blocksize); + sgtmp[0].page = virt_to_page(tmp); + sgtmp[0].offset = offset_in_page(tmp); + sgtmp[0].length = random_state->blocksize; + + /* id = tmp[0], tmp = Enc(Key, daddr); */ + crypto_cipher_encrypt(random_state->networkCipher, sgtmp, sgtmp, 1); + + return tmp[0]; +} + +EXPORT_SYMBOL(secure_ipv6_id); +#endif + + +__u32 secure_tcp_sequence_number(__u32 saddr, __u32 daddr, + __u16 sport, __u16 dport) +{ + struct timeval tv; + __u32 seq; + u8 tmp[RANDOM_MAX_BLOCK_SIZE]; + struct scatterlist sgtmp[1]; + + /* + * Pick a random secret every REKEY_INTERVAL seconds. + */ + do_gettimeofday(&tv); /* We need the usecs below... */ + check_and_rekey(tv.tv_sec); + + /* + * Pick a unique starting offset for each TCP connection's endpoints + * (saddr, daddr, sport, dport). 
+ * Note that the words are placed into the starting vector, which is + * then mixed with a partial MD4 over random data. + */ + /* + * AES256 is 2.5 times faster than MD4 by openssl tests. + * We can afford to encrypt 1 block + * + * seq = ct[0], ct = Enc(Key, {(sport<<16)|dport, daddr, saddr, 0}) + */ + + tmp[0] = (sport<<16) | dport; + tmp[1] = daddr; + tmp[2] = saddr; + tmp[3] = 0; + crypto_cipher_encrypt(random_state->networkCipher, sgtmp, sgtmp, 1); + + seq = tmp[0]; + seq += network_count; + /* + * As close as possible to RFC 793, which + * suggests using a 250 kHz clock. + * Further reading shows this assumes 2 Mb/s networks. + * For 10 Mb/s Ethernet, a 1 MHz clock is appropriate. + * That's funny, Linux has one built in! Use it! + * (Networks are faster now - should this be increased?) + */ + seq += tv.tv_usec + tv.tv_sec*1000000; + +#if 0 + printk("init_seq(%lx, %lx, %d, %d) = %d\n", + saddr, daddr, sport, dport, seq); +#endif + return seq; +} + +EXPORT_SYMBOL(secure_tcp_sequence_number); + +/* The code below is shamelessly stolen from secure_tcp_sequence_number(). + * All blames to Andrey V. Savochkin <saw@msu.ru>. + * Changed by Jean-Luc Cooke <jlcooke@certainkey.com> to use AES & C.A.P.I. + */ +__u32 secure_ip_id(__u32 daddr) +{ + struct scatterlist sgtmp[1]; + u8 tmp[RANDOM_MAX_BLOCK_SIZE]; + + check_and_rekey(get_seconds()); + + /* + * Pick a unique starting offset for each IP destination. + * id = ct[0], ct = Enc(Key, {daddr,0,0,0}); + */ + tmp[0] = daddr; + tmp[1] = 0; + tmp[2] = 0; + tmp[3] = 0; + sgtmp[0].page = virt_to_page(tmp); + sgtmp[0].offset = offset_in_page(tmp); + sgtmp[0].length = random_state->blocksize; + + crypto_cipher_encrypt(random_state->networkCipher, sgtmp, sgtmp, 1); + + return tmp[0]; +} + +#ifdef CONFIG_SYN_COOKIES +/* + * Secure SYN cookie computation. This is the algorithm worked out by + * Dan Bernstein and Eric Schenk. + * + * For Linux I implement the 1 minute counter by looking at the jiffies clock. 
+ * The count is passed in as a parameter, so this code doesn't much care. + * + * SYN cookie (and seq# & id#) Changed in 2004 by Jean-Luc Cooke + * <jlcooke@certainkey.com> to use the C.A.P.I. and AES256. + */ + +#define COOKIEBITS 24 /* Upper bits store count */ +#define COOKIEMASK (((__u32)1 << COOKIEBITS) - 1) + +__u32 secure_tcp_syn_cookie(__u32 saddr, __u32 daddr, __u16 sport, + __u16 dport, __u32 sseq, __u32 count, __u32 data) +{ + struct scatterlist sg[1]; + __u32 tmp[4]; + + /* + * Compute the secure sequence number. + * + * Output is the 32bit tag of a CBC-MAC of + * PT={count,0,0,0} with IV={saddr,daddr,sport|dport,sseq} + * cookie = {<8bit count>, + * truncate_24bit( + * Encrypt(Sec, {saddr,daddr,sport|dport,sseq}) + * ) + * } + * + * DJB wrote (http://cr.yp.to/syncookies/archive) about how to do this + * with hash algorithms. + * - we can replace two SHA1s used in the previous kernel with 1 AES + * and make things 5x faster + * - I'd like to propose we remove the use of two whitenings with a + * single operation since we were only using addition modulo 2^32 of + * all these values anyways. Not to mention the hashes differ only in + * that the second processes more data... why drop the first hash? + * We did learn that addition is commutative and associative long ago. + * - by replacing two SHA1s and addition modulo 2^32 with encryption of + * a 32bit value using CAPI we've made it 1,000,000,000 times easier + * to understand what is going on. 
+ */ + + tmp[0] = saddr; + tmp[1] = daddr; + tmp[2] = (sport << 16) + dport; + tmp[3] = sseq; + + sg[0].page = virt_to_page(tmp); + sg[0].offset = offset_in_page(tmp); + sg[0].length = random_state->blocksize; + if (!random_state->networkCipher_ready) { + check_and_rekey(get_seconds()); + } + /* tmp[]/sg[0] = Enc(Sec, {saddr,daddr,sport|dport,sseq}) */ + crypto_cipher_encrypt(random_state->networkCipher, sg, sg, 1); + + /* cookie = CTR encrypt of 8-bit-count and 24-bit-data */ + return tmp[0] ^ ( (count << COOKIEBITS) | (data & COOKIEMASK) ); +} + +/* + * This retrieves the small "data" value from the syncookie. + * If the syncookie is bad, the data returned will be out of + * range. This must be checked by the caller. + * + * The count value used to generate the cookie must be within + * "maxdiff" of the current (passed-in) "count". The return value + * is (__u32)-1 if this test fails. + */ +__u32 check_tcp_syn_cookie(__u32 cookie, __u32 saddr, __u32 daddr, __u16 sport, + __u16 dport, __u32 sseq, __u32 count, __u32 maxdiff) +{ + struct scatterlist sg[1]; + __u32 tmp[4], thiscount, diff; + + if (random_state == NULL || !random_state->networkCipher_ready) + return (__u32)-1; /* Well, duh! 
*/ + + tmp[0] = saddr; + tmp[1] = daddr; + tmp[2] = (sport << 16) + dport; + tmp[3] = sseq; + sg[0].page = virt_to_page(tmp); + sg[0].offset = offset_in_page(tmp); + sg[0].length = random_state->blocksize; + crypto_cipher_encrypt(random_state->networkCipher, sg, sg, 1); + + /* CTR decrypt the cookie */ + cookie ^= tmp[0]; + + /* top 8 bits are 'count' */ + thiscount = cookie >> COOKIEBITS; + + diff = count - thiscount; + if (diff >= maxdiff) + return (__u32)-1; + + /* bottom 24 bits are 'data' */ + return cookie & COOKIEMASK; +} +#endif diff -X exclude -Nur linux-2.6.8.1/drivers/char/random.c linux-2.6.8.1-rand2/drivers/char/random.c --- linux-2.6.8.1/drivers/char/random.c 2004-09-27 16:04:53.000000000 -0400 +++ linux-2.6.8.1-rand2/drivers/char/random.c 2004-09-28 23:25:46.000000000 -0400 @@ -261,6 +261,17 @@ #include <asm/io.h> /* + * In September 2004, Jean-Luc Cooke wrote a Fortuna RNG for Linux + * which was non-blocking and used the Cryptographic API. + * We use it now if the user wishes. + */ +#ifdef CONFIG_CRYPTO_RANDOM_FORTUNA + #warning using the Fortuna PRNG for /dev/random + #include "../crypto/random-fortuna.c" +#else /* CONFIG_CRYPTO_RANDOM_FORTUNA */ + #warning using the Linux Legacy PRNG for /dev/random + +/* * Configuration information */ #define DEFAULT_POOL_SIZE 512 @@ -2483,3 +2494,5 @@ return (cookie - tmp[17]) & COOKIEMASK; /* Leaving the data behind */ } #endif + +#endif /* CONFIG_CRYPTO_RANDOM_FORTUNA */ diff -X exclude -Nur linux-2.6.8.1/include/linux/sysctl.h linux-2.6.8.1-rand2/include/linux/sysctl.h --- linux-2.6.8.1/include/linux/sysctl.h 2004-08-14 06:55:33.000000000 -0400 +++ linux-2.6.8.1-rand2/include/linux/sysctl.h 2004-09-29 10:45:20.592695040 -0400 @@ -198,7 +198,9 @@ RANDOM_READ_THRESH=3, RANDOM_WRITE_THRESH=4, RANDOM_BOOT_ID=5, - RANDOM_UUID=6 + RANDOM_UUID=6, + RANDOM_DIGEST_ALGO=7, + RANDOM_CIPHER_ALGO=8 }; /* /proc/sys/kernel/pty */ ^ permalink raw reply [flat|nested] 28+ messages in thread
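The reseed path in the patch above dumps pool i into the reseed hash only on certain reseeds, per the comment "using pool[i] if r->reseed_count is divisible by 2^i". As a standalone illustration (plain userspace C, not kernel code; the function name `pools_used` is hypothetical and not from the patch), the divisibility rule that comment describes, from Ferguson and Schneier's Fortuna design, can be sketched as:

```c
/* Return a bitmask of which Fortuna pools contribute to reseed
 * number reseed_count. Pool i is included exactly when 2^i divides
 * reseed_count, so pool 0 is used on every reseed, pool 1 on every
 * 2nd, pool 2 on every 4th, and so on. Once 2^i fails to divide
 * reseed_count, no higher power of two can divide it either, hence
 * the early break (mirroring the `break` in the patch's loop). */
unsigned int pools_used(unsigned int reseed_count, int npools)
{
	unsigned int mask = 0;
	int i;

	for (i = 0; i < npools; i++) {
		if (i == 0 || reseed_count % (1u << i) == 0)
			mask |= 1u << i;
		else
			break;
	}
	return mask;
}
```

With 32 pools this means reseed 1 draws only on pool 0, reseed 2 on pools 0-1, reseed 4 on pools 0-2, and so on; higher pools accumulate input for exponentially longer before being consumed, which is what bounds Fortuna's recovery time from state compromise without an entropy estimator.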
* Re: [PROPOSAL/PATCH 2] Fortuna PRNG in /dev/random 2004-09-30 4:23 ` Jean-Luc Cooke @ 2004-09-30 6:50 ` James Morris 2004-09-30 9:03 ` Felipe Alfaro Solana 2004-09-30 10:46 ` Jan-Benedict Glaw 2 siblings, 0 replies; 28+ messages in thread From: James Morris @ 2004-09-30 6:50 UTC (permalink / raw) To: Jean-Luc Cooke; +Cc: Theodore Ts'o, linux, linux-kernel, cryptoapi On Thu, 30 Sep 2004, Jean-Luc Cooke wrote: > This should be the last one for a while. > > v2.1.4 crypto/random-fortuna.c > > Ted, since this is a crypto-API feature as well as an optional replacement to > /dev/random, should I be passing this through James or both of you? Whatever the case, I would follow Ted's advice on this code. - James -- James Morris <jmorris@redhat.com> ^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [PROPOSAL/PATCH 2] Fortuna PRNG in /dev/random 2004-09-30 4:23 ` Jean-Luc Cooke 2004-09-30 6:50 ` James Morris @ 2004-09-30 9:03 ` Felipe Alfaro Solana 2004-09-30 13:36 ` Jean-Luc Cooke 2004-09-30 10:46 ` Jan-Benedict Glaw 2 siblings, 1 reply; 28+ messages in thread From: Felipe Alfaro Solana @ 2004-09-30 9:03 UTC (permalink / raw) To: Jean-Luc Cooke; +Cc: linux, Theodore Ts'o, jmorris, linux-kernel, cryptoapi On Sep 30, 2004, at 06:23, Jean-Luc Cooke wrote: > <fortuna-2.6.8.1.patch> You said AES and SHA-256 _must_ be built-in, but I can't see any code on your patch that enforces selection of those config options. Thus, it's possible to compile the kernel when CONFIG_CRYPTO_SHA256=n and CONFIG_CRYPTO_AES=n although, of course, it will fail. ^ permalink raw reply [flat|nested] 28+ messages in thread
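The usual way to enforce such a build-time dependency (and what the follow-up message below adopts) is a Kconfig `select`, which forces the named symbols on whenever the option itself is enabled, rather than `depends on`, which only hides the option until the user enables them by hand. A minimal sketch of the idea, using the option names from the patch:

```kconfig
config CRYPTO_RANDOM_FORTUNA
	bool "The Fortuna RNG"
	# `select` unconditionally turns these symbols on, so the
	# kernel cannot be configured with Fortuna but without its
	# required cipher and hash.
	select CRYPTO_SHA256
	select CRYPTO_AES
```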
* Re: [PROPOSAL/PATCH 2] Fortuna PRNG in /dev/random 2004-09-30 9:03 ` Felipe Alfaro Solana @ 2004-09-30 13:36 ` Jean-Luc Cooke 2004-10-01 12:56 ` Jean-Luc Cooke 0 siblings, 1 reply; 28+ messages in thread From: Jean-Luc Cooke @ 2004-09-30 13:36 UTC (permalink / raw) To: Felipe Alfaro Solana, Jan-Benedict Glaw Cc: jmorris, cryptoapi, Theodore Ts'o, linux-kernel, linux [-- Attachment #1: Type: text/plain, Size: 673 bytes --] In the random-fortuna.c file I have some "#if !defined CONFIG_CRYPTO_SHA256" As Jan-Benedict Glaw pointed out, I could just manually select the algorithms, which is what I did. Updated patch, only changes are to crypto/Kconfig. Cheers, JLC On Thu, Sep 30, 2004 at 11:03:52AM +0200, Felipe Alfaro Solana wrote: > On Sep 30, 2004, at 06:23, Jean-Luc Cooke wrote: > > ><fortuna-2.6.8.1.patch> > > You said AES and SHA-256 _must_ be built-in, but I can't see any code > on your patch that enforces selection of those config options. Thus, > it's possible to compile the kernel when CONFIG_CRYPTO_SHA256=n and > CONFIG_CRYPTO_AES=n although, of course, it will fail. [-- Attachment #2: fortuna-2.6.8.1.patch --] [-- Type: text/plain, Size: 64761 bytes --] --- linux-2.6.8.1/crypto/Kconfig 2004-08-14 06:56:22.000000000 -0400 +++ linux-2.6.8.1-rand2/crypto/Kconfig 2004-09-30 09:33:39.775410632 -0400 @@ -9,6 +9,17 @@ help This option provides the core Cryptographic API. +config CRYPTO_RANDOM_FORTUNA + bool "The Fortuna RNG" + select CRYPTO_SHA256 + select CRYPTO_AES + help + Replaces the legacy Linux RNG with one using the crypto API + and Fortuna by Ferguson and Schneier. Entropy estimation, and + a throttled /dev/random remain. Improvements include faster + /dev/urandom output and event input mixing. + Note: Requires AES and SHA256 to be built-in. 
+ config CRYPTO_HMAC bool "HMAC support" depends on CRYPTO diff -X exclude -Nur linux-2.6.8.1/crypto/random-fortuna.c linux-2.6.8.1-rand2/crypto/random-fortuna.c --- linux-2.6.8.1/crypto/random-fortuna.c 1969-12-31 19:00:00.000000000 -0500 +++ linux-2.6.8.1-rand2/crypto/random-fortuna.c 2004-09-30 00:16:14.753826744 -0400 @@ -0,0 +1,2092 @@ +/* + * random-fortuna.c -- A cryptographically strong random number generator + * using Fortuna. + * + * Version 2.1.4, last modified 30-Sep-2004 + * Change log: + * v2.1.4: + * - Fixed flaw where, in some situations, /dev/random would not block. + * v2.1.3: + * - Added a separate round-robin index for user inputs. Prevents a + * super-clever user from forcing all system (unknown) random + * events to be fed into, say, pool-31. + * - Added a "can only extract RANDOM_MAX_EXTRACT_SIZE bytes at a time" + * limit to extract_entropy() + * v2.1.2: + * - Ts'o's (I love writing that!) recommendation to force reseeds + * to be at least 0.1 s apart. + * v2.1.1: + * - Re-worked to keep the blocking /dev/random. Yes, I finally gave + * in to what everyone's been telling me. + * - Entropy accounting is *only* done on events going into pool-0, + * since it's used for every reseed. For those who expect /dev/random + * to only output data when the system is confident it has + * info-theoretic entropy to justify this output, this is the only + * sensible method to count entropy. + * v2.0: + * - Initial version + * + * Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999. All + * rights reserved. + * Copyright Jean-Luc Cooke, 2004. All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, and the entire permission notice in its entirety, + * including the disclaimer of warranties. + * 2. 
Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * 3. The name of the author may not be used to endorse or promote + * products derived from this software without specific prior + * written permission. + * + * ALTERNATIVELY, this product may be distributed under the terms of + * the GNU General Public License, in which case the provisions of the GPL are + * required INSTEAD OF the above restrictions. (This clause is + * necessary due to a potential bad interaction between the GPL and + * the restrictions contained in a BSD-style copyright.) + * + * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED + * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES + * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, ALL OF + * WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE + * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT + * OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR + * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE + * USE OF THIS SOFTWARE, EVEN IF NOT ADVISED OF THE POSSIBILITY OF SUCH + * DAMAGE. + */ + +/* + * Taken from random.c, updated by Jean-Luc Cooke <jlcooke@certainkey.com> + * (now, with legal B.S. out of the way.....) + * + * This routine gathers environmental noise from device drivers, etc., + * and returns good random numbers, suitable for cryptographic use. + * Besides the obvious cryptographic uses, these numbers are also good + * for seeding TCP sequence numbers, and other places where it is + * desirable to have numbers which are not only random, but hard to + * predict by an attacker. 
+ * + * Theory of operation + * =================== + * + * Computers are very predictable devices. Hence it is extremely hard + * to produce truly random numbers on a computer --- as opposed to + * pseudo-random numbers, which can easily be generated by an + * algorithm. Unfortunately, it is very easy for attackers to guess + * the sequence of pseudo-random number generators, and for some + * applications this is not acceptable. So instead, we must try to + * gather "environmental noise" from the computer's environment, which + * must be hard for outside attackers to observe, and use that to + * generate random numbers. In a Unix environment, this is best done + * from inside the kernel. + * + * Sources of randomness from the environment include inter-keyboard + * timings, inter-interrupt timings from some interrupts, and other + * events which are both (a) non-deterministic and (b) hard for an + * outside observer to measure. Randomness from these sources is + * added to an "entropy pool", which is mixed. + * As random bytes are mixed into the entropy pool, the routines keep + * an *estimate* of how many bits of randomness have been stored into + * the random number generator's internal state. + * + * Even if it is possible to analyze Fortuna in some clever way, as + * long as the amount of data returned from the generator is less than + * the inherent entropy we've estimated in the pool, the output data + * is totally unpredictable. For this reason, the routine decreases + * its internal estimate of how many bits of "true randomness" are + * contained in the entropy pool as it outputs random numbers. + * + * If this estimate goes to zero, the routine can still generate + * random numbers; however, an attacker may (at least in theory) be + * able to infer the future output of the generator from prior + * outputs. This requires successful cryptanalysis of Fortuna, which is + * not believed to be feasible, but there is a remote possibility. 
+ * Nonetheless, these numbers should be useful for the vast majority + * of purposes. + * + * Exported interfaces ---- output + * =============================== + * + * There are three exported interfaces; the first is one designed to + * be used from within the kernel: + * + * void get_random_bytes(void *buf, int nbytes); + * + * This interface will return the requested number of random bytes, + * and place it in the requested buffer. + * + * The two other interfaces are two character devices /dev/random and + * /dev/urandom. /dev/random is suitable for use when very high + * quality randomness is desired (for example, for key generation or + * one-time pads), as it will only return a maximum of the number of + * bits of randomness (as estimated by the random number generator) + * contained in the entropy pool. + * + * The /dev/urandom device does not have this limit, and will return + * as many bytes as are requested. As more and more random bytes are + * requested without giving time for the entropy pool to recharge, + * this will result in random numbers that are merely cryptographically + * strong. For many applications, however, this is acceptable. + * + * Exported interfaces ---- input + * ============================== + * + * The current exported interfaces for gathering environmental noise + * from the devices are: + * + * void add_keyboard_randomness(unsigned char scancode); + * void add_mouse_randomness(__u32 mouse_data); + * void add_interrupt_randomness(int irq); + * + * add_keyboard_randomness() uses the inter-keypress timing, as well as the + * scancode as random inputs into the "entropy pool". + * + * add_mouse_randomness() uses the mouse interrupt timing, as well as + * the reported position of the mouse from the hardware. + * + * add_interrupt_randomness() uses the inter-interrupt timing as random + * inputs to the entropy pool. Note that not all interrupts are good + * sources of randomness! 
For example, the timer interrupt is not a + * good choice, because the periodicity of the interrupts is too + * regular, and hence predictable to an attacker. Disk interrupts are + * a better measure, since the timing of the disk interrupts is more + * unpredictable. + * + * All of these routines try to estimate how many bits of randomness a + * particular randomness source provides. They do this by keeping track of the + * first and second order deltas of the event timings. + * + * Ensuring unpredictability at system startup + * ============================================ + * + * When any operating system starts up, it will go through a sequence + * of actions that are fairly predictable by an adversary, especially + * if the start-up does not involve interaction with a human operator. + * This reduces the actual number of bits of unpredictability in the + * entropy pool below the value in entropy_count. In order to + * counteract this effect, it helps to carry information in the + * entropy pool across shut-downs and start-ups. To do this, put the + * following lines in an appropriate script which is run during the boot + * sequence: + * + * echo "Initializing random number generator..." + * random_seed=/var/run/random-seed + * # Carry a random seed from start-up to start-up + * # Load and then save the whole entropy pool + * if [ -f $random_seed ]; then + * cat $random_seed >/dev/urandom + * else + * touch $random_seed + * fi + * chmod 600 $random_seed + * dd if=/dev/urandom of=$random_seed count=8 bs=256 + * + * and the following lines in an appropriate script which is run as + * the system is shutdown: + * + * # Carry a random seed from shut-down to start-up + * # Save the whole entropy pool + * # Fortuna resists using all of its pool material, so we need to + * # draw 8 separate times (count=8) to ensure we get the entropy + * # from pools 0-3. count=2048 covers pool[0 .. 10], etc. + * echo "Saving random seed..." 
+ * random_seed=/var/run/random-seed + * touch $random_seed + * chmod 600 $random_seed + * dd if=/dev/urandom of=$random_seed count=8 bs=256 + * + * For example, on most modern systems using the System V init + * scripts, such code fragments would be found in + * /etc/rc.d/init.d/random. On older Linux systems, the correct script + * location might be in /etc/rc.d/rc.local or /etc/rc.d/rc.0. + * + * Effectively, these commands cause the contents of the entropy pool + * to be saved at shut-down time and reloaded into the entropy pool at + * start-up. (The 'dd' added to the bootup script is to + * make sure that /etc/random-seed is different for every start-up, + * even if the system crashes without executing rc.0.) Even with + * complete knowledge of the start-up activities, predicting the state + * of the entropy pool requires knowledge of the previous history of + * the system. + * + * Configuring the /dev/random driver under Linux + * ============================================== + * + * The /dev/random driver under Linux uses minor numbers 8 and 9 of + * the /dev/mem major number (#1). So if your system does not have + * /dev/random and /dev/urandom created already, they can be created + * by using the commands: + * + * mknod /dev/random c 1 8 + * mknod /dev/urandom c 1 9 + * + * Acknowledgements: + * ================= + * + * Ideas for constructing this random number generator were derived + * from Pretty Good Privacy's random number generator, and from private + * discussions with Phil Karn. Colin Plumb provided a faster random + * number generator, which speeds up the mixing function of the entropy + * pool, taken from PGPfone. Dale Worley has also contributed many + * useful ideas and suggestions to improve this driver. + * + * Any flaws in the design are solely my (jlcooke) responsibility, and + * should not be attributed to Phil, Colin, or any of the authors of PGP + * or the legacy random.c (Ted Ts'o). 
+ * + * Further background information on this topic may be obtained from + * RFC 1750, "Randomness Recommendations for Security", by Donald + * Eastlake, Steve Crocker, and Jeff Schiller, and from Chapter 10 of + * Practical Cryptography by Ferguson and Schneier. + */ + +#include <linux/utsname.h> +#include <linux/config.h> +#include <linux/module.h> +#include <linux/kernel.h> +#include <linux/major.h> +#include <linux/string.h> +#include <linux/fcntl.h> +#include <linux/slab.h> +#include <linux/random.h> +#include <linux/poll.h> +#include <linux/init.h> +#include <linux/fs.h> +#include <linux/workqueue.h> +#include <linux/genhd.h> +#include <linux/interrupt.h> +#include <linux/spinlock.h> +#include <linux/percpu.h> +#include <linux/crypto.h> +#include <../crypto/internal.h> + +#include <asm/scatterlist.h> +#include <asm/processor.h> +#include <asm/uaccess.h> +#include <asm/irq.h> +#include <asm/io.h> + + +/* + * Configuration information + */ +#define BATCH_ENTROPY_SIZE 256 +/* milliseconds between random_reseeds for non-blocking reads */ +#define RANDOM_RESEED_INTERVAL 100 +/* + * Number of bytes you can extract at a time, 1MB is recommended in + * Practical Cryptography rev-0 + */ +#define RANDOM_MAX_EXTRACT_SIZE (1<<20) +#define USE_SHA256 +#define USE_AES + +/* + * Compile-time checking for our desired message digest + */ +#if defined USE_SHA256 + #if !CONFIG_CRYPTO_SHA256 + #error SHA256 not a built-in module, Fortuna configured to use it. + #endif + #define RANDOM_DEFAULT_DIGEST_ALGO "sha256" +#elif defined USE_WHIRLPOOL + #if !CONFIG_CRYPTO_WHIRLPOOL + #error WHIRLPOOL not a built-in module, Fortuna configured to use it. + #endif + #define RANDOM_DEFAULT_DIGEST_ALGO "whirlpool" +#else + #error Desired message digest algorithm not found +#endif + +/* + * Compile-time checking for our desired block cipher + */ +#if defined USE_AES + #if (!CONFIG_CRYPTO_AES && !CONFIG_CRYPTO_AES_586) + #error AES not a built-in module, Fortuna configured to use it. 
+ #endif + #define RANDOM_DEFAULT_CIPHER_ALGO "aes" +#elif defined USE_TWOFISH + #if (!CONFIG_CRYPTO_TWOFISH && !CONFIG_CRYPTO_TWOFISH_586) + #error TWOFISH not a built-in module, Fortuna configured to use it. + #endif + #define RANDOM_DEFAULT_CIPHER_ALGO "twofish" +#else + #error Desired block cipher algorithm not found +#endif /* USE_AES */ + +#define DEFAULT_POOL_NUMBER 5 /* 2^{5} = 32 pools */ +#define DEFAULT_POOL_SIZE ( (1<<DEFAULT_POOL_NUMBER) * 256) +/* largest block of random data to extract at a time when in blocking-mode */ +#define TMP_BUF_SIZE 512 +/* SHA512/WHIRLPOOL have 64bytes == 512 bits */ +#define RANDOM_MAX_DIGEST_SIZE 64 +/* AES256 has 16byte blocks == 128 bits */ +#define RANDOM_MAX_BLOCK_SIZE 16 +/* AES256 has 32byte keys == 256 bits */ +#define RANDOM_MAX_KEY_SIZE 32 + +/* + * The minimum number of bits of entropy before we wake up a read on + * /dev/random. We also wait for reseed_count>0 and we do a + * random_reseed() once we do wake up. + */ +static int random_read_wakeup_thresh = 64; + +/* + * If the entropy count falls under this number of bits, then we + * should wake up processes which are selecting or polling on write + * access to /dev/random. + */ +static int random_write_wakeup_thresh = 128; + +/* + * When the input pool goes over trickle_thresh, start dropping most + * samples to avoid wasting CPU time and reduce lock contention. 
+ */ + +static int trickle_thresh = DEFAULT_POOL_SIZE * 7; + +static DEFINE_PER_CPU(int, trickle_count) = 0; + +#define POOLBYTES\ + ( (1<<random_state->pool_number) * random_state->digestsize ) +#define POOLBITS ( POOLBYTES * 8 ) + +/* + * Linux 2.2 compatibility + */ +#ifndef DECLARE_WAITQUEUE +#define DECLARE_WAITQUEUE(WAIT, PTR) struct wait_queue WAIT = { PTR, NULL } +#endif +#ifndef DECLARE_WAIT_QUEUE_HEAD +#define DECLARE_WAIT_QUEUE_HEAD(WAIT) struct wait_queue *WAIT +#endif + +/* + * Static global variables + */ +static struct entropy_store *random_state; /* The default global store */ +static DECLARE_WAIT_QUEUE_HEAD(random_read_wait); +static DECLARE_WAIT_QUEUE_HEAD(random_write_wait); + +/* + * Forward procedure declarations + */ +#ifdef CONFIG_SYSCTL +static void sysctl_init_random(struct entropy_store *random_state); +#endif + +/***************************************************************** + * + * Utility functions, with some ASM defined functions for speed + * purposes + * + *****************************************************************/ + +/* + * More asm magic.... + * + * For entropy estimation, we need to do an integral base 2 + * logarithm. + * + * Note the "12bits" suffix - this is used for numbers between + * 0 and 4095 only. This allows a few shortcuts. 
+ */ +#if 0 /* Slow but clear version */ +static inline __u32 int_ln_12bits(__u32 word) +{ + __u32 nbits = 0; + + while (word >>= 1) + nbits++; + return nbits; +} +#else /* Faster (more clever) version, courtesy Colin Plumb */ +static inline __u32 int_ln_12bits(__u32 word) +{ + /* Smear msbit right to make an n-bit mask */ + word |= word >> 8; + word |= word >> 4; + word |= word >> 2; + word |= word >> 1; + /* Remove one bit to make this a logarithm */ + word >>= 1; + /* Count the bits set in the word */ + word -= (word >> 1) & 0x555; + word = (word & 0x333) + ((word >> 2) & 0x333); + word += (word >> 4); + word += (word >> 8); + return word & 15; +} +#endif + +#if 0 + #define DEBUG_ENT(fmt, arg...) printk("random: " fmt, ## arg) +#else + #define DEBUG_ENT(fmt, arg...) do {} while (0) +#endif +#if 0 + #define STATS_ENT(fmt, arg...) printk("random-stats: " fmt, ## arg) +#else + #define STATS_ENT(fmt, arg...) do {} while (0) +#endif + + +/********************************************************************** + * + * OS independent entropy store. Here are the functions which handle + * storing entropy in an entropy pool. 
+ * + **********************************************************************/ + +struct entropy_store { + const char *digestAlgo; + unsigned int digestsize; + struct crypto_tfm *pools[1<<DEFAULT_POOL_NUMBER]; + /* optional, handy for statistics */ + unsigned int pools_bytes[1<<DEFAULT_POOL_NUMBER]; + + const char *cipherAlgo; + /* the key */ + unsigned char key[RANDOM_MAX_DIGEST_SIZE]; + unsigned int keysize; + /* the CTR value */ + unsigned char iv[16]; + unsigned int blocksize; + struct crypto_tfm *cipher; + + /* 2^pool_number # of pools */ + unsigned int pool_number; + /* current pool to add into */ + unsigned int pool_index; + /* size of the first pool */ + unsigned int pool0_len; + /* number of times we have reseeded */ + unsigned int reseed_count; + /* time in msec of the last reseed */ + time_t reseed_time; + /* digest used during random_reseed() */ + struct crypto_tfm *reseedHash; + /* cipher used for network randomness */ + struct crypto_tfm *networkCipher; + /* flag indicating if networkCipher has been seeded */ + char networkCipher_ready; + + /* read-write data: */ + spinlock_t lock ____cacheline_aligned_in_smp; + int entropy_count; +}; + +/* + * Initialize the entropy store. The input argument is the base-2 + * logarithm of the number of pools. + * + * Returns a negative error if there is a problem. 
+ */ +static int create_entropy_store(int poolnum, struct entropy_store **ret_bucket) +{ + struct entropy_store *r; + unsigned long pool_number; + int keysize, i, j; + + pool_number = poolnum; + + r = kmalloc(sizeof(struct entropy_store), GFP_KERNEL); + if (!r) { + return -ENOMEM; + } + + memset (r, 0, sizeof(struct entropy_store)); + r->pool_number = pool_number; + r->digestAlgo = RANDOM_DEFAULT_DIGEST_ALGO; + +DEBUG_ENT("create_entropy_store() pools=%u index=%u\n", + 1<<pool_number, r->pool_index); + for (i=0; i<(1<<pool_number); i++) { +DEBUG_ENT("create_entropy_store() i=%i index=%u\n", i, r->pool_index); + r->pools[i] = crypto_alloc_tfm(r->digestAlgo, 0); + if (r->pools[i] == NULL) { + for (j=0; j<i; j++) { + if (r->pools[j] != NULL) { + crypto_free_tfm(r->pools[j]); + } + } + kfree(r); + return -ENOMEM; + } + crypto_digest_init( r->pools[i] ); + } + r->lock = SPIN_LOCK_UNLOCKED; + *ret_bucket = r; + + r->cipherAlgo = RANDOM_DEFAULT_CIPHER_ALGO; + if ((r->cipher=crypto_alloc_tfm(r->cipherAlgo, 0)) == NULL) { + return -ENOMEM; + } + + /* If the HASH's output is greater than the cipher's keysize, truncate + * to the cipher's keysize */ + keysize = crypto_tfm_alg_max_keysize(r->cipher); + r->digestsize = crypto_tfm_alg_digestsize(r->pools[0]); + r->blocksize = crypto_tfm_alg_blocksize(r->cipher); + + r->keysize = (keysize < r->digestsize) ? keysize : r->digestsize; +DEBUG_ENT("create_RANDOM %u %u %u\n", keysize, r->digestsize, r->keysize); + + if (crypto_cipher_setkey(r->cipher, r->key, r->keysize)) { + return -EINVAL; + } + + /* digest used during random_reseed() */ + if ((r->reseedHash=crypto_alloc_tfm(r->digestAlgo, 0)) == NULL) { + return -ENOMEM; + } + /* cipher used for network randomness */ + if ((r->networkCipher=crypto_alloc_tfm(r->cipherAlgo, 0)) == NULL) { + return -ENOMEM; + } + + return 0; +} + +/* + * This function adds words into the entropy "pool". It does not + * update the entropy estimate. 
The caller should call + * credit_entropy_store if this is appropriate. + */ +static void add_entropy_words(struct entropy_store *r, const __u32 *in, + int nwords, int dst_pool) +{ + unsigned long flags; + struct scatterlist sg[1]; + static unsigned int totalBytes=0; + + if (r == NULL) { + return; + } + + spin_lock_irqsave(&r->lock, flags); + + totalBytes += nwords * sizeof(__u32); + + sg[0].page = virt_to_page(in); + sg[0].offset = offset_in_page(in); + sg[0].length = nwords*sizeof(__u32); + + if (dst_pool == -1) { + r->pools_bytes[r->pool_index] += nwords * sizeof(__u32); + crypto_digest_update(r->pools[r->pool_index], sg, 1); + if (r->pool_index == 0) { + r->pool0_len += nwords*sizeof(__u32); + } + /* idx = (idx + 1) mod 2^N */ + r->pool_index = (r->pool_index + 1) + & ((1<<random_state->pool_number)-1); + } else { + /* Let's make sure nothing mean is happening... */ + dst_pool &= (1<<random_state->pool_number) - 1; + r->pools_bytes[dst_pool] += nwords * sizeof(__u32); + crypto_digest_update(r->pools[dst_pool], sg, 1); + } +DEBUG_ENT("r->pool0_len = %u\n", r->pool0_len); + + + spin_unlock_irqrestore(&r->lock, flags); +DEBUG_ENT("0 add_entropy_words() nwords=%u pool[i].bytes=%u total=%u\n", + nwords, r->pools_bytes[r->pool_index], totalBytes); +} + +/* + * Credit (or debit) the entropy store with n bits of entropy + */ +static void credit_entropy_store(struct entropy_store *r, int nbits) +{ + unsigned long flags; + + spin_lock_irqsave(&r->lock, flags); + + if (r->entropy_count + nbits < 0) { + DEBUG_ENT("negative entropy/overflow (%d+%d)\n", + r->entropy_count, nbits); + r->entropy_count = 0; + } else if (r->entropy_count + nbits > POOLBITS) { + r->entropy_count = POOLBITS; + } else { + r->entropy_count += nbits; + if (nbits) + DEBUG_ENT("%04d : added %d bits\n", + r->entropy_count, + nbits); + } + + spin_unlock_irqrestore(&r->lock, flags); +} + +/********************************************************************** + * + * Entropy batch input management 
+ * + * We batch entropy to be added to avoid increasing interrupt latency + * + **********************************************************************/ + +struct sample { + __u32 data[2]; + int credit; +}; + +static struct sample *batch_entropy_pool, *batch_entropy_copy; +static int batch_head, batch_tail; +static spinlock_t batch_lock = SPIN_LOCK_UNLOCKED; + +static int batch_max; +static void batch_entropy_process(void *private_); +static DECLARE_WORK(batch_work, batch_entropy_process, NULL); + +/* note: the size must be a power of 2 */ +static int __init batch_entropy_init(int size, struct entropy_store *r) +{ + batch_entropy_pool = kmalloc(size*sizeof(struct sample), GFP_KERNEL); + if (!batch_entropy_pool) + return -1; + batch_entropy_copy = kmalloc(size*sizeof(struct sample), GFP_KERNEL); + if (!batch_entropy_copy) { + kfree(batch_entropy_pool); + return -1; + } + batch_head = batch_tail = 0; + batch_work.data = r; + batch_max = size; + return 0; +} + +/* + * Changes to the entropy data are put into a queue rather than being added to + * the entropy counts directly. This is presumably to avoid doing heavy + * hashing calculations during an interrupt in add_timer_randomness(). + * Instead, the entropy is only added to the pool by keventd. 
+ */ +void batch_entropy_store(u32 a, u32 b, int num) +{ + int new; + unsigned long flags; + + if (!batch_max) + return; + + spin_lock_irqsave(&batch_lock, flags); + + batch_entropy_pool[batch_head].data[0] = a; + batch_entropy_pool[batch_head].data[1] = b; + batch_entropy_pool[batch_head].credit = num; + + if (((batch_head - batch_tail) & (batch_max-1)) >= (batch_max / 2)) { + /* + * Schedule it for the next timer tick: + */ + schedule_delayed_work(&batch_work, 1); + } + + new = (batch_head+1) & (batch_max-1); + if (new == batch_tail) { + DEBUG_ENT("batch entropy buffer full\n"); + } else { + batch_head = new; + } + + spin_unlock_irqrestore(&batch_lock, flags); +} + +EXPORT_SYMBOL(batch_entropy_store); + +/* + * Flush out the accumulated entropy operations, adding entropy to the passed + * store (normally random_state). If that store has enough entropy, alternate + * between randomizing the data of the primary and secondary stores. + */ +static void batch_entropy_process(void *private_) +{ + int max_entropy = POOLBITS; + unsigned head, tail; + + /* Mixing into the pool is expensive, so copy over the batch + * data and release the batch lock. The pool is at least half + * full, so don't worry too much about copying only the used + * part. + */ + spin_lock_irq(&batch_lock); + + memcpy(batch_entropy_copy, batch_entropy_pool, + batch_max*sizeof(struct sample)); + + head = batch_head; + tail = batch_tail; + batch_tail = batch_head; + + spin_unlock_irq(&batch_lock); + + while (head != tail) { + if (random_state->entropy_count >= max_entropy) { + max_entropy = POOLBITS; + } + /* + * Only credit if we're feeding into pool[0] + * Otherwise we'd be assuming entropy in pool[31] would be + * usable when we read. This is conservative, but it'll + * not over-credit our entropy estimate for users of + * /dev/random; /dev/urandom will not be affected. 
+ */ + if (random_state->pool_index == 0) { + credit_entropy_store(random_state, + batch_entropy_copy[tail].credit); + } + add_entropy_words(random_state, + batch_entropy_copy[tail].data, 2, -1); +; + + tail = (tail+1) & (batch_max-1); + } + if (random_state->entropy_count >= random_read_wakeup_thresh + && random_state->reseed_count != 0) + wake_up_interruptible(&random_read_wait); +} + +/********************************************************************* + * + * Entropy input management + * + *********************************************************************/ + +/* There is one of these per entropy source */ +struct timer_rand_state { + __u32 last_time; + __s32 last_delta,last_delta2; + int dont_count_entropy:1; +}; + +static struct timer_rand_state keyboard_timer_state; +static struct timer_rand_state mouse_timer_state; +static struct timer_rand_state extract_timer_state; +static struct timer_rand_state *irq_timer_state[NR_IRQS]; + +/* + * This function adds entropy to the entropy "pool" by using timing + * delays. It uses the timer_rand_state structure to make an estimate + * of how many bits of entropy this call has added to the pool. + * + * The number "num" is also added to the pool - it should somehow describe + * the type of event which just happened. This is currently 0-255 for + * keyboard scan codes, and 256 upwards for interrupts. + * On the i386, this is assumed to be at most 16 bits, and the high bits + * are used for a high-resolution timer. 
+ * + */ +static void add_timer_randomness(struct timer_rand_state *state, unsigned num) +{ + __u32 time; + __s32 delta, delta2, delta3; + int entropy = 0; + + /* if over the trickle threshold, use only 1 in 4096 samples */ + if ( random_state->entropy_count > trickle_thresh && + (__get_cpu_var(trickle_count)++ & 0xfff)) + return; + +#if defined (__i386__) || defined (__x86_64__) + if (cpu_has_tsc) { + __u32 high; + rdtsc(time, high); + num ^= high; + } else { + time = jiffies; + } +#elif defined (__sparc_v9__) + unsigned long tick = tick_ops->get_tick(); + + time = (unsigned int) tick; + num ^= (tick >> 32UL); +#else + time = jiffies; +#endif + + /* + * Calculate number of bits of randomness we probably added. + * We take into account the first, second and third-order deltas + * in order to make our estimate. + */ + if (!state->dont_count_entropy) { + delta = time - state->last_time; + state->last_time = time; + + delta2 = delta - state->last_delta; + state->last_delta = delta; + + delta3 = delta2 - state->last_delta2; + state->last_delta2 = delta2; + + if (delta < 0) + delta = -delta; + if (delta2 < 0) + delta2 = -delta2; + if (delta3 < 0) + delta3 = -delta3; + if (delta > delta2) + delta = delta2; + if (delta > delta3) + delta = delta3; + + /* + * delta is now minimum absolute delta. + * Round down by 1 bit on general principles, + * and limit entropy estimate to 12 bits. 
+ */ + delta >>= 1; + delta &= (1 << 12) - 1; + + entropy = int_ln_12bits(delta); + } + batch_entropy_store(num, time, entropy); +} + +void add_keyboard_randomness(unsigned char scancode) +{ + static unsigned char last_scancode; + /* ignore autorepeat (multiple key down w/o key up) */ + if (scancode != last_scancode) { + last_scancode = scancode; + add_timer_randomness(&keyboard_timer_state, scancode); + } +} + +EXPORT_SYMBOL(add_keyboard_randomness); + +void add_mouse_randomness(__u32 mouse_data) +{ + add_timer_randomness(&mouse_timer_state, mouse_data); +} + +EXPORT_SYMBOL(add_mouse_randomness); + +void add_interrupt_randomness(int irq) +{ + if (irq >= NR_IRQS || irq_timer_state[irq] == 0) + return; + + add_timer_randomness(irq_timer_state[irq], 0x100+irq); +} + +EXPORT_SYMBOL(add_interrupt_randomness); + +void add_disk_randomness(struct gendisk *disk) +{ + if (!disk || !disk->random) + return; + /* first major is 1, so we get >= 0x200 here */ + add_timer_randomness(disk->random, + 0x100+MKDEV(disk->major, disk->first_minor)); +} + +EXPORT_SYMBOL(add_disk_randomness); + +/********************************************************************* + * + * Entropy extraction routines + * + *********************************************************************/ + +#define EXTRACT_ENTROPY_USER 1 +#define EXTRACT_ENTROPY_LIMIT 4 + +static ssize_t extract_entropy(struct entropy_store *r, void * buf, + size_t nbytes, int flags); + +static inline void increment_iv(unsigned char *iv, const unsigned int IVsize) { + switch (IVsize) { + case 8: + if (!(++((u32*)iv)[0])) + ++((u32*)iv)[1]; + break; + + case 16: + if (!(++((u32*)iv)[0])) + if (!(++((u32*)iv)[1])) + if (!(++((u32*)iv)[2])) + ++((u32*)iv)[3]; + break; + + default: + { + int i; + for (i=0; i<IVsize; i++) + if (++iv[i]) + break; + } + break; + } +} + +/* + * Fortuna's Reseed + * + * Key' = hash(Key || hash(pool[a0]) || hash(pool[a1]) || ...) + * where {a0,a1,...} are factors of r->reseed_count+1 which are of the form + * 2^j, 0<=j. 
+ * Prevents backtracking attacks and, with event inputs, supports forward + * secrecy + */ +static void random_reseed(struct entropy_store *r, size_t nbytes, int flags) { + struct scatterlist sg[1]; + unsigned int i, deduct; + unsigned char tmp[RANDOM_MAX_DIGEST_SIZE]; + unsigned long cpuflags; + + deduct = (r->keysize < r->digestsize) ? r->keysize : r->digestsize; + + /* Hold lock while accounting */ + spin_lock_irqsave(&r->lock, cpuflags); + + DEBUG_ENT("%04d : trying to extract %d bits\n", + random_state->entropy_count, + deduct * 8); + + /* + * Don't extract more data than the entropy in the pooling system + */ + if (flags & EXTRACT_ENTROPY_LIMIT && nbytes >= r->entropy_count / 8) { + nbytes = r->entropy_count / 8; + } + + if (deduct*8 <= r->entropy_count) { + r->entropy_count -= deduct*8; + } else { + r->entropy_count = 0; + } + + if (r->entropy_count < random_write_wakeup_thresh) + wake_up_interruptible(&random_write_wait); + + DEBUG_ENT("%04d : debiting %d bits%s\n", + random_state->entropy_count, + deduct * 8, + flags & EXTRACT_ENTROPY_LIMIT ? "" : " (unlimited)"); + + r->reseed_count++; + r->pool0_len = 0; + + /* Entropy accounting done, release lock. 
*/ + spin_unlock_irqrestore(&r->lock, cpuflags); + + DEBUG_ENT("random_reseed count=%u\n", r->reseed_count); + + crypto_digest_init(r->reseedHash); + + sg[0].page = virt_to_page(r->key); + sg[0].offset = offset_in_page(r->key); + sg[0].length = r->keysize; + crypto_digest_update(r->reseedHash, sg, 1); + +#define TESTBIT(VAL, N)\ + ( ((VAL) >> (N)) & 1 ) + for (i=0; i<(1<<r->pool_number); i++) { + /* using pool[i] if r->reseed_count is divisible by 2^i + * since 2^0 == 1, we always use pool[0] + */ + if ( (i==0) || TESTBIT(r->reseed_count,i)==0 ) { + crypto_digest_final(r->pools[i], tmp); + + sg[0].page = virt_to_page(tmp); + sg[0].offset = offset_in_page(tmp); + sg[0].length = r->keysize; + crypto_digest_update(r->reseedHash, sg, 1); + + crypto_digest_init(r->pools[i]); + /* Each pool carries its past state forward */ + crypto_digest_update(r->pools[i], sg, 1); + } else { + /* pool j is only used once every 2^j times */ + break; + } + } +#undef TESTBIT + + crypto_digest_final(r->reseedHash, r->key); + crypto_cipher_setkey(r->cipher, r->key, r->keysize); + increment_iv(r->iv, r->blocksize); +} + +static inline time_t get_msectime(void) { + struct timeval tv; + do_gettimeofday(&tv); + return (tv.tv_sec * 1000) + (tv.tv_usec / 1000); +} + +/* + * This function extracts randomness from the "entropy pool", and + * returns it in a buffer. This function computes how many remaining + * bits of entropy are left in the pool, but it does not restrict the + * number of bytes that are actually obtained. If the EXTRACT_ENTROPY_USER + * flag is given, then the buf pointer is assumed to be in user space. + */ +static ssize_t extract_entropy(struct entropy_store *r, void * buf, + size_t nbytes, int flags) +{ + ssize_t ret, i; + __u32 tmp[RANDOM_MAX_BLOCK_SIZE]; + struct scatterlist sgiv[1], sgtmp[1]; + time_t nowtime; + + /* Redundant, but just in case... 
*/ + if (r->entropy_count > POOLBITS) + r->entropy_count = POOLBITS; + + /* + * To keep the possibility of collisions down, limit the number of + * output bytes per block cipher key. + */ + if (RANDOM_MAX_EXTRACT_SIZE < nbytes) + nbytes = RANDOM_MAX_EXTRACT_SIZE; + + if (flags & EXTRACT_ENTROPY_LIMIT) { + /* if in blocking, only output upto the entropy estimate */ + if (r->entropy_count/8 < nbytes) + nbytes = r->entropy_count/8; + /* + * if blocking and there is no entropy by our estimate, + * break out now. + */ + if (nbytes == 0) + return 0; + } + + /* + * If reading in non-blocking mode, pace ourselves in using up the pool + * system's entropy. + */ + if (! (flags & EXTRACT_ENTROPY_LIMIT) ) { + nowtime = get_msectime(); + if (r->pool0_len > 64 + && (nowtime - r->reseed_time) > RANDOM_RESEED_INTERVAL) { + random_reseed(r, nbytes, flags); + r->reseed_time = nowtime; + } + } + + sgiv[0].page = virt_to_page(r->iv); + sgiv[0].offset = offset_in_page(r->iv); + sgiv[0].length = r->blocksize; + sgtmp[0].page = virt_to_page(tmp); + sgtmp[0].offset = offset_in_page(tmp); + sgtmp[0].length = r->blocksize; + + ret = 0; + while (nbytes) { + /* + * Check if we need to break out or reschedule.... + */ + if ((flags & EXTRACT_ENTROPY_USER) && need_resched()) { + if (signal_pending(current)) { + if (ret == 0) + ret = -ERESTARTSYS; + break; + } + + DEBUG_ENT("%04d : extract sleeping (%d bytes left)\n", + random_state->entropy_count, + nbytes); + + schedule(); + + /* + * when we wakeup, there will be more data in our + * pooling system so we will reseed + */ + nowtime = get_msectime(); + if (r->pool0_len > 64 + && (nowtime-r->reseed_time) > RANDOM_RESEED_INTERVAL) { + random_reseed(r, nbytes, flags); + r->reseed_time = nowtime; + } + + DEBUG_ENT("%04d : extract woke up\n", + random_state->entropy_count); + } + + /* + * Reading from /dev/random, we limit this to the amount + * of entropy to deduct from our estimate. 
This estimate is + * most naturally updated from inside Fortuna-reseed, so we + * limit our block size here. + * + * At most, Fortuna will use e=min(r->digestsize, r->keysize) of + * entropy to reseed. + */ + if (flags & EXTRACT_ENTROPY_LIMIT) { + r->reseed_time = get_msectime(); + random_reseed(r, nbytes, flags); + } + + crypto_cipher_encrypt(r->cipher, sgtmp, sgiv, r->blocksize); + increment_iv(r->iv, r->blocksize); + + /* Copy data to destination buffer */ + i = (nbytes < r->blocksize) ? nbytes : r->blocksize; + if (flags & EXTRACT_ENTROPY_USER) { + i -= copy_to_user(buf, (__u8 const *)tmp, i); + if (!i) { + ret = -EFAULT; + break; + } + } else + memcpy(buf, (__u8 const *)tmp, i); + nbytes -= i; + buf += i; + ret += i; + } + + /* generate a new key */ + /* take into account the possibility that keysize >= blocksize */ + for (i=0; i+r->blocksize<=r->keysize; i+=r->blocksize) { + sgtmp[0].page = virt_to_page( r->key+i ); + sgtmp[0].offset = offset_in_page( r->key+i ); + sgtmp[0].length = r->blocksize; + crypto_cipher_encrypt(r->cipher, sgtmp, sgiv, 1); + increment_iv(r->iv, r->blocksize); + } + sgtmp[0].page = virt_to_page( r->key+i ); + sgtmp[0].offset = offset_in_page( r->key+i ); + sgtmp[0].length = r->blocksize-i; + crypto_cipher_encrypt(r->cipher, sgtmp, sgiv, 1); + increment_iv(r->iv, r->blocksize); + + if (crypto_cipher_setkey(r->cipher, r->key, r->keysize)) { + return -EINVAL; + } + + /* Wipe data just returned from memory */ + memset(tmp, 0, sizeof(tmp)); + + return ret; +} + +/* + * This function is the exported kernel interface. It returns some + * number of good random numbers, suitable for seeding TCP sequence + * numbers, etc. 
+ */
+void get_random_bytes(void *buf, int nbytes)
+{
+	if (random_state)
+		extract_entropy(random_state, (char *) buf, nbytes, 0);
+	else
+		printk(KERN_NOTICE "get_random_bytes called before "
+				   "random driver initialization\n");
+}
+
+EXPORT_SYMBOL(get_random_bytes);
+
+/*********************************************************************
+ *
+ * Functions to interface with Linux
+ *
+ *********************************************************************/
+
+/*
+ * Initialize the random pool with standard stuff.
+ * This is not secure random data, but it can't hurt us, and people scream
+ * when you try to remove it.
+ *
+ * NOTE: This is an OS-dependent function.
+ */
+static void init_std_data(struct entropy_store *r)
+{
+	struct timeval tv;
+	__u32 words[2];
+	char *p;
+	int i;
+
+	do_gettimeofday(&tv);
+	words[0] = tv.tv_sec;
+	words[1] = tv.tv_usec;
+	add_entropy_words(r, words, 2, -1);
+
+	/*
+	 * This doesn't lock system_utsname.  However, we are generating
+	 * entropy, so a race with a name set here is fine.
+	 */
+	p = (char *) &system_utsname;
+	for (i = sizeof(system_utsname) / sizeof(words); i; i--) {
+		memcpy(words, p, sizeof(words));
+		add_entropy_words(r, words, sizeof(words)/4, -1);
+		p += sizeof(words);
+	}
+}
+
+static int __init rand_initialize(void)
+{
+	int i;
+
+	if (create_entropy_store(DEFAULT_POOL_NUMBER, &random_state))
+		goto err;
+	if (batch_entropy_init(BATCH_ENTROPY_SIZE, random_state))
+		goto err;
+	init_std_data(random_state);
+#ifdef CONFIG_SYSCTL
+	sysctl_init_random(random_state);
+#endif
+	for (i = 0; i < NR_IRQS; i++)
+		irq_timer_state[i] = NULL;
+	memset(&keyboard_timer_state, 0, sizeof(struct timer_rand_state));
+	memset(&mouse_timer_state, 0, sizeof(struct timer_rand_state));
+	memset(&extract_timer_state, 0, sizeof(struct timer_rand_state));
+	extract_timer_state.dont_count_entropy = 1;
+	return 0;
+err:
+	return -1;
+}
+module_init(rand_initialize);
+
+void rand_initialize_irq(int irq)
+{
+	struct timer_rand_state *state;
+
+	if (irq >= NR_IRQS || irq_timer_state[irq])
+		return;
+
+	/*
+	 * If kmalloc returns null, we just won't use that entropy
+	 * source.
+	 */
+	state = kmalloc(sizeof(struct timer_rand_state), GFP_KERNEL);
+	if (state) {
+		memset(state, 0, sizeof(struct timer_rand_state));
+		irq_timer_state[irq] = state;
+	}
+}
+
+void rand_initialize_disk(struct gendisk *disk)
+{
+	struct timer_rand_state *state;
+
+	/*
+	 * If kmalloc returns null, we just won't use that entropy
+	 * source.
+	 */
+	state = kmalloc(sizeof(struct timer_rand_state), GFP_KERNEL);
+	if (state) {
+		memset(state, 0, sizeof(struct timer_rand_state));
+		disk->random = state;
+	}
+}
+
+static ssize_t
+random_read(struct file * file, char __user * buf, size_t nbytes, loff_t *ppos)
+{
+	DECLARE_WAITQUEUE(wait, current);
+	ssize_t n, retval = 0, count = 0;
+
+	if (nbytes == 0)
+		return 0;
+
+	while (nbytes > 0) {
+		n = nbytes;
+
+		DEBUG_ENT("%04d : reading %d bits, p: %d s: %d\n",
+			  random_state->entropy_count,
+			  n*8, random_state->entropy_count,
+			  random_state->entropy_count);
+
+		n = extract_entropy(random_state, buf, n,
+				    EXTRACT_ENTROPY_USER |
+				    EXTRACT_ENTROPY_LIMIT);
+
+		DEBUG_ENT("%04d : read got %d bits (%d needed, reseeds=%d)\n",
+			  random_state->entropy_count,
+			  n*8, (nbytes-n)*8,
+			  random_state->reseed_count);
+
+		if (n == 0) {
+			if (file->f_flags & O_NONBLOCK) {
+				retval = -EAGAIN;
+				break;
+			}
+			if (signal_pending(current)) {
+				retval = -ERESTARTSYS;
+				break;
+			}
+
+			DEBUG_ENT("%04d : sleeping?\n",
+				  random_state->entropy_count);
+
+			set_current_state(TASK_INTERRUPTIBLE);
+			add_wait_queue(&random_read_wait, &wait);
+
+			if (random_state->entropy_count / 8 == 0
+			    || random_state->reseed_count == 0)
+				schedule();
+
+			set_current_state(TASK_RUNNING);
+			remove_wait_queue(&random_read_wait, &wait);
+
+			DEBUG_ENT("%04d : waking up\n",
+				  random_state->entropy_count);
+
+			continue;
+		}
+
+		if (n < 0) {
+			retval = n;
+			break;
+		}
+		count += n;
+		buf += n;
+		nbytes -= n;
+		break;		/* This break makes the device work */
+				/* like a named pipe */
+	}
+
+	/*
+	 * If we gave the user some bytes, update the access time.
+	 */
+	if (count)
+		file_accessed(file);
+
+	return (count ? count : retval);
+}
+
+static ssize_t
+urandom_read(struct file * file, char __user * buf,
+	     size_t nbytes, loff_t *ppos)
+{
+	/* Don't return anything until we've reseeded at least once */
+	if (random_state->reseed_count == 0)
+		return 0;
+
+	return extract_entropy(random_state, buf, nbytes,
+			       EXTRACT_ENTROPY_USER);
+}
+
+static unsigned int
+random_poll(struct file *file, poll_table * wait)
+{
+	unsigned int mask;
+
+	poll_wait(file, &random_read_wait, wait);
+	poll_wait(file, &random_write_wait, wait);
+	mask = 0;
+	if (random_state->entropy_count >= random_read_wakeup_thresh)
+		mask |= POLLIN | POLLRDNORM;
+	if (random_state->entropy_count < random_write_wakeup_thresh)
+		mask |= POLLOUT | POLLWRNORM;
+	return mask;
+}
+
+static ssize_t
+random_write(struct file * file, const char __user * buffer,
+	     size_t count, loff_t *ppos)
+{
+	static int idx = 0;
+	int ret = 0;
+	size_t bytes;
+	__u32 buf[16];
+	const char __user *p = buffer;
+	size_t c = count;
+
+	while (c > 0) {
+		bytes = min(c, sizeof(buf));
+
+		bytes -= copy_from_user(&buf, p, bytes);
+		if (!bytes) {
+			ret = -EFAULT;
+			break;
+		}
+		c -= bytes;
+		p += bytes;
+
+		/*
+		 * User input data rotates through the pools independently of
+		 * system events.
+		 *
+		 * idx = (idx + 1) mod 2^N
+		 */
+		idx = (idx + 1) & ((1<<random_state->pool_number)-1);
+		add_entropy_words(random_state, buf, bytes, idx);
+	}
+	if (p == buffer) {
+		return (ssize_t)ret;
+	} else {
+		file->f_dentry->d_inode->i_mtime = CURRENT_TIME;
+		mark_inode_dirty(file->f_dentry->d_inode);
+		return (ssize_t)(p - buffer);
+	}
+}
+
+static int
+random_ioctl(struct inode * inode, struct file * file,
+	     unsigned int cmd, unsigned long arg)
+{
+	int size, ent_count;
+	int __user *p = (int __user *)arg;
+	int retval;
+
+	switch (cmd) {
+	case RNDGETENTCNT:
+		ent_count = random_state->entropy_count;
+		if (put_user(ent_count, p))
+			return -EFAULT;
+		return 0;
+	case RNDADDTOENTCNT:
+		if (!capable(CAP_SYS_ADMIN))
+			return -EPERM;
+		if (get_user(ent_count, p))
+			return -EFAULT;
+		credit_entropy_store(random_state, ent_count);
+		/*
+		 * Wake up waiting processes if we have enough
+		 * entropy.
+		 */
+		if (random_state->entropy_count >= random_read_wakeup_thresh
+		    && random_state->reseed_count != 0)
+			wake_up_interruptible(&random_read_wait);
+		return 0;
+	case RNDGETPOOL:
+		/* can't do this anymore */
+		return 0;
+	case RNDADDENTROPY:
+		if (!capable(CAP_SYS_ADMIN))
+			return -EPERM;
+		if (get_user(ent_count, p++))
+			return -EFAULT;
+		if (ent_count < 0)
+			return -EINVAL;
+		if (get_user(size, p++))
+			return -EFAULT;
+		retval = random_write(file, (const char __user *) p,
+				      size, &file->f_pos);
+		if (retval < 0)
+			return retval;
+		credit_entropy_store(random_state, ent_count);
+		/*
+		 * Wake up waiting processes if we have enough
+		 * entropy.
+		 */
+		if (random_state->entropy_count >= random_read_wakeup_thresh
+		    && random_state->reseed_count != 0)
+			wake_up_interruptible(&random_read_wait);
+		return 0;
+	case RNDZAPENTCNT:
+		/* Can't do this anymore */
+		return 0;
+	case RNDCLEARPOOL:
+		/* Can't do this anymore */
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
+
+struct file_operations random_fops = {
+	.read	= random_read,
+	.write	= random_write,
+	.poll	= random_poll,
+	.ioctl	= random_ioctl,
+};
+
+struct file_operations urandom_fops = {
+	.read	= urandom_read,
+	.write	= random_write,
+	.ioctl	= random_ioctl,
+};
+
+/***************************************************************
+ * Random UUID interface
+ *
+ * Used here for a Boot ID, but can be useful for other kernel
+ * drivers.
+ ***************************************************************/
+
+/*
+ * Generate random UUID
+ */
+void generate_random_uuid(unsigned char uuid_out[16])
+{
+	get_random_bytes(uuid_out, 16);
+	/* Set UUID version to 4 --- truly random generation */
+	uuid_out[6] = (uuid_out[6] & 0x0F) | 0x40;
+	/* Set the UUID variant to DCE */
+	uuid_out[8] = (uuid_out[8] & 0x3F) | 0x80;
+}
+
+EXPORT_SYMBOL(generate_random_uuid);
+
+/********************************************************************
+ *
+ * Sysctl interface
+ *
+ ********************************************************************/
+
+#ifdef CONFIG_SYSCTL
+
+#include <linux/sysctl.h>
+
+static int sysctl_poolsize;
+static int min_read_thresh, max_read_thresh;
+static int min_write_thresh, max_write_thresh;
+static char sysctl_bootid[16];
+
+static int proc_do_poolsize(ctl_table *table, int write, struct file *filp,
+			    void __user *buffer, size_t *lenp, loff_t *ppos)
+{
+	int ret;
+
+	sysctl_poolsize = POOLBITS;
+
+	ret = proc_dointvec(table, write, filp, buffer, lenp, ppos);
+	if (ret || !write ||
+	    (sysctl_poolsize == POOLBITS))
+		return ret;
+
+	return ret;	/* can't change the pool size in fortuna */
+}
+
+static int poolsize_strategy(ctl_table *table, int __user *name, int nlen,
+			     void __user *oldval, size_t __user *oldlenp,
+			     void __user *newval, size_t newlen, void **context)
+{
+	int len;
+
+	sysctl_poolsize = POOLBITS;
+
+	/*
+	 * We only handle the write case, since the read case gets
+	 * handled by the default handler (and we don't care if the
+	 * write case happens twice; it's harmless).
+	 */
+	if (newval && newlen) {
+		len = newlen;
+		if (len > table->maxlen)
+			len = table->maxlen;
+		if (copy_from_user(table->data, newval, len))
+			return -EFAULT;
+	}
+
+	return 0;
+}
+
+/*
+ * This function is used to return both the bootid UUID and a random
+ * UUID.  The difference is in whether table->data is NULL; if it is,
+ * then a new UUID is generated and returned to the user.
+ *
+ * If the user accesses this via the proc interface, it will be returned
+ * as an ASCII string in the standard UUID format.  If accessed via the
+ * sysctl system call, it is returned as 16 bytes of binary data.
+ */
+static int proc_do_uuid(ctl_table *table, int write, struct file *filp,
+			void __user *buffer, size_t *lenp, loff_t *ppos)
+{
+	ctl_table fake_table;
+	unsigned char buf[64], tmp_uuid[16], *uuid;
+
+	uuid = table->data;
+	if (!uuid) {
+		uuid = tmp_uuid;
+		uuid[8] = 0;
+	}
+	if (uuid[8] == 0)
+		generate_random_uuid(uuid);
+
+	sprintf(buf, "%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x%02x-"
+		     "%02x%02x%02x%02x%02x%02x",
+		uuid[0], uuid[1], uuid[2], uuid[3],
+		uuid[4], uuid[5], uuid[6], uuid[7],
+		uuid[8], uuid[9], uuid[10], uuid[11],
+		uuid[12], uuid[13], uuid[14], uuid[15]);
+	fake_table.data = buf;
+	fake_table.maxlen = sizeof(buf);
+
+	return proc_dostring(&fake_table, write, filp, buffer, lenp, ppos);
+}
+
+static int uuid_strategy(ctl_table *table, int __user *name, int nlen,
+			 void __user *oldval, size_t __user *oldlenp,
+			 void __user *newval, size_t newlen, void **context)
+{
+	unsigned char tmp_uuid[16], *uuid;
+	unsigned int len;
+
+	if (!oldval || !oldlenp)
+		return 1;
+
+	uuid = table->data;
+	if (!uuid) {
+		uuid = tmp_uuid;
+		uuid[8] = 0;
+	}
+	if (uuid[8] == 0)
+		generate_random_uuid(uuid);
+
+	if (get_user(len, oldlenp))
+		return -EFAULT;
+	if (len) {
+		if (len > 16)
+			len = 16;
+		if (copy_to_user(oldval, uuid, len) ||
+		    put_user(len, oldlenp))
+			return -EFAULT;
+	}
+	return 1;
+}
+
+ctl_table random_table[] = {
+	{
+		.ctl_name	= RANDOM_POOLSIZE,
+		.procname	= "poolsize",
+		.data		= &sysctl_poolsize,
+		.maxlen		= sizeof(int),
+		.mode		= 0644,
+		.proc_handler	= &proc_do_poolsize,
+		.strategy	= &poolsize_strategy,
+	},
+	{
+		.ctl_name	= RANDOM_ENTROPY_COUNT,
+		.procname	= "entropy_avail",
+		.maxlen		= sizeof(int),
+		.mode		= 0444,
+		.proc_handler	= &proc_dointvec,
+	},
+	{
+		.ctl_name	= RANDOM_READ_THRESH,
+		.procname	= "read_wakeup_threshold",
+		.data		= &random_read_wakeup_thresh,
+		.maxlen		= sizeof(int),
+		.mode		= 0644,
+		.proc_handler	= &proc_dointvec_minmax,
+		.strategy	= &sysctl_intvec,
+		.extra1		= &min_read_thresh,
+		.extra2		= &max_read_thresh,
+	},
+	{
+		.ctl_name	= RANDOM_WRITE_THRESH,
+		.procname	= "write_wakeup_threshold",
+		.data		= &random_write_wakeup_thresh,
+		.maxlen		= sizeof(int),
+		.mode		= 0644,
+		.proc_handler	= &proc_dointvec_minmax,
+		.strategy	= &sysctl_intvec,
+		.extra1		= &min_write_thresh,
+		.extra2		= &max_write_thresh,
+	},
+	{
+		.ctl_name	= RANDOM_BOOT_ID,
+		.procname	= "boot_id",
+		.data		= &sysctl_bootid,
+		.maxlen		= 16,
+		.mode		= 0444,
+		.proc_handler	= &proc_do_uuid,
+		.strategy	= &uuid_strategy,
+	},
+	{
+		.ctl_name	= RANDOM_UUID,
+		.procname	= "uuid",
+		.maxlen		= 16,
+		.mode		= 0444,
+		.proc_handler	= &proc_do_uuid,
+		.strategy	= &uuid_strategy,
+	},
+	{
+		.ctl_name	= RANDOM_DIGEST_ALGO,
+		.procname	= "digest_algo",
+		.maxlen		= 16,
+		.mode		= 0444,
+		.proc_handler	= &proc_dostring,
+	},
+	{
+		.ctl_name	= RANDOM_CIPHER_ALGO,
+		.procname	= "cipher_algo",
+		.maxlen		= 16,
+		.mode		= 0444,
+		.proc_handler	= &proc_dostring,
+	},
+	{ .ctl_name = 0 }
+};
+
+static void sysctl_init_random(struct entropy_store *random_state)
+{
+	int i;
+
+	/* If the sys-admin doesn't want people to know how fast
+	 * random events are happening, he can set the read threshold
+	 * down to zero so /dev/random never blocks.  Default is to block.
+	 * This is for the paranoid loonies who think frequency analysis
+	 * would lead to something.
+	 */
+	min_read_thresh = 0;
+	min_write_thresh = 0;
+	max_read_thresh = max_write_thresh = POOLBITS;
+	for (i=0; random_table[i].ctl_name!=0; i++) {
+		switch (random_table[i].ctl_name) {
+		case RANDOM_ENTROPY_COUNT:
+			random_table[i].data = &random_state->entropy_count;
+			break;
+
+		case RANDOM_DIGEST_ALGO:
+			random_table[i].data = (void*)random_state->digestAlgo;
+			break;
+
+		case RANDOM_CIPHER_ALGO:
+			random_table[i].data = (void*)random_state->cipherAlgo;
+			break;
+
+		default:
+			break;
+		}
+	}
+}
+#endif	/* CONFIG_SYSCTL */
+
+/********************************************************************
+ *
+ * Random functions for networking
+ *
+ ********************************************************************/
+
+/*
+ * TCP initial sequence number picking.  This uses the random number
+ * generator to pick an initial secret value.  This value is encrypted
+ * with the TCP endpoint information to provide a unique starting point
+ * for each pair of TCP endpoints.  This defeats attacks which rely on
+ * guessing the initial TCP sequence number.  This algorithm was
+ * suggested by Steve Bellovin, modified by Jean-Luc Cooke.
+ *
+ * Using a very strong hash was taking an appreciable amount of the total
+ * TCP connection establishment time, so this is a weaker hash,
+ * compensated for by changing the secret periodically.  This was changed
+ * again by Jean-Luc Cooke to use AES256-CBC encryption, which is faster
+ * still (see `/usr/bin/openssl speed md4 sha1 aes`).
+ */
+
+/* This should not be decreased so low that ISNs wrap too fast. */
+#define REKEY_INTERVAL	300
+/*
+ * Bit layout of the tcp sequence numbers (before adding current time):
+ * bit 24-31: increased after every key exchange
+ * bit 0-23: hash(source, dest)
+ *
+ * The implementation is similar to the algorithm described
+ * in the Appendix of RFC 1185, except that
+ * - it uses a 1 MHz clock instead of a 250 kHz clock
+ * - it performs a rekey every 5 minutes, which is equivalent
+ *   to a (source, dest) tuple dependent forward jump of the
+ *   clock by 0..2^(HASH_BITS+1)
+ *
+ * Thus the average ISN wraparound time is 68 minutes instead of
+ * 4.55 hours.
+ *
+ * SMP cleanup and lock avoidance with poor man's RCU.
+ * Manfred Spraul <manfred@colorfullife.com>
+ *
+ */
+#define COUNT_BITS	8
+#define COUNT_MASK	( (1<<COUNT_BITS)-1 )
+#define HASH_BITS	24
+#define HASH_MASK	( (1<<HASH_BITS)-1 )
+
+static spinlock_t ip_lock = SPIN_LOCK_UNLOCKED;
+static unsigned int ip_cnt, network_count;
+
+static void __check_and_rekey(time_t time)
+{
+	u8 tmp[RANDOM_MAX_KEY_SIZE];
+	spin_lock_bh(&ip_lock);
+
+	get_random_bytes(tmp, random_state->keysize);
+	crypto_cipher_setkey(random_state->networkCipher,
+			     (const u8*)tmp,
+			     random_state->keysize);
+	random_state->networkCipher_ready = 1;
+	network_count = (ip_cnt & COUNT_MASK) << HASH_BITS;
+	mb();
+	ip_cnt++;
+
+	spin_unlock_bh(&ip_lock);
+	return;
+}
+
+static inline void check_and_rekey(time_t time)
+{
+	static time_t rekey_time=0;
+
+	rmb();
+	if (!rekey_time || (time - rekey_time) > REKEY_INTERVAL) {
+		__check_and_rekey(time);
+		rekey_time = time;
+	}
+
+	return;
+}
+
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+__u32 secure_tcpv6_sequence_number(__u32 *saddr, __u32 *daddr,
+				   __u16 sport, __u16 dport)
+{
+	struct timeval tv;
+	__u32 seq;
+	__u32 tmp[RANDOM_MAX_BLOCK_SIZE];
+	struct scatterlist sgtmp[1];
+
+	/*
+	 * The procedure is the same as for IPv4, but addresses are longer.
+	 * Thus we must use two AES operations.
+	 */
+
+	do_gettimeofday(&tv);	/* We need the usecs below... */
+	check_and_rekey(tv.tv_sec);
+
+	sgtmp[0].page = virt_to_page(tmp);
+	sgtmp[0].offset = offset_in_page(tmp);
+	sgtmp[0].length = random_state->blocksize;
+
+	/*
+	 * AES256 is 2.5 times faster than MD4 by openssl tests.
+	 * We can afford to encrypt 2 blocks in CBC with
+	 * an IV={(sport<<16) | dport, 0, 0, 0}
+	 *
+	 * seq = ct[0], ct = Enc-CBC(Key, {ports}, {daddr, saddr});
+	 *     = Enc(Key, saddr xor Enc(Key, daddr))
+	 */
+
+	/* PT0 = daddr */
+	memcpy(tmp, daddr, random_state->blocksize);
+	/* IV = {ports,0,0,0} */
+	tmp[0] ^= (sport<<16) | dport;
+	crypto_cipher_encrypt(random_state->networkCipher, sgtmp, sgtmp, 1);
+	/* PT1 = saddr */
+	random_state->networkCipher->crt_cipher.cit_xor_block((u8 *)tmp,
+							      (const u8*)saddr);
+	crypto_cipher_encrypt(random_state->networkCipher, sgtmp, sgtmp, 1);
+
+	seq = tmp[0];
+	seq += network_count;
+	seq += tv.tv_usec + tv.tv_sec*1000000;
+
+	return seq;
+}
+EXPORT_SYMBOL(secure_tcpv6_sequence_number);
+
+__u32 secure_ipv6_id(__u32 *daddr)
+{
+	__u32 tmp[RANDOM_MAX_BLOCK_SIZE];
+	struct scatterlist sgtmp[1];
+
+	check_and_rekey(get_seconds());
+
+	memcpy(tmp, daddr, random_state->blocksize);
+	sgtmp[0].page = virt_to_page(tmp);
+	sgtmp[0].offset = offset_in_page(tmp);
+	sgtmp[0].length = random_state->blocksize;
+
+	/* id = tmp[0], tmp = Enc(Key, daddr); */
+	crypto_cipher_encrypt(random_state->networkCipher, sgtmp, sgtmp, 1);
+
+	return tmp[0];
+}
+
+EXPORT_SYMBOL(secure_ipv6_id);
+#endif
+
+
+__u32 secure_tcp_sequence_number(__u32 saddr, __u32 daddr,
+				 __u16 sport, __u16 dport)
+{
+	struct timeval tv;
+	__u32 seq;
+	__u32 tmp[RANDOM_MAX_BLOCK_SIZE];
+	struct scatterlist sgtmp[1];
+
+	/*
+	 * Pick a random secret every REKEY_INTERVAL seconds.
+	 */
+	do_gettimeofday(&tv);	/* We need the usecs below... */
+	check_and_rekey(tv.tv_sec);
+
+	/*
+	 * Pick a unique starting offset for each TCP connection's endpoints
+	 * (saddr, daddr, sport, dport).
+	 * Note that the words are placed into the starting vector, which is
+	 * then mixed with a partial MD4 over random data.
+	 */
+	/*
+	 * AES256 is 2.5 times faster than MD4 by openssl tests.
+	 * We can afford to encrypt 1 block.
+	 *
+	 * seq = ct[0], ct = Enc(Key, {(sport<<16)|dport, daddr, saddr, 0})
+	 */
+
+	tmp[0] = (sport<<16) | dport;
+	tmp[1] = daddr;
+	tmp[2] = saddr;
+	tmp[3] = 0;
+	sgtmp[0].page = virt_to_page(tmp);
+	sgtmp[0].offset = offset_in_page(tmp);
+	sgtmp[0].length = random_state->blocksize;
+	crypto_cipher_encrypt(random_state->networkCipher, sgtmp, sgtmp, 1);
+
+	seq = tmp[0];
+	seq += network_count;
+	/*
+	 * As close as possible to RFC 793, which
+	 * suggests using a 250 kHz clock.
+	 * Further reading shows this assumes 2 Mb/s networks.
+	 * For 10 Mb/s Ethernet, a 1 MHz clock is appropriate.
+	 * That's funny, Linux has one built in!  Use it!
+	 * (Networks are faster now - should this be increased?)
+	 */
+	seq += tv.tv_usec + tv.tv_sec*1000000;
+
+#if 0
+	printk("init_seq(%lx, %lx, %d, %d) = %d\n",
+	       saddr, daddr, sport, dport, seq);
+#endif
+	return seq;
+}
+
+EXPORT_SYMBOL(secure_tcp_sequence_number);
+
+/* The code below is shamelessly stolen from secure_tcp_sequence_number().
+ * All blame to Andrey V. Savochkin <saw@msu.ru>.
+ * Changed by Jean-Luc Cooke <jlcooke@certainkey.com> to use AES & C.A.P.I.
+ */
+__u32 secure_ip_id(__u32 daddr)
+{
+	struct scatterlist sgtmp[1];
+	__u32 tmp[RANDOM_MAX_BLOCK_SIZE];
+
+	check_and_rekey(get_seconds());
+
+	/*
+	 * Pick a unique starting offset for each IP destination.
+	 * id = ct[0], ct = Enc(Key, {daddr,0,0,0});
+	 */
+	tmp[0] = daddr;
+	tmp[1] = 0;
+	tmp[2] = 0;
+	tmp[3] = 0;
+	sgtmp[0].page = virt_to_page(tmp);
+	sgtmp[0].offset = offset_in_page(tmp);
+	sgtmp[0].length = random_state->blocksize;
+
+	crypto_cipher_encrypt(random_state->networkCipher, sgtmp, sgtmp, 1);
+
+	return tmp[0];
+}
+
+#ifdef CONFIG_SYN_COOKIES
+/*
+ * Secure SYN cookie computation.  This is the algorithm worked out by
+ * Dan Bernstein and Eric Schenk.
+ *
+ * For Linux I implement the 1 minute counter by looking at the jiffies clock.
+ * The count is passed in as a parameter, so this code doesn't much care.
+ *
+ * SYN cookie (and seq# & id#) changed in 2004 by Jean-Luc Cooke
+ * <jlcooke@certainkey.com> to use the C.A.P.I. and AES256.
+ */
+
+#define COOKIEBITS 24	/* Upper bits store count */
+#define COOKIEMASK (((__u32)1 << COOKIEBITS) - 1)
+
+__u32 secure_tcp_syn_cookie(__u32 saddr, __u32 daddr, __u16 sport,
+			    __u16 dport, __u32 sseq, __u32 count, __u32 data)
+{
+	struct scatterlist sg[1];
+	__u32 tmp[4];
+
+	/*
+	 * Compute the secure sequence number.
+	 *
+	 * Output is the 32bit tag of a CBC-MAC of
+	 * PT={count,0,0,0} with IV={saddr,daddr,sport|dport,sseq}
+	 * cookie = {<8bit count>,
+	 *           truncate_24bit(
+	 *             Encrypt(Sec, {saddr,daddr,sport|dport,sseq})
+	 *           )
+	 *          }
+	 *
+	 * DJB wrote (http://cr.yp.to/syncookies/archive) about how to do this
+	 * with hash algorithms.
+	 * - we can replace the two SHA1s used in the previous kernel with 1
+	 *   AES and make things 5x faster
+	 * - I'd like to propose we replace the two whitenings with a
+	 *   single operation, since we were only using addition modulo 2^32 of
+	 *   all these values anyways.  Not to mention the hashes differ only in
+	 *   that the second processes more data... why drop the first hash?
+	 *   We did learn that addition is commutative and associative long ago.
+	 * - by replacing two SHA1s and addition modulo 2^32 with encryption of
+	 *   a 32bit value using CAPI, we've made it 1,000,000,000 times easier
+	 *   to understand what is going on.
+	 */
+
+	tmp[0] = saddr;
+	tmp[1] = daddr;
+	tmp[2] = (sport << 16) + dport;
+	tmp[3] = sseq;
+
+	sg[0].page = virt_to_page(tmp);
+	sg[0].offset = offset_in_page(tmp);
+	sg[0].length = random_state->blocksize;
+	if (!random_state->networkCipher_ready) {
+		check_and_rekey(get_seconds());
+	}
+	/* tmp[]/sg[0] = Enc(Sec, {saddr,daddr,sport|dport,sseq}) */
+	crypto_cipher_encrypt(random_state->networkCipher, sg, sg, 1);
+
+	/* cookie = CTR encrypt of 8-bit count and 24-bit data */
+	return tmp[0] ^ ( (count << COOKIEBITS) | (data & COOKIEMASK) );
+}
+
+/*
+ * This retrieves the small "data" value from the syncookie.
+ * If the syncookie is bad, the data returned will be out of
+ * range.  This must be checked by the caller.
+ *
+ * The count value used to generate the cookie must be within
+ * "maxdiff" of the current (passed-in) "count".  The return value
+ * is (__u32)-1 if this test fails.
+ */
+__u32 check_tcp_syn_cookie(__u32 cookie, __u32 saddr, __u32 daddr, __u16 sport,
+			   __u16 dport, __u32 sseq, __u32 count, __u32 maxdiff)
+{
+	struct scatterlist sg[1];
+	__u32 tmp[4], thiscount, diff;
+
+	if (random_state == NULL || !random_state->networkCipher_ready)
+		return (__u32)-1;	/* Well, duh! */
+
+	tmp[0] = saddr;
+	tmp[1] = daddr;
+	tmp[2] = (sport << 16) + dport;
+	tmp[3] = sseq;
+	sg[0].page = virt_to_page(tmp);
+	sg[0].offset = offset_in_page(tmp);
+	sg[0].length = random_state->blocksize;
+	crypto_cipher_encrypt(random_state->networkCipher, sg, sg, 1);
+
+	/* CTR decrypt the cookie */
+	cookie ^= tmp[0];
+
+	/* top 8 bits are 'count' */
+	thiscount = cookie >> COOKIEBITS;
+
+	diff = count - thiscount;
+	if (diff >= maxdiff)
+		return (__u32)-1;
+
+	/* bottom 24 bits are 'data' */
+	return cookie & COOKIEMASK;
+}
+#endif
diff -X exclude -Nur linux-2.6.8.1/drivers/char/random.c linux-2.6.8.1-rand2/drivers/char/random.c
--- linux-2.6.8.1/drivers/char/random.c	2004-09-27 16:04:53.000000000 -0400
+++ linux-2.6.8.1-rand2/drivers/char/random.c	2004-09-28 23:25:46.000000000 -0400
@@ -261,6 +261,17 @@
 #include <asm/io.h>
 
 /*
+ * In September 2004, Jean-Luc Cooke wrote a Fortuna RNG for Linux
+ * which was non-blocking and used the Cryptographic API.
+ * We use it now if the user wishes.
+ */
+#ifdef CONFIG_CRYPTO_RANDOM_FORTUNA
+  #warning using the Fortuna PRNG for /dev/random
+  #include "../crypto/random-fortuna.c"
+#else /* CONFIG_CRYPTO_RANDOM_FORTUNA */
+  #warning using the Linux Legacy PRNG for /dev/random
+
+/*
  * Configuration information
  */
 #define DEFAULT_POOL_SIZE 512
@@ -2483,3 +2494,5 @@
 	return (cookie - tmp[17]) & COOKIEMASK;	/* Leaving the data behind */
 }
 #endif
+
+#endif /* CONFIG_CRYPTO_RANDOM_FORTUNA */
diff -X exclude -Nur linux-2.6.8.1/include/linux/sysctl.h linux-2.6.8.1-rand2/include/linux/sysctl.h
--- linux-2.6.8.1/include/linux/sysctl.h	2004-08-14 06:55:33.000000000 -0400
+++ linux-2.6.8.1-rand2/include/linux/sysctl.h	2004-09-29 10:45:20.592695040 -0400
@@ -198,7 +198,9 @@
 	RANDOM_READ_THRESH=3,
 	RANDOM_WRITE_THRESH=4,
 	RANDOM_BOOT_ID=5,
-	RANDOM_UUID=6
+	RANDOM_UUID=6,
+	RANDOM_DIGEST_ALGO=7,
+	RANDOM_CIPHER_ALGO=8
 };
 
 /* /proc/sys/kernel/pty */

^ permalink raw reply	[flat|nested] 28+ messages in thread
* Re: [PROPOSAL/PATCH 2] Fortuna PRNG in /dev/random 2004-09-30 13:36 ` Jean-Luc Cooke @ 2004-10-01 12:56 ` Jean-Luc Cooke 0 siblings, 0 replies; 28+ messages in thread From: Jean-Luc Cooke @ 2004-10-01 12:56 UTC (permalink / raw) To: Felipe Alfaro Solana, Jan-Benedict Glaw Cc: jmorris, cryptoapi, Theodore Ts'o, linux-kernel, linux [-- Attachment #1: Type: text/plain, Size: 1055 bytes --] Chris Han pointed out that my #include of "random-fortuna.c" could be done much cleaner if the drivers/char/Makefile had some ifeq-else-endif logic in it. He also pointed our my #include of crypto/internal.h was not needed anymore. Here is an update. v2.1.5 1-Oct-2004 JLC On Thu, Sep 30, 2004 at 09:36:49AM -0400, Jean-Luc Cooke wrote: > In the random-fortuna.c file I have some "#if !defined CONFIG_CRYPTO_SHA256" > > As Jan-Benedict Glaw pointed out, I could just manually select the > algorithms, which is what I did. > > Updated patch, only changes are to crypto/Kconfig. > > Cheers, > > JLC > > On Thu, Sep 30, 2004 at 11:03:52AM +0200, Felipe Alfaro Solana wrote: > > On Sep 30, 2004, at 06:23, Jean-Luc Cooke wrote: > > > > ><fortuna-2.6.8.1.patch> > > > > You said AES and SHA-256 _must_ be built-in, but I can't see any code > > on your patch that enforces selection of those config options. Thus, > > it's possible to compile the kernel when CONFIG_CRYPTO_SHA256=n and > > CONFIG_CRYPTO_AES=n although, of course, it will fail. [-- Attachment #2: fortuna-2.6.8.1.patch --] [-- Type: text/plain, Size: 64643 bytes --] --- linux-2.6.8.1/crypto/Kconfig 2004-08-14 06:56:22.000000000 -0400 +++ linux-2.6.8.1-rand2/crypto/Kconfig 2004-09-30 09:33:39.775410632 -0400 @@ -9,6 +9,17 @@ help This option provides the core Cryptographic API. +config CRYPTO_RANDOM_FORTUNA + bool "The Fortuna RNG" + select CRYPTO_SHA256 + select CRYPTO_AES + help + Replaces the legacy Linux RNG with one using the crypto API + and Fortuna by Ferguson and Schneier. Entropy estimation, and + a throttled /dev/random remain. 
Improvements include faster + /dev/urandom output and event input mixing. + Note: Requires AES and SHA256 to be built-in. + config CRYPTO_HMAC bool "HMAC support" depends on CRYPTO diff -X exclude -Nur linux-2.6.8.1/include/linux/sysctl.h linux-2.6.8.1-rand2/include/linux/sysctl.h --- linux-2.6.8.1/include/linux/sysctl.h 2004-08-14 06:55:33.000000000 -0400 +++ linux-2.6.8.1-rand2/include/linux/sysctl.h 2004-09-29 10:45:20.592695040 -0400 @@ -198,7 +198,9 @@ RANDOM_READ_THRESH=3, RANDOM_WRITE_THRESH=4, RANDOM_BOOT_ID=5, - RANDOM_UUID=6 + RANDOM_UUID=6, + RANDOM_DIGEST_ALGO=7, + RANDOM_CIPHER_ALGO=8 }; /* /proc/sys/kernel/pty */ diff -X exclude -Nur linux-2.6.8.1/drivers/char/Makefile linux-2.6.8.1-rand2/drivers/char/Makefile --- linux-2.6.8.1/drivers/char/Makefile 2004-08-14 06:56:22.000000000 -0400 +++ linux-2.6.8.1-rand2/drivers/char/Makefile 2004-10-01 08:50:06.419933088 -0400 @@ -7,7 +7,13 @@ # FONTMAPFILE = cp437.uni -obj-y += mem.o random.o tty_io.o n_tty.o tty_ioctl.o pty.o misc.o +obj-y += mem.o +ifeq ($(CONFIG_CRYPTO_RANDOM_FORTUNA),y) + obj-y += random-fortuna.o + else + obj-y += random.o +endif +obj-y += tty_io.o n_tty.o tty_ioctl.o pty.o misc.o obj-$(CONFIG_VT) += vt_ioctl.o vc_screen.o consolemap.o \ consolemap_deftbl.o selection.o keyboard.o diff -X exclude -Nur linux-2.6.8.1/drivers/char/random-fortuna.c linux-2.6.8.1-rand2/drivers/char/random-fortuna.c --- linux-2.6.8.1/drivers/char/random-fortuna.c 1969-12-31 19:00:00.000000000 -0500 +++ linux-2.6.8.1-rand2/drivers/char/random-fortuna.c 2004-10-01 08:56:36.030703248 -0400 @@ -0,0 +1,2094 @@ +/* + * random-fortuna.c -- A cryptographically strong random number generator + * using Fortuna. + * + * Version 2.1.5, last modified 1-Oct-2004 + * Change log: + * v2.1.5: + * - random-fortuna.c is no longer #include'd from random.c; the + * drivers/char/Makefile takes care of this now, thanks to Chris Han + * v2.1.4: + * - Fixed flaw where, in some situations, /dev/random would not block. 
+ * v2.1.3: + * - Added a separate round-robin index for user inputs. Prevents a + * super-clever user from forcing all system (unknown) random + * events to be fed into, say, pool-31. + * - Added a "can only extract RANDOM_MAX_EXTRACT_SIZE bytes at a time" + * limit to extract_entropy() + * v2.1.2: + * - Ts'o's (I love writing that!) recommendation to force reseeds + * to be at least 0.1 ms apart. + * v2.1.1: + * - Re-worked to keep the blocking /dev/random. Yes, I finally gave + * in to what everyone's been telling me. + * - Entropy accounting is *only* done on events going into pool-0, + * since it's used for every reseed. For those who expect /dev/random + * to only output data when the system is confident it has enough + * info-theoretic entropy to justify this output, this is the only + * sensible way to count entropy. + * v2.0: + * - Initial version + * + * Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999. All + * rights reserved. + * Copyright Jean-Luc Cooke, 2004. All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + * notice, and the entire permission notice in its entirety, + * including the disclaimer of warranties. + * 2. Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * 3. The name of the author may not be used to endorse or promote + * products derived from this software without specific prior + * written permission. + * + * ALTERNATIVELY, this product may be distributed under the terms of + * the GNU General Public License, in which case the provisions of the GPL are + * required INSTEAD OF the above restrictions. 
(This clause is + * necessary due to a potential bad interaction between the GPL and + * the restrictions contained in a BSD-style copyright.) + * + * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED + * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES + * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, ALL OF + * WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE + * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT + * OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR + * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE + * USE OF THIS SOFTWARE, EVEN IF NOT ADVISED OF THE POSSIBILITY OF SUCH + * DAMAGE. + */ + +/* + * Taken from random.c, updated by Jean-Luc Cooke <jlcooke@certainkey.com> + * (now, with legal B.S. out of the way.....) + * + * This routine gathers environmental noise from device drivers, etc., + * and returns good random numbers, suitable for cryptographic use. + * Besides the obvious cryptographic uses, these numbers are also good + * for seeding TCP sequence numbers, and other places where it is + * desirable to have numbers which are not only random, but hard to + * predict by an attacker. + * + * Theory of operation + * =================== + * + * Computers are very predictable devices. Hence it is extremely hard + * to produce truly random numbers on a computer --- as opposed to + * pseudo-random numbers, which can easily generated by using a + * algorithm. Unfortunately, it is very easy for attackers to guess + * the sequence of pseudo-random number generators, and for some + * applications this is not acceptable. 
So instead, we must try to + * gather "environmental noise" from the computer's environment, which + * must be hard for outside attackers to observe, and use that to + * generate random numbers. In a Unix environment, this is best done + * from inside the kernel. + * + * Sources of randomness from the environment include inter-keyboard + * timings, inter-interrupt timings from some interrupts, and other + * events which are both (a) non-deterministic and (b) hard for an + * outside observer to measure. Randomness from these sources are + * added to an "entropy pool", which is mixed. + * As random bytes are mixed into the entropy pool, the routines keep + * an *estimate* of how many bits of randomness have been stored into + * the random number generator's internal state. + * + * Even if it is possible to analyze Fortuna in some clever way, as + * long as the amount of data returned from the generator is less than + * the inherent entropy we've estimated in the pool, the output data + * is totally unpredictable. For this reason, the routine decreases + * its internal estimate of how many bits of "true randomness" are + * contained in the entropy pool as it outputs random numbers. + * + * If this estimate goes to zero, the routine can still generate + * random numbers; however, an attacker may (at least in theory) be + * able to infer the future output of the generator from prior + * outputs. This requires successful cryptanalysis of Fortuna, which is + * not believed to be feasible, but there is a remote possibility. + * Nonetheless, these numbers should be useful for the vast majority + * of purposes. + * + * Exported interfaces ---- output + * =============================== + * + * There are three exported interfaces; the first is one designed to + * be used from within the kernel: + * + * void get_random_bytes(void *buf, int nbytes); + * + * This interface will return the requested number of random bytes, + * and place it in the requested buffer. 
+ * + * The two other interfaces are two character devices /dev/random and + * /dev/urandom. /dev/random is suitable for use when very high + * quality randomness is desired (for example, for key generation or + * one-time pads), as it will only return a maximum of the number of + * bits of randomness (as estimated by the random number generator) + * contained in the entropy pool. + * + * The /dev/urandom device does not have this limit, and will return + * as many bytes as are requested. As more and more random bytes are + * requested without giving time for the entropy pool to recharge, + * this will result in random numbers that are merely cryptographically + * strong. For many applications, however, this is acceptable. + * + * Exported interfaces ---- input + * ============================== + * + * The current exported interfaces for gathering environmental noise + * from the devices are: + * + * void add_keyboard_randomness(unsigned char scancode); + * void add_mouse_randomness(__u32 mouse_data); + * void add_interrupt_randomness(int irq); + * + * add_keyboard_randomness() uses the inter-keypress timing, as well as the + * scancode as random inputs into the "entropy pool". + * + * add_mouse_randomness() uses the mouse interrupt timing, as well as + * the reported position of the mouse from the hardware. + * + * add_interrupt_randomness() uses the inter-interrupt timing as random + * inputs to the entropy pool. Note that not all interrupts are good + * sources of randomness! For example, the timer interrupts is not a + * good choice, because the periodicity of the interrupts is too + * regular, and hence predictable to an attacker. Disk interrupts are + * a better measure, since the timing of the disk interrupts are more + * unpredictable. + * + * All of these routines try to estimate how many bits of randomness a + * particular randomness source. They do this by keeping track of the + * first and second order deltas of the event timings. 
+ * + * Ensuring unpredictability at system startup + * ============================================ + * + * When any operating system starts up, it will go through a sequence + * of actions that are fairly predictable by an adversary, especially + * if the start-up does not involve interaction with a human operator. + * This reduces the actual number of bits of unpredictability in the + * entropy pool below the value in entropy_count. In order to + * counteract this effect, it helps to carry information in the + * entropy pool across shut-downs and start-ups. To do this, put the + * following lines in an appropriate script which is run during the boot + * sequence: + * + * echo "Initializing random number generator..." + * random_seed=/var/run/random-seed + * # Carry a random seed from start-up to start-up + * # Load and then save the whole entropy pool + * if [ -f $random_seed ]; then + * cat $random_seed >/dev/urandom + * else + * touch $random_seed + * fi + * chmod 600 $random_seed + * dd if=/dev/urandom of=$random_seed count=8 bs=256 + * + * and the following lines in an appropriate script which is run as + * the system is shut down: + * + * # Carry a random seed from shut-down to start-up + * # Save the whole entropy pool + * # Fortuna resists using all of its pool material, so we need to + * # draw 8 separate times (count=8) to ensure we get the entropy + * # from pools 0-3. count=2048 would cover pools 0-10, etc. + * echo "Saving random seed..." + * random_seed=/var/run/random-seed + * touch $random_seed + * chmod 600 $random_seed + * dd if=/dev/urandom of=$random_seed count=8 bs=256 + * + * For example, on most modern systems using the System V init + * scripts, such code fragments would be found in + * /etc/rc.d/init.d/random. On older Linux systems, the correct script + * location might be in /etc/rcb.d/rc.local or /etc/rc.d/rc.0. 
+ * + * Effectively, these commands cause the contents of the entropy pool + * to be saved at shut-down time and reloaded into the entropy pool at + * start-up. (The 'dd' in the addition to the bootup script is to + * make sure that /etc/random-seed is different for every start-up, + * even if the system crashes without executing rc.0.) Even with + * complete knowledge of the start-up activities, predicting the state + * of the entropy pool requires knowledge of the previous history of + * the system. + * + * Configuring the /dev/random driver under Linux + * ============================================== + * + * The /dev/random driver under Linux uses minor numbers 8 and 9 of + * the /dev/mem major number (#1). So if your system does not have + * /dev/random and /dev/urandom created already, they can be created + * by using the commands: + * + * mknod /dev/random c 1 8 + * mknod /dev/urandom c 1 9 + * + * Acknowledgements: + * ================= + * + * Ideas for constructing this random number generator were derived + * from Pretty Good Privacy's random number generator, and from private + * discussions with Phil Karn. Colin Plumb provided a faster random + * number generator, which speed up the mixing function of the entropy + * pool, taken from PGPfone. Dale Worley has also contributed many + * useful ideas and suggestions to improve this driver. + * + * Any flaws in the design are solely my (jlcooke) responsibility, and + * should not be attributed to the Phil, Colin, or any of authors of PGP + * or the legacy random.c (Ted Ts'o). + * + * Further background information on this topic may be obtained from + * RFC 1750, "Randomness Recommendations for Security", by Donald + * Eastlake, Steve Crocker, and Jeff Schiller. And Chapter 10 of + * Practical Cryptography by Ferguson and Schneier. 
+ */ + +#include <linux/utsname.h> +#include <linux/config.h> +#include <linux/module.h> +#include <linux/kernel.h> +#include <linux/major.h> +#include <linux/string.h> +#include <linux/fcntl.h> +#include <linux/slab.h> +#include <linux/random.h> +#include <linux/poll.h> +#include <linux/init.h> +#include <linux/fs.h> +#include <linux/workqueue.h> +#include <linux/genhd.h> +#include <linux/interrupt.h> +#include <linux/spinlock.h> +#include <linux/percpu.h> +#include <linux/crypto.h> + +#include <asm/scatterlist.h> +#include <asm/processor.h> +#include <asm/uaccess.h> +#include <asm/irq.h> +#include <asm/io.h> + + +/* + * Configuration information + */ +#define BATCH_ENTROPY_SIZE 256 +/* milli-seconds between random_reseeds for non-blocking reads */ +#define RANDOM_RESEED_INTERVAL 100 +/* + * Number of bytes you can extract at a time, 1MB is recomended in + * Practical Cryptography rev-0 + */ +#define RANDOM_MAX_EXTRACT_SIZE (1<<20) +#define USE_SHA256 +#define USE_AES + +/* + * Compile-time checking for our desired message digest + */ +#if defined USE_SHA256 + #if !CONFIG_CRYPTO_SHA256 + #error SHA256 not a built-in module, Fortuna configured to use it. + #endif + #define RANDOM_DEFAULT_DIGEST_ALGO "sha256" +#elif defined USE_WHIRLPOOL + #if !CONFIG_CRYPTO_WHIRLPOOL + #error WHIRLPOOL not a built-in module, Fortuna configured to use it. + #endif + #define RANDOM_DEFAULT_DIGEST_ALGO "whirlpool" +#else + #error Desired message digest algorithm not found +#endif + +/* + * Compile-time checking for our desired block cipher + */ +#if defined USE_AES + #if (!CONFIG_CRYPTO_AES && !CONFIG_CRYPTO_AES_586) + #error AES not a built-in module, Fortuna configured to use it. + #endif + #define RANDOM_DEFAULT_CIPHER_ALGO "aes" +#elif defined USE_TWOFISH + #if (!CONFIG_CRYPTO_TWOFISH && !CONFIG_CRYPTO_TWOFISH_586) + #error TWOFISH not a built-in module, Fortuna configured to use it. 
+ #endif + #define RANDOM_DEFAULT_CIPHER_ALGO "twofish" +#else + #error Desired block cipher algorithm not found +#endif /* USE_AES */ + +#define DEFAULT_POOL_NUMBER 5 /* 2^{5} = 32 pools */ +#define DEFAULT_POOL_SIZE ( (1<<DEFAULT_POOL_NUMBER) * 256) +/* largest block of random data to extract at a time when in blocking-mode */ +#define TMP_BUF_SIZE 512 +/* SHA512/WHIRLPOOL have 64bytes == 512 bits */ +#define RANDOM_MAX_DIGEST_SIZE 64 +/* AES256 has 16byte blocks == 128 bits */ +#define RANDOM_MAX_BLOCK_SIZE 16 +/* AES256 has 32byte keys == 256 bits */ +#define RANDOM_MAX_KEY_SIZE 32 + +/* + * The minimum number of bits of entropy before we wake up a read on + * /dev/random. We also wait for reseed_count>0 and we do a + * random_reseed() once we do wake up. + */ +static int random_read_wakeup_thresh = 64; + +/* + * If the entropy count falls under this number of bits, then we + * should wake up processes which are selecting or polling on write + * access to /dev/random. + */ +static int random_write_wakeup_thresh = 128; + +/* + * When the input pool goes over trickle_thresh, start dropping most + * samples to avoid wasting CPU time and reduce lock contention. 
+ */ + +static int trickle_thresh = DEFAULT_POOL_SIZE * 7; + +static DEFINE_PER_CPU(int, trickle_count) = 0; + +#define POOLBYTES\ + ( (1<<random_state->pool_number) * random_state->digestsize ) +#define POOLBITS ( POOLBYTES * 8 ) + +/* + * Linux 2.2 compatibility + */ +#ifndef DECLARE_WAITQUEUE +#define DECLARE_WAITQUEUE(WAIT, PTR) struct wait_queue WAIT = { PTR, NULL } +#endif +#ifndef DECLARE_WAIT_QUEUE_HEAD +#define DECLARE_WAIT_QUEUE_HEAD(WAIT) struct wait_queue *WAIT +#endif + +/* + * Static global variables + */ +static struct entropy_store *random_state; /* The default global store */ +static DECLARE_WAIT_QUEUE_HEAD(random_read_wait); +static DECLARE_WAIT_QUEUE_HEAD(random_write_wait); + +/* + * Forward procedure declarations + */ +#ifdef CONFIG_SYSCTL +static void sysctl_init_random(struct entropy_store *random_state); +#endif + +/***************************************************************** + * + * Utility functions, with some ASM defined functions for speed + * purposes + * + *****************************************************************/ + +/* + * More asm magic.... + * + * For entropy estimation, we need to do an integral base 2 + * logarithm. + * + * Note the "12bits" suffix - this is used for numbers between + * 0 and 4095 only. This allows a few shortcuts. 
+ */ +#if 0 /* Slow but clear version */ +static inline __u32 int_ln_12bits(__u32 word) +{ + __u32 nbits = 0; + + while (word >>= 1) + nbits++; + return nbits; +} +#else /* Faster (more clever) version, courtesy Colin Plumb */ +static inline __u32 int_ln_12bits(__u32 word) +{ + /* Smear msbit right to make an n-bit mask */ + word |= word >> 8; + word |= word >> 4; + word |= word >> 2; + word |= word >> 1; + /* Remove one bit to make this a logarithm */ + word >>= 1; + /* Count the bits set in the word */ + word -= (word >> 1) & 0x555; + word = (word & 0x333) + ((word >> 2) & 0x333); + word += (word >> 4); + word += (word >> 8); + return word & 15; +} +#endif + +#if 0 + #define DEBUG_ENT(fmt, arg...) printk("random: " fmt, ## arg) +#else + #define DEBUG_ENT(fmt, arg...) do {} while (0) +#endif +#if 0 + #define STATS_ENT(fmt, arg...) printk("random-stats: " fmt, ## arg) +#else + #define STATS_ENT(fmt, arg...) do {} while (0) +#endif + + +/********************************************************************** + * + * OS independent entropy store. Here are the functions which handle + * storing entropy in an entropy pool. 
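[Editorial aside, not part of the patch: the two int_ln_12bits() variants above can be checked against each other in a small userspace harness. This sketch reproduces both functions outside the kernel; the _slow suffix is added here for illustration. The branch-free version is bit-for-bit equivalent to the loop for every 12-bit input.]

```c
#include <assert.h>
#include <stdint.h>

/* Slow but clear: floor(log2(word)) for 0 < word < 4096, 0 for word == 0 */
static inline uint32_t int_ln_12bits_slow(uint32_t word)
{
	uint32_t nbits = 0;

	while (word >>= 1)
		nbits++;
	return nbits;
}

/* Branch-free version from the patch (courtesy Colin Plumb) */
static inline uint32_t int_ln_12bits(uint32_t word)
{
	/* Smear msbit right to make an n-bit mask */
	word |= word >> 8;
	word |= word >> 4;
	word |= word >> 2;
	word |= word >> 1;
	/* Remove one bit to make this a logarithm */
	word >>= 1;
	/* Count the bits set in the word (12-bit popcount) */
	word -= (word >> 1) & 0x555;
	word = (word & 0x333) + ((word >> 2) & 0x333);
	word += (word >> 4);
	word += (word >> 8);
	return word & 15;
}
```

Exhaustively comparing the two over 0..4095 is cheap and confirms the shortcuts are valid for exactly the 12-bit range the suffix promises.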
+ * + **********************************************************************/ + +struct entropy_store { + const char *digestAlgo; + unsigned int digestsize; + struct crypto_tfm *pools[1<<DEFAULT_POOL_NUMBER]; + /* optional, handy for statistics */ + unsigned int pools_bytes[1<<DEFAULT_POOL_NUMBER]; + + const char *cipherAlgo; + /* the key */ + unsigned char key[RANDOM_MAX_DIGEST_SIZE]; + unsigned int keysize; + /* the CTR value */ + unsigned char iv[16]; + unsigned int blocksize; + struct crypto_tfm *cipher; + + /* 2^pool_number # of pools */ + unsigned int pool_number; + /* current pool to add into */ + unsigned int pool_index; + /* size of the first pool */ + unsigned int pool0_len; + /* number of time we have reset */ + unsigned int reseed_count; + /* time in msec of the last reseed */ + time_t reseed_time; + /* digest used during random_reseed() */ + struct crypto_tfm *reseedHash; + /* cipher used for network randomness */ + struct crypto_tfm *networkCipher; + /* flag indicating if networkCipher has been seeded */ + char networkCipher_ready; + + /* read-write data: */ + spinlock_t lock ____cacheline_aligned_in_smp; + int entropy_count; +}; + +/* + * Initialize the entropy store. The input argument is the size of + * the random pool. + * + * Returns an negative error if there is a problem. 
+ */ +static int create_entropy_store(int poolnum, struct entropy_store **ret_bucket) +{ + struct entropy_store *r; + unsigned long pool_number; + int keysize, i, j; + + pool_number = poolnum; + + r = kmalloc(sizeof(struct entropy_store), GFP_KERNEL); + if (!r) { + return -ENOMEM; + } + + memset (r, 0, sizeof(struct entropy_store)); + r->pool_number = pool_number; + r->digestAlgo = RANDOM_DEFAULT_DIGEST_ALGO; + +DEBUG_ENT("create_entropy_store() pools=%u index=%u\n", + 1<<pool_number, r->pool_index); + for (i=0; i<(1<<pool_number); i++) { +DEBUG_ENT("create_entropy_store() i=%i index=%u\n", i, r->pool_index); + r->pools[i] = crypto_alloc_tfm(r->digestAlgo, 0); + if (r->pools[i] == NULL) { + for (j=0; j<i; j++) { + if (r->pools[j] != NULL) { + kfree(r->pools[j]); + } + } + kfree(r); + return -ENOMEM; + } + crypto_digest_init( r->pools[i] ); + } + r->lock = SPIN_LOCK_UNLOCKED; + *ret_bucket = r; + + r->cipherAlgo = RANDOM_DEFAULT_CIPHER_ALGO; + if ((r->cipher=crypto_alloc_tfm(r->cipherAlgo, 0)) == NULL) { + return -ENOMEM; + } + + /* If the HASH's output is greater then the cipher's keysize, truncate + * to the cipher's keysize */ + keysize = crypto_tfm_alg_max_keysize(r->cipher); + r->digestsize = crypto_tfm_alg_digestsize(r->pools[0]); + r->blocksize = crypto_tfm_alg_blocksize(r->cipher); + + r->keysize = (keysize < r->digestsize) ? keysize : r->digestsize; +DEBUG_ENT("create_RANDOM %u %u %u\n", keysize, r->digestsize, r->keysize); + + if (crypto_cipher_setkey(r->cipher, r->key, r->keysize)) { + return -EINVAL; + } + + /* digest used duing random-reseed() */ + if ((r->reseedHash=crypto_alloc_tfm(r->digestAlgo, 0)) == NULL) { + return -ENOMEM; + } + /* cipher used for network randomness */ + if ((r->networkCipher=crypto_alloc_tfm(r->cipherAlgo, 0)) == NULL) { + return -ENOMEM; + } + + return 0; +} + +/* + * This function adds a byte into the entropy "pool". It does not + * update the entropy estimate. 
The caller should call + * credit_entropy_store if this is appropriate. + */ +static void add_entropy_words(struct entropy_store *r, const __u32 *in, + int nwords, int dst_pool) +{ + unsigned long flags; + struct scatterlist sg[1]; + static unsigned int totalBytes=0; + + if (r == NULL) { + return; + } + + spin_lock_irqsave(&r->lock, flags); + + totalBytes += nwords * sizeof(__u32); + + sg[0].page = virt_to_page(in); + sg[0].offset = offset_in_page(in); + sg[0].length = nwords*sizeof(__u32); + + if (dst_pool == -1) { + r->pools_bytes[r->pool_index] += nwords * sizeof(__u32); + crypto_digest_update(r->pools[r->pool_index], sg, 1); + if (r->pool_index == 0) { + r->pool0_len += nwords*sizeof(__u32); + } + /* idx = (idx + 1) mod ( (2^N)-1 ) */ + r->pool_index = (r->pool_index + 1) + & ((1<<random_state->pool_number)-1); + } else { + /* Let's make sure nothing mean is happening... */ + dst_pool &= (1<<random_state->pool_number) - 1; + r->pools_bytes[dst_pool] += nwords * sizeof(__u32); + crypto_digest_update(r->pools[dst_pool], sg, 1); + } +DEBUG_ENT("r->pool0_len = %u\n", r->pool0_len); + + + spin_unlock_irqrestore(&r->lock, flags); +DEBUG_ENT("0 add_entropy_words() nwords=%u pool[i].bytes=%u total=%u\n", + nwords, r->pools_bytes[r->pool_index], totalBytes); +} + +/* + * Credit (or debit) the entropy store with n bits of entropy + */ +static void credit_entropy_store(struct entropy_store *r, int nbits) +{ + unsigned long flags; + + spin_lock_irqsave(&r->lock, flags); + + if (r->entropy_count + nbits < 0) { + DEBUG_ENT("negative entropy/overflow (%d+%d)\n", + r->entropy_count, nbits); + r->entropy_count = 0; + } else if (r->entropy_count + nbits > POOLBITS) { + r->entropy_count = POOLBITS; + } else { + r->entropy_count += nbits; + if (nbits) + DEBUG_ENT("%04d : added %d bits\n", + r->entropy_count, + nbits); + } + + spin_unlock_irqrestore(&r->lock, flags); +} + +/********************************************************************** + * + * Entropy batch input management 
+ * + * We batch entropy to be added to avoid increasing interrupt latency + * + **********************************************************************/ + +struct sample { + __u32 data[2]; + int credit; +}; + +static struct sample *batch_entropy_pool, *batch_entropy_copy; +static int batch_head, batch_tail; +static spinlock_t batch_lock = SPIN_LOCK_UNLOCKED; + +static int batch_max; +static void batch_entropy_process(void *private_); +static DECLARE_WORK(batch_work, batch_entropy_process, NULL); + +/* note: the size must be a power of 2 */ +static int __init batch_entropy_init(int size, struct entropy_store *r) +{ + batch_entropy_pool = kmalloc(size*sizeof(struct sample), GFP_KERNEL); + if (!batch_entropy_pool) + return -1; + batch_entropy_copy = kmalloc(size*sizeof(struct sample), GFP_KERNEL); + if (!batch_entropy_copy) { + kfree(batch_entropy_pool); + return -1; + } + batch_head = batch_tail = 0; + batch_work.data = r; + batch_max = size; + return 0; +} + +/* + * Changes to the entropy data is put into a queue rather than being added to + * the entropy counts directly. This is presumably to avoid doing heavy + * hashing calculations during an interrupt in add_timer_randomness(). + * Instead, the entropy is only added to the pool by keventd. 
+ */ +void batch_entropy_store(u32 a, u32 b, int num) +{ + int new; + unsigned long flags; + + if (!batch_max) + return; + + spin_lock_irqsave(&batch_lock, flags); + + batch_entropy_pool[batch_head].data[0] = a; + batch_entropy_pool[batch_head].data[1] = b; + batch_entropy_pool[batch_head].credit = num; + + if (((batch_head - batch_tail) & (batch_max-1)) >= (batch_max / 2)) { + /* + * Schedule it for the next timer tick: + */ + schedule_delayed_work(&batch_work, 1); + } + + new = (batch_head+1) & (batch_max-1); + if (new == batch_tail) { + DEBUG_ENT("batch entropy buffer full\n"); + } else { + batch_head = new; + } + + spin_unlock_irqrestore(&batch_lock, flags); +} + +EXPORT_SYMBOL(batch_entropy_store); + +/* + * Flush out the accumulated entropy operations, adding entropy to the passed + * store (normally random_state). If that store has enough entropy, alternate + * between randomizing the data of the primary and secondary stores. + */ +static void batch_entropy_process(void *private_) +{ + int max_entropy = POOLBITS; + unsigned head, tail; + + /* Mixing into the pool is expensive, so copy over the batch + * data and release the batch lock. The pool is at least half + * full, so don't worry too much about copying only the used + * part. + */ + spin_lock_irq(&batch_lock); + + memcpy(batch_entropy_copy, batch_entropy_pool, + batch_max*sizeof(struct sample)); + + head = batch_head; + tail = batch_tail; + batch_tail = batch_head; + + spin_unlock_irq(&batch_lock); + + while (head != tail) { + if (random_state->entropy_count >= max_entropy) { + max_entropy = POOLBITS; + } + /* + * Only credit if we're feeding into pool[0] + * Otherwise we'd be assuming entropy in pool[31] would be + * usable when we read. This is conservative, but it'll + * not over-credit our entropy estimate for users of + * /dev/random, /dev/urandom will not be effected. 
+ */ + if (random_state->pool_index == 0) { + credit_entropy_store(random_state, + batch_entropy_copy[tail].credit); + } + add_entropy_words(random_state, + batch_entropy_copy[tail].data, 2, -1); +; + + tail = (tail+1) & (batch_max-1); + } + if (random_state->entropy_count >= random_read_wakeup_thresh + && random_state->reseed_count != 0) + wake_up_interruptible(&random_read_wait); +} + +/********************************************************************* + * + * Entropy input management + * + *********************************************************************/ + +/* There is one of these per entropy source */ +struct timer_rand_state { + __u32 last_time; + __s32 last_delta,last_delta2; + int dont_count_entropy:1; +}; + +static struct timer_rand_state keyboard_timer_state; +static struct timer_rand_state mouse_timer_state; +static struct timer_rand_state extract_timer_state; +static struct timer_rand_state *irq_timer_state[NR_IRQS]; + +/* + * This function adds entropy to the entropy "pool" by using timing + * delays. It uses the timer_rand_state structure to make an estimate + * of how many bits of entropy this call has added to the pool. + * + * The number "num" is also added to the pool - it should somehow describe + * the type of event which just happened. This is currently 0-255 for + * keyboard scan codes, and 256 upwards for interrupts. + * On the i386, this is assumed to be at most 16 bits, and the high bits + * are used for a high-resolution timer. 
+ * + */ +static void add_timer_randomness(struct timer_rand_state *state, unsigned num) +{ + __u32 time; + __s32 delta, delta2, delta3; + int entropy = 0; + + /* if over the trickle threshold, use only 1 in 4096 samples */ + if ( random_state->entropy_count > trickle_thresh && + (__get_cpu_var(trickle_count)++ & 0xfff)) + return; + +#if defined (__i386__) || defined (__x86_64__) + if (cpu_has_tsc) { + __u32 high; + rdtsc(time, high); + num ^= high; + } else { + time = jiffies; + } +#elif defined (__sparc_v9__) + unsigned long tick = tick_ops->get_tick(); + + time = (unsigned int) tick; + num ^= (tick >> 32UL); +#else + time = jiffies; +#endif + + /* + * Calculate number of bits of randomness we probably added. + * We take into account the first, second and third-order deltas + * in order to make our estimate. + */ + if (!state->dont_count_entropy) { + delta = time - state->last_time; + state->last_time = time; + + delta2 = delta - state->last_delta; + state->last_delta = delta; + + delta3 = delta2 - state->last_delta2; + state->last_delta2 = delta2; + + if (delta < 0) + delta = -delta; + if (delta2 < 0) + delta2 = -delta2; + if (delta3 < 0) + delta3 = -delta3; + if (delta > delta2) + delta = delta2; + if (delta > delta3) + delta = delta3; + + /* + * delta is now minimum absolute delta. + * Round down by 1 bit on general principles, + * and limit entropy entimate to 12 bits. 
+ */ + delta >>= 1; + delta &= (1 << 12) - 1; + + entropy = int_ln_12bits(delta); + } + batch_entropy_store(num, time, entropy); +} + +void add_keyboard_randomness(unsigned char scancode) +{ + static unsigned char last_scancode; + /* ignore autorepeat (multiple key down w/o key up) */ + if (scancode != last_scancode) { + last_scancode = scancode; + add_timer_randomness(&keyboard_timer_state, scancode); + } +} + +EXPORT_SYMBOL(add_keyboard_randomness); + +void add_mouse_randomness(__u32 mouse_data) +{ + add_timer_randomness(&mouse_timer_state, mouse_data); +} + +EXPORT_SYMBOL(add_mouse_randomness); + +void add_interrupt_randomness(int irq) +{ + if (irq >= NR_IRQS || irq_timer_state[irq] == 0) + return; + + add_timer_randomness(irq_timer_state[irq], 0x100+irq); +} + +EXPORT_SYMBOL(add_interrupt_randomness); + +void add_disk_randomness(struct gendisk *disk) +{ + if (!disk || !disk->random) + return; + /* first major is 1, so we get >= 0x200 here */ + add_timer_randomness(disk->random, + 0x100+MKDEV(disk->major, disk->first_minor)); +} + +EXPORT_SYMBOL(add_disk_randomness); + +/********************************************************************* + * + * Entropy extraction routines + * + *********************************************************************/ + +#define EXTRACT_ENTROPY_USER 1 +#define EXTRACT_ENTROPY_LIMIT 4 + +static ssize_t extract_entropy(struct entropy_store *r, void * buf, + size_t nbytes, int flags); + +static inline void increment_iv(unsigned char *iv, const unsigned int IVsize) { + switch (IVsize) { + case 8: + if (++((u32*)iv)[0]) + ++((u32*)iv)[1]; + break; + + case 16: + if (++((u32*)iv)[0]) + if (++((u32*)iv)[1]) + if (++((u32*)iv)[2]) + ++((u32*)iv)[3]; + break; + + default: + { + int i; + for (i=0; i<IVsize; i++) + if (++iv[i]) + break; + } + break; + } +} + +/* + * Fortuna's Reseed + * + * Key' = hash(Key || hash(pool[a0]) || hash(pool[a1]) || ...) + * where {a0,a1,...} are facators of r->reseed_count+1 which are of the form + * 2^j, 0<=j. 
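[Editorial aside, not part of the patch: the delta-based credit logic in add_timer_randomness() can be modeled in isolation. In this userspace sketch the struct and function names are invented for illustration, and the TSC/jiffies sampling is replaced by a caller-supplied timestamp. It shows why a perfectly periodic source quickly earns zero credit: its second-order delta collapses to zero.]

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical standalone model of the estimator in add_timer_randomness() */
struct delta_state {
	uint32_t last_time;
	int32_t last_delta, last_delta2;
};

static uint32_t ilog2_12(uint32_t word)
{
	uint32_t nbits = 0;

	while (word >>= 1)
		nbits++;
	return nbits;
}

/* Credit = log2 of the minimum absolute first/second/third-order delta,
 * rounded down by one bit and capped at 12 bits, as in the patch. */
static int estimate_entropy(struct delta_state *s, uint32_t time)
{
	int32_t delta, delta2, delta3;

	delta = time - s->last_time;
	s->last_time = time;

	delta2 = delta - s->last_delta;
	s->last_delta = delta;

	delta3 = delta2 - s->last_delta2;
	s->last_delta2 = delta2;

	delta = abs(delta);
	delta2 = abs(delta2);
	delta3 = abs(delta3);
	if (delta > delta2)
		delta = delta2;
	if (delta > delta3)
		delta = delta3;

	delta >>= 1;             /* round down by 1 bit on general principles */
	delta &= (1 << 12) - 1;  /* limit the estimate to 12 bits */
	return (int)ilog2_12((uint32_t)delta);
}
```

Feeding events at a fixed 100-tick period, the first sample (with no history) is credited generously, but from the second sample on the second-order delta is zero and so is the credit.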
+ * Prevents backtracking attacks and, with event inputs, supports forward + * secrecy + */ +static void random_reseed(struct entropy_store *r, size_t nbytes, int flags) { + struct scatterlist sg[1]; + unsigned int i, deduct; + unsigned char tmp[RANDOM_MAX_DIGEST_SIZE]; + unsigned long cpuflags; + + deduct = (r->keysize < r->digestsize) ? r->keysize : r->digestsize; + + /* Hold lock while accounting */ + spin_lock_irqsave(&r->lock, cpuflags); + + DEBUG_ENT("%04d : trying to extract %d bits\n", + random_state->entropy_count, + deduct * 8); + + /* + * Don't extract more data than the entropy in the pooling system + */ + if (flags & EXTRACT_ENTROPY_LIMIT && nbytes >= r->entropy_count / 8) { + nbytes = r->entropy_count / 8; + } + + if (deduct*8 <= r->entropy_count) { + r->entropy_count -= deduct*8; + } else { + r->entropy_count = 0; + } + + if (r->entropy_count < random_write_wakeup_thresh) + wake_up_interruptible(&random_write_wait); + + DEBUG_ENT("%04d : debiting %d bits%s\n", + random_state->entropy_count, + deduct * 8, + flags & EXTRACT_ENTROPY_LIMIT ? "" : " (unlimited)"); + + r->reseed_count++; + r->pool0_len = 0; + + /* Entropy accounting done, release lock. 
*/ + spin_unlock_irqrestore(&r->lock, cpuflags); + + DEBUG_ENT("random_reseed count=%u\n", r->reseed_count); + + crypto_digest_init(r->reseedHash); + + sg[0].page = virt_to_page(r->key); + sg[0].offset = offset_in_page(r->key); + sg[0].length = r->keysize; + crypto_digest_update(r->reseedHash, sg, 1); + +#define TESTBIT(VAL, N)\ + ( ((VAL) >> (N)) & 1 ) + for (i=0; i<(1<<r->pool_number); i++) { + /* using pool[i] if r->reseed_count is divisible by 2^i + * since 2^0 == 1, we always use pool[0] + */ + if ( (i==0) || TESTBIT(r->reseed_count,i)==0 ) { + crypto_digest_final(r->pools[i], tmp); + + sg[0].page = virt_to_page(tmp); + sg[0].offset = offset_in_page(tmp); + sg[0].length = r->keysize; + crypto_digest_update(r->reseedHash, sg, 1); + + crypto_digest_init(r->pools[i]); + /* Each pool carries its past state forward */ + crypto_digest_update(r->pools[i], sg, 1); + } else { + /* pool j is only used once every 2^j times */ + break; + } + } +#undef TESTBIT + + crypto_digest_final(r->reseedHash, r->key); + crypto_cipher_setkey(r->cipher, r->key, r->keysize); + increment_iv(r->iv, r->blocksize); +} + +static inline time_t get_msectime(void) { + struct timeval tv; + do_gettimeofday(&tv); + return (tv.tv_sec * 1000) + (tv.tv_usec / 1000); +} + +/* + * This function extracts randomness from the "entropy pool", and + * returns it in a buffer. This function computes how many remaining + * bits of entropy are left in the pool, but it does not restrict the + * number of bytes that are actually obtained. If the EXTRACT_ENTROPY_USER + * flag is given, then the buf pointer is assumed to be in user space. + */ +static ssize_t extract_entropy(struct entropy_store *r, void * buf, + size_t nbytes, int flags) +{ + ssize_t ret, i; + __u32 tmp[RANDOM_MAX_BLOCK_SIZE]; + struct scatterlist sgiv[1], sgtmp[1]; + time_t nowtime; + + /* Redundant, but just in case... 
*/ + if (r->entropy_count > POOLBITS) + r->entropy_count = POOLBITS; + + /* + * To keep the possibility of collisions down, limit the number of + * output bytes per block cipher key. + */ + if (RANDOM_MAX_EXTRACT_SIZE < nbytes) + nbytes = RANDOM_MAX_EXTRACT_SIZE; + + if (flags & EXTRACT_ENTROPY_LIMIT) { + /* if blocking, only output up to the entropy estimate */ + if (r->entropy_count/8 < nbytes) + nbytes = r->entropy_count/8; + /* + * if blocking and there is no entropy by our estimate, + * break out now. + */ + if (nbytes == 0) + return 0; + } + + /* + * If reading in non-blocking mode, pace ourselves in using up the pool + * system's entropy. + */ + if (! (flags & EXTRACT_ENTROPY_LIMIT) ) { + nowtime = get_msectime(); + if (r->pool0_len > 64 + && (nowtime - r->reseed_time) > RANDOM_RESEED_INTERVAL) { + random_reseed(r, nbytes, flags); + r->reseed_time = nowtime; + } + } + + sgiv[0].page = virt_to_page(r->iv); + sgiv[0].offset = offset_in_page(r->iv); + sgiv[0].length = r->blocksize; + sgtmp[0].page = virt_to_page(tmp); + sgtmp[0].offset = offset_in_page(tmp); + sgtmp[0].length = r->blocksize; + + ret = 0; + while (nbytes) { + /* + * Check if we need to break out or reschedule.... + */ + if ((flags & EXTRACT_ENTROPY_USER) && need_resched()) { + if (signal_pending(current)) { + if (ret == 0) + ret = -ERESTARTSYS; + break; + } + + DEBUG_ENT("%04d : extract sleeping (%d bytes left)\n", + random_state->entropy_count, + nbytes); + + schedule(); + + /* + * when we wake up, there will be more data in our + * pooling system so we will reseed + */ + nowtime = get_msectime(); + if (r->pool0_len > 64 + && (nowtime-r->reseed_time) > RANDOM_RESEED_INTERVAL) { + random_reseed(r, nbytes, flags); + r->reseed_time = nowtime; + } + + DEBUG_ENT("%04d : extract woke up\n", + random_state->entropy_count); + } + + /* + * Reading from /dev/random, we limit this to the amount + * of entropy to deduct from our estimate. 
This estimate is + * most naturally updated from inside Fortuna-reseed, so we + * limit our block size here. + * + * At most, Fortuna will use e=min(r->digestsize, r->keysize) of + * entropy to reseed. + */ + if (flags & EXTRACT_ENTROPY_LIMIT) { + r->reseed_time = get_msectime(); + random_reseed(r, nbytes, flags); + } + + crypto_cipher_encrypt(r->cipher, sgtmp, sgiv, r->blocksize); + increment_iv(r->iv, r->blocksize); + + /* Copy data to destination buffer */ + i = (nbytes < r->blocksize) ? nbytes : r->blocksize; + if (flags & EXTRACT_ENTROPY_USER) { + i -= copy_to_user(buf, (__u8 const *)tmp, i); + if (!i) { + ret = -EFAULT; + break; + } + } else + memcpy(buf, (__u8 const *)tmp, i); + nbytes -= i; + buf += i; + ret += i; + } + + /* generate a new key */ + /* take into account the possibility that keysize >= blocksize */ + for (i=0; i+r->blocksize<=r->keysize; i+=r->blocksize) { + sgtmp[0].page = virt_to_page( r->key+i ); + sgtmp[0].offset = offset_in_page( r->key+i ); + sgtmp[0].length = r->blocksize; + crypto_cipher_encrypt(r->cipher, sgtmp, sgiv, 1); + increment_iv(r->iv, r->blocksize); + } + sgtmp[0].page = virt_to_page( r->key+i ); + sgtmp[0].offset = offset_in_page( r->key+i ); + sgtmp[0].length = r->blocksize-i; + crypto_cipher_encrypt(r->cipher, sgtmp, sgiv, 1); + increment_iv(r->iv, r->blocksize); + + if (crypto_cipher_setkey(r->cipher, r->key, r->keysize)) { + return -EINVAL; + } + + /* Wipe data just returned from memory */ + memset(tmp, 0, sizeof(tmp)); + + return ret; +} + +/* + * This function is the exported kernel interface. It returns some + * number of good random numbers, suitable for seeding TCP sequence + * numbers, etc. 
+ */ +void get_random_bytes(void *buf, int nbytes) +{ + if (random_state) + extract_entropy(random_state, (char *) buf, nbytes, 0); + else + printk(KERN_NOTICE "get_random_bytes called before " + "random driver initialization\n"); +} + +EXPORT_SYMBOL(get_random_bytes); + +/********************************************************************* + * + * Functions to interface with Linux + * + *********************************************************************/ + +/* + * Initialize the random pool with standard stuff. + * This is not secure random data, but it can't hurt us and people scream + * when you try to remove it. + * + * NOTE: This is an OS-dependent function. + */ +static void init_std_data(struct entropy_store *r) +{ + struct timeval tv; + __u32 words[2]; + char *p; + int i; + + do_gettimeofday(&tv); + words[0] = tv.tv_sec; + words[1] = tv.tv_usec; + add_entropy_words(r, words, 2, -1); + + /* + * This doesn't lock system.utsname. However, we are generating + * entropy so a race with a name set here is fine. 
+ */ + p = (char *) &system_utsname; + for (i = sizeof(system_utsname) / sizeof(words); i; i--) { + memcpy(words, p, sizeof(words)); + add_entropy_words(r, words, sizeof(words)/4, -1); + p += sizeof(words); + } +} + +static int __init rand_initialize(void) +{ + int i; + + if (create_entropy_store(DEFAULT_POOL_NUMBER, &random_state)) + goto err; + if (batch_entropy_init(BATCH_ENTROPY_SIZE, random_state)) + goto err; + init_std_data(random_state); +#ifdef CONFIG_SYSCTL + sysctl_init_random(random_state); +#endif + for (i = 0; i < NR_IRQS; i++) + irq_timer_state[i] = NULL; + memset(&keyboard_timer_state, 0, sizeof(struct timer_rand_state)); + memset(&mouse_timer_state, 0, sizeof(struct timer_rand_state)); + memset(&extract_timer_state, 0, sizeof(struct timer_rand_state)); + extract_timer_state.dont_count_entropy = 1; + return 0; +err: + return -1; +} +module_init(rand_initialize); + +void rand_initialize_irq(int irq) +{ + struct timer_rand_state *state; + + if (irq >= NR_IRQS || irq_timer_state[irq]) + return; + + /* + * If kmalloc returns null, we just won't use that entropy + * source. + */ + state = kmalloc(sizeof(struct timer_rand_state), GFP_KERNEL); + if (state) { + memset(state, 0, sizeof(struct timer_rand_state)); + irq_timer_state[irq] = state; + } +} + +void rand_initialize_disk(struct gendisk *disk) +{ + struct timer_rand_state *state; + + /* + * If kmalloc returns null, we just won't use that entropy + * source. 
+ */ + state = kmalloc(sizeof(struct timer_rand_state), GFP_KERNEL); + if (state) { + memset(state, 0, sizeof(struct timer_rand_state)); + disk->random = state; + } +} + +static ssize_t +random_read(struct file * file, char __user * buf, size_t nbytes, loff_t *ppos) +{ + DECLARE_WAITQUEUE(wait, current); + ssize_t n, retval = 0, count = 0; + + if (nbytes == 0) + return 0; + + while (nbytes > 0) { + n = nbytes; + + DEBUG_ENT("%04d : reading %d bits, p: %d s: %d\n", + random_state->entropy_count, + n*8, random_state->entropy_count, + random_state->entropy_count); + + n = extract_entropy(random_state, buf, n, + EXTRACT_ENTROPY_USER | + EXTRACT_ENTROPY_LIMIT); + + DEBUG_ENT("%04d : read got %d bits (%d needed, reseeds=%d)\n", + random_state->entropy_count, + random_state->reseed_count, + n*8, (nbytes-n)*8); + + if (n == 0) { + if (file->f_flags & O_NONBLOCK) { + retval = -EAGAIN; + break; + } + if (signal_pending(current)) { + retval = -ERESTARTSYS; + break; + } + + DEBUG_ENT("%04d : sleeping?\n", + random_state->entropy_count); + + set_current_state(TASK_INTERRUPTIBLE); + add_wait_queue(&random_read_wait, &wait); + + if (random_state->entropy_count / 8 == 0 + || random_state->reseed_count == 0) + schedule(); + + set_current_state(TASK_RUNNING); + remove_wait_queue(&random_read_wait, &wait); + + DEBUG_ENT("%04d : waking up\n", + random_state->entropy_count); + + continue; + } + + if (n < 0) { + retval = n; + break; + } + count += n; + buf += n; + nbytes -= n; + break; /* This break makes the device work */ + /* like a named pipe */ + } + + /* + * If we gave the user some bytes, update the access time. + */ + if (count) + file_accessed(file); + + return (count ? 
count : retval); +} + +static ssize_t +urandom_read(struct file * file, char __user * buf, + size_t nbytes, loff_t *ppos) +{ + /* Don't return anything until we've reseeded at least once */ + if (random_state->reseed_count == 0) + return 0; + + return extract_entropy(random_state, buf, nbytes, + EXTRACT_ENTROPY_USER); +} + +static unsigned int +random_poll(struct file *file, poll_table * wait) +{ + unsigned int mask; + + poll_wait(file, &random_read_wait, wait); + poll_wait(file, &random_write_wait, wait); + mask = 0; + if (random_state->entropy_count >= random_read_wakeup_thresh) + mask |= POLLIN | POLLRDNORM; + if (random_state->entropy_count < random_write_wakeup_thresh) + mask |= POLLOUT | POLLWRNORM; + return mask; +} + +static ssize_t +random_write(struct file * file, const char __user * buffer, + size_t count, loff_t *ppos) +{ + static int idx = 0; + int ret = 0; + size_t bytes; + __u32 buf[16]; + const char __user *p = buffer; + size_t c = count; + + while (c > 0) { + bytes = min(c, sizeof(buf)); + + bytes -= copy_from_user(&buf, p, bytes); + if (!bytes) { + ret = -EFAULT; + break; + } + c -= bytes; + p += bytes; + + /* + * Input data rotates through the pools independently of + * system events. 
+ * + * idx = (idx + 1) mod 2^N + */ + idx = (idx + 1) & ((1<<random_state->pool_number)-1); + add_entropy_words(random_state, buf, bytes, idx); + } + if (p == buffer) { + return (ssize_t)ret; + } else { + file->f_dentry->d_inode->i_mtime = CURRENT_TIME; + mark_inode_dirty(file->f_dentry->d_inode); + return (ssize_t)(p - buffer); + } +} + +static int +random_ioctl(struct inode * inode, struct file * file, + unsigned int cmd, unsigned long arg) +{ + int size, ent_count; + int __user *p = (int __user *)arg; + int retval; + + switch (cmd) { + case RNDGETENTCNT: + ent_count = random_state->entropy_count; + if (put_user(ent_count, p)) + return -EFAULT; + return 0; + case RNDADDTOENTCNT: + if (!capable(CAP_SYS_ADMIN)) + return -EPERM; + if (get_user(ent_count, p)) + return -EFAULT; + credit_entropy_store(random_state, ent_count); + /* + * Wake up waiting processes if we have enough + * entropy. + */ + if (random_state->entropy_count >= random_read_wakeup_thresh + && random_state->reseed_count != 0) + wake_up_interruptible(&random_read_wait); + return 0; + case RNDGETPOOL: + /* can't do this anymore */ + return 0; + case RNDADDENTROPY: + if (!capable(CAP_SYS_ADMIN)) + return -EPERM; + if (get_user(ent_count, p++)) + return -EFAULT; + if (ent_count < 0) + return -EINVAL; + if (get_user(size, p++)) + return -EFAULT; + retval = random_write(file, (const char __user *) p, + size, &file->f_pos); + if (retval < 0) + return retval; + credit_entropy_store(random_state, ent_count); + /* + * Wake up waiting processes if we have enough + * entropy. 
*/ + if (random_state->entropy_count >= random_read_wakeup_thresh + && random_state->reseed_count != 0) + wake_up_interruptible(&random_read_wait); + return 0; + case RNDZAPENTCNT: + /* Can't do this anymore */ + return 0; + case RNDCLEARPOOL: + /* Can't do this anymore */ + return 0; + default: + return -EINVAL; + } +} + +struct file_operations random_fops = { + .read = random_read, + .write = random_write, + .poll = random_poll, + .ioctl = random_ioctl, +}; + +struct file_operations urandom_fops = { + .read = urandom_read, + .write = random_write, + .ioctl = random_ioctl, +}; + +/*************************************************************** + * Random UUID interface + * + * Used here for a Boot ID, but can be useful for other kernel + * drivers. + ***************************************************************/ + +/* + * Generate random UUID + */ +void generate_random_uuid(unsigned char uuid_out[16]) +{ + get_random_bytes(uuid_out, 16); + /* Set UUID version to 4 --- truly random generation */ + uuid_out[6] = (uuid_out[6] & 0x0F) | 0x40; + /* Set the UUID variant to DCE */ + uuid_out[8] = (uuid_out[8] & 0x3F) | 0x80; +} + +EXPORT_SYMBOL(generate_random_uuid); + +/******************************************************************** + * + * Sysctl interface + * + ********************************************************************/ + +#ifdef CONFIG_SYSCTL + +#include <linux/sysctl.h> + +static int sysctl_poolsize; +static int min_read_thresh, max_read_thresh; +static int min_write_thresh, max_write_thresh; +static char sysctl_bootid[16]; + +static int proc_do_poolsize(ctl_table *table, int write, struct file *filp, + void __user *buffer, size_t *lenp, loff_t *ppos) +{ + int ret; + + sysctl_poolsize = POOLBITS; + + ret = proc_dointvec(table, write, filp, buffer, lenp, ppos); + if (ret || !write || + (sysctl_poolsize == POOLBITS)) + return ret; + + return ret; /* can't change the pool size in fortuna */ +} + +static int poolsize_strategy(ctl_table *table, int 
__user *name, int nlen, + void __user *oldval, size_t __user *oldlenp, + void __user *newval, size_t newlen, void **context) +{ + int len; + + sysctl_poolsize = POOLBITS; + + /* + * We only handle the write case, since the read case gets + * handled by the default handler (and we don't care if the + * write case happens twice; it's harmless). + */ + if (newval && newlen) { + len = newlen; + if (len > table->maxlen) + len = table->maxlen; + if (copy_from_user(table->data, newval, len)) + return -EFAULT; + } + + return 0; +} + +/* + * This function is used to return both the bootid UUID and a random + * UUID. The difference is in whether table->data is NULL; if it is, + * then a new UUID is generated and returned to the user. + * + * If the user accesses this via the proc interface, it will be returned + * as an ASCII string in the standard UUID format. If accessed via the + * sysctl system call, it is returned as 16 bytes of binary data. + */ +static int proc_do_uuid(ctl_table *table, int write, struct file *filp, + void __user *buffer, size_t *lenp, loff_t *ppos) +{ + ctl_table fake_table; + unsigned char buf[64], tmp_uuid[16], *uuid; + + uuid = table->data; + if (!uuid) { + uuid = tmp_uuid; + uuid[8] = 0; + } + if (uuid[8] == 0) + generate_random_uuid(uuid); + + sprintf(buf, "%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x%02x-" + "%02x%02x%02x%02x%02x%02x", + uuid[0], uuid[1], uuid[2], uuid[3], + uuid[4], uuid[5], uuid[6], uuid[7], + uuid[8], uuid[9], uuid[10], uuid[11], + uuid[12], uuid[13], uuid[14], uuid[15]); + fake_table.data = buf; + fake_table.maxlen = sizeof(buf); + + return proc_dostring(&fake_table, write, filp, buffer, lenp, ppos); +} + +static int uuid_strategy(ctl_table *table, int __user *name, int nlen, + void __user *oldval, size_t __user *oldlenp, + void __user *newval, size_t newlen, void **context) +{ + unsigned char tmp_uuid[16], *uuid; + unsigned int len; + + if (!oldval || !oldlenp) + return 1; + + uuid = table->data; + if (!uuid) { + uuid = 
tmp_uuid; + uuid[8] = 0; + } + if (uuid[8] == 0) + generate_random_uuid(uuid); + + if (get_user(len, oldlenp)) + return -EFAULT; + if (len) { + if (len > 16) + len = 16; + if (copy_to_user(oldval, uuid, len) || + put_user(len, oldlenp)) + return -EFAULT; + } + return 1; +} + +ctl_table random_table[] = { + { + .ctl_name = RANDOM_POOLSIZE, + .procname = "poolsize", + .data = &sysctl_poolsize, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = &proc_do_poolsize, + .strategy = &poolsize_strategy, + }, + { + .ctl_name = RANDOM_ENTROPY_COUNT, + .procname = "entropy_avail", + .maxlen = sizeof(int), + .mode = 0444, + .proc_handler = &proc_dointvec, + }, + { + .ctl_name = RANDOM_READ_THRESH, + .procname = "read_wakeup_threshold", + .data = &random_read_wakeup_thresh, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = &proc_dointvec_minmax, + .strategy = &sysctl_intvec, + .extra1 = &min_read_thresh, + .extra2 = &max_read_thresh, + }, + { + .ctl_name = RANDOM_WRITE_THRESH, + .procname = "write_wakeup_threshold", + .data = &random_write_wakeup_thresh, + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = &proc_dointvec_minmax, + .strategy = &sysctl_intvec, + .extra1 = &min_write_thresh, + .extra2 = &max_write_thresh, + }, + { + .ctl_name = RANDOM_BOOT_ID, + .procname = "boot_id", + .data = &sysctl_bootid, + .maxlen = 16, + .mode = 0444, + .proc_handler = &proc_do_uuid, + .strategy = &uuid_strategy, + }, + { + .ctl_name = RANDOM_UUID, + .procname = "uuid", + .maxlen = 16, + .mode = 0444, + .proc_handler = &proc_do_uuid, + .strategy = &uuid_strategy, + }, + { + .ctl_name = RANDOM_DIGEST_ALGO, + .procname = "digest_algo", + .maxlen = 16, + .mode = 0444, + .proc_handler = &proc_dostring, + }, + { + .ctl_name = RANDOM_CIPHER_ALGO, + .procname = "cipher_algo", + .maxlen = 16, + .mode = 0444, + .proc_handler = &proc_dostring, + }, + { .ctl_name = 0 } +}; + +static void sysctl_init_random(struct entropy_store *random_state) +{ + int i; + + /* If the sys-admin 
doesn't want people to know how fast + * random events are happening, he can set the read-threshold + * down to zero so /dev/random never blocks. Default is to block. + * This is for the paranoid loonies who think frequency analysis + * would lead to something. + */ + min_read_thresh = 0; + min_write_thresh = 0; + max_read_thresh = max_write_thresh = POOLBITS; + for (i=0; random_table[i].ctl_name!=0; i++) { + switch (random_table[i].ctl_name) { + case RANDOM_ENTROPY_COUNT: + random_table[i].data = &random_state->entropy_count; + break; + + case RANDOM_DIGEST_ALGO: + random_table[i].data = (void*)random_state->digestAlgo; + break; + + case RANDOM_CIPHER_ALGO: + random_table[i].data = (void*)random_state->cipherAlgo; + break; + + default: + break; + } + } +} +#endif /* CONFIG_SYSCTL */ + +/******************************************************************** + * + * Random functions for networking + * + ********************************************************************/ + +/* + * TCP initial sequence number picking. This uses the random number + * generator to pick an initial secret value. This value is encrypted + * with the TCP endpoint information to provide a unique starting point + * for each pair of TCP endpoints. This defeats attacks which rely on + * guessing the initial TCP sequence number. This algorithm was + * suggested by Steve Bellovin, modified by Jean-Luc Cooke. + * + * Using a very strong hash was taking an appreciable amount of the total + * TCP connection establishment time, so this is a weaker hash, + * compensated for by changing the secret periodically. This was changed + * again by Jean-Luc Cooke to use AES256-CBC encryption which is faster + * still (see `/usr/bin/openssl speed md4 sha1 aes`) + */ + +/* This should not be decreased so low that ISNs wrap too fast. 
*/ +#define REKEY_INTERVAL 300 +/* + * Bit layout of the tcp sequence numbers (before adding current time): + * bit 24-31: increased after every key exchange + * bit 0-23: hash(source,dest) + * + * The implementation is similar to the algorithm described + * in the Appendix of RFC 1185, except that + * - it uses a 1 MHz clock instead of a 250 kHz clock + * - it performs a rekey every 5 minutes, which is equivalent + * to a (source,dest) tuple-dependent forward jump of the + * clock by 0..2^(HASH_BITS+1) + * + * Thus the average ISN wraparound time is 68 minutes instead of + * 4.55 hours. + * + * SMP cleanup and lock avoidance with poor man's RCU. + * Manfred Spraul <manfred@colorfullife.com> + * + */ +#define COUNT_BITS 8 +#define COUNT_MASK ( (1<<COUNT_BITS)-1) +#define HASH_BITS 24 +#define HASH_MASK ( (1<<HASH_BITS)-1 ) + +static spinlock_t ip_lock = SPIN_LOCK_UNLOCKED; +static unsigned int ip_cnt, network_count; + +static void __check_and_rekey(time_t time) +{ + u8 tmp[RANDOM_MAX_KEY_SIZE]; + spin_lock_bh(&ip_lock); + + get_random_bytes(tmp, random_state->keysize); + crypto_cipher_setkey(random_state->networkCipher, + (const u8*)tmp, + random_state->keysize); + random_state->networkCipher_ready = 1; + network_count = (ip_cnt & COUNT_MASK) << HASH_BITS; + mb(); + ip_cnt++; + + spin_unlock_bh(&ip_lock); + return; +} + +static inline void check_and_rekey(time_t time) +{ + static time_t rekey_time=0; + + rmb(); + if (!rekey_time || (time - rekey_time) > REKEY_INTERVAL) { + __check_and_rekey(time); + rekey_time = time; + } + + return; +} + +#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE) +__u32 secure_tcpv6_sequence_number(__u32 *saddr, __u32 *daddr, + __u16 sport, __u16 dport) +{ + struct timeval tv; + __u32 seq; + u8 tmp[RANDOM_MAX_BLOCK_SIZE]; + struct scatterlist sgtmp[1]; + + /* + * The procedure is the same as for IPv4, but addresses are longer. + * Thus we must use two AES operations. + */ + + do_gettimeofday(&tv); /* We need the usecs below... 
*/ + check_and_rekey(tv.tv_sec); + + sgtmp[0].page = virt_to_page(tmp); + sgtmp[0].offset = offset_in_page(tmp); + sgtmp[0].length = random_state->blocksize; + + /* + * AES256 is 2.5 times faster than MD4 by openssl tests. + * We can afford to encrypt 2 blocks in CBC with + * an IV={(sport)<<16 | dport, 0, 0, 0} + * + * seq = ct[0], ct = Enc-CBC(Key, {ports}, {daddr, saddr}); + * = Enc(Key, saddr xor Enc(Key, daddr)) + */ + + /* PT0 = daddr */ + memcpy(tmp, daddr, random_state->blocksize); + /* IV = {ports,0,0,0} */ + tmp[0] ^= (sport<<16) | dport; + crypto_cipher_encrypt(random_state->networkCipher, sgtmp, sgtmp, 1); + /* PT1 = saddr */ + random_state->networkCipher->crt_cipher.cit_xor_block(tmp, (const u8*)saddr); + crypto_cipher_encrypt(random_state->networkCipher, sgtmp, sgtmp, 1); + + seq = tmp[0]; + seq += network_count; + seq += tv.tv_usec + tv.tv_sec*1000000; + + return seq; +} +EXPORT_SYMBOL(secure_tcpv6_sequence_number); + +__u32 secure_ipv6_id(__u32 *daddr) +{ + u8 tmp[RANDOM_MAX_BLOCK_SIZE]; + struct scatterlist sgtmp[1]; + + check_and_rekey(get_seconds()); + + memcpy(tmp, daddr, random_state->blocksize); + sgtmp[0].page = virt_to_page(tmp); + sgtmp[0].offset = offset_in_page(tmp); + sgtmp[0].length = random_state->blocksize; + + /* id = tmp[0], tmp = Enc(Key, daddr); */ + crypto_cipher_encrypt(random_state->networkCipher, sgtmp, sgtmp, 1); + + return tmp[0]; +} + +EXPORT_SYMBOL(secure_ipv6_id); +#endif + + +__u32 secure_tcp_sequence_number(__u32 saddr, __u32 daddr, + __u16 sport, __u16 dport) +{ + struct timeval tv; + __u32 seq; + u8 tmp[RANDOM_MAX_BLOCK_SIZE]; + struct scatterlist sgtmp[1]; + + /* + * Pick a random secret every REKEY_INTERVAL seconds. + */ + do_gettimeofday(&tv); /* We need the usecs below... */ + check_and_rekey(tv.tv_sec); + + /* + * Pick a unique starting offset for each set of TCP connection + * endpoints (saddr, daddr, sport, dport). 
+ * Note that the words are placed into the starting vector, which is + * then mixed with a partial MD4 over random data. + */ + /* + * AES256 is 2.5 times faster than MD4 by openssl tests. + * We can afford to encrypt 1 block + * + * seq = ct[0], ct = Enc(Key, {(sport<<16)|dport, daddr, saddr, 0}) + */ + + tmp[0] = (sport<<16) | dport; + tmp[1] = daddr; + tmp[2] = saddr; + tmp[3] = 0; + crypto_cipher_encrypt(random_state->networkCipher, sgtmp, sgtmp, 1); + + seq = tmp[0]; + seq += network_count; + /* + * As close as possible to RFC 793, which + * suggests using a 250 kHz clock. + * Further reading shows this assumes 2 Mb/s networks. + * For 10 Mb/s Ethernet, a 1 MHz clock is appropriate. + * That's funny, Linux has one built in! Use it! + * (Networks are faster now - should this be increased?) + */ + seq += tv.tv_usec + tv.tv_sec*1000000; + +#if 0 + printk("init_seq(%lx, %lx, %d, %d) = %d\n", + saddr, daddr, sport, dport, seq); +#endif + return seq; +} + +EXPORT_SYMBOL(secure_tcp_sequence_number); + +/* The code below is shamelessly stolen from secure_tcp_sequence_number(). + * All blames to Andrey V. Savochkin <saw@msu.ru>. + * Changed by Jean-Luc Cooke <jlcooke@certainkey.com> to use AES & C.A.P.I. + */ +__u32 secure_ip_id(__u32 daddr) +{ + struct scatterlist sgtmp[1]; + u8 tmp[RANDOM_MAX_BLOCK_SIZE]; + + check_and_rekey(get_seconds()); + + /* + * Pick a unique starting offset for each IP destination. + * id = ct[0], ct = Enc(Key, {daddr,0,0,0}); + */ + tmp[0] = daddr; + tmp[1] = 0; + tmp[2] = 0; + tmp[3] = 0; + sgtmp[0].page = virt_to_page(tmp); + sgtmp[0].offset = offset_in_page(tmp); + sgtmp[0].length = random_state->blocksize; + + crypto_cipher_encrypt(random_state->networkCipher, sgtmp, sgtmp, 1); + + return tmp[0]; +} + +#ifdef CONFIG_SYN_COOKIES +/* + * Secure SYN cookie computation. This is the algorithm worked out by + * Dan Bernstein and Eric Schenk. + * + * For Linux I implement the 1 minute counter by looking at the jiffies clock. 
+ * The count is passed in as a parameter, so this code doesn't much care. + * + * SYN cookie (and seq# & id#) changed in 2004 by Jean-Luc Cooke + * <jlcooke@certainkey.com> to use the C.A.P.I. and AES256. + */ + +#define COOKIEBITS 24 /* Upper bits store count */ +#define COOKIEMASK (((__u32)1 << COOKIEBITS) - 1) + +__u32 secure_tcp_syn_cookie(__u32 saddr, __u32 daddr, __u16 sport, + __u16 dport, __u32 sseq, __u32 count, __u32 data) +{ + struct scatterlist sg[1]; + __u32 tmp[4]; + + /* + * Compute the secure sequence number. + * + * Output is the 32bit tag of a CBC-MAC of + * PT={count,0,0,0} with IV={saddr,daddr,sport|dport,sseq} + * cookie = {<8bit count>, + * truncate_24bit( + * Encrypt(Sec, {saddr,daddr,sport|dport,sseq}) + * ) + * } + * + * DJB wrote (http://cr.yp.to/syncookies/archive) about how to do this + * with hash algorithms. + * - we can replace two SHA1s used in the previous kernel with 1 AES + * and make things 5x faster + * - I'd like to propose we replace the two whitenings with a + * single operation since we were only using addition modulo 2^32 of + * all these values anyway. Not to mention the hashes differ only in + * that the second processes more data... why not drop the first hash? + * We did learn that addition is commutative and associative long ago. + * - by replacing two SHA1s and addition modulo 2^32 with encryption of + * a 32bit value using CAPI we've made it 1,000,000,000 times easier + * to understand what is going on. 
*/ + + tmp[0] = saddr; + tmp[1] = daddr; + tmp[2] = (sport << 16) + dport; + tmp[3] = sseq; + + sg[0].page = virt_to_page(tmp); + sg[0].offset = offset_in_page(tmp); + sg[0].length = random_state->blocksize; + if (!random_state->networkCipher_ready) { + check_and_rekey(get_seconds()); + } + /* tmp[]/sg[0] = Enc(Sec, {saddr,daddr,sport|dport,sseq}) */ + crypto_cipher_encrypt(random_state->networkCipher, sg, sg, 1); + + /* cookie = CTR encrypt of 8-bit-count and 24-bit-data */ + return tmp[0] ^ ( (count << COOKIEBITS) | (data & COOKIEMASK) ); +} + +/* + * This retrieves the small "data" value from the syncookie. + * If the syncookie is bad, the data returned will be out of + * range. This must be checked by the caller. + * + * The count value used to generate the cookie must be within + * "maxdiff" of the current (passed-in) "count". The return value + * is (__u32)-1 if this test fails. + */ +__u32 check_tcp_syn_cookie(__u32 cookie, __u32 saddr, __u32 daddr, __u16 sport, + __u16 dport, __u32 sseq, __u32 count, __u32 maxdiff) +{ + struct scatterlist sg[1]; + __u32 tmp[4], thiscount, diff; + + if (random_state == NULL || !random_state->networkCipher_ready) + return (__u32)-1; /* Well, duh! */ + + tmp[0] = saddr; + tmp[1] = daddr; + tmp[2] = (sport << 16) + dport; + tmp[3] = sseq; + sg[0].page = virt_to_page(tmp); + sg[0].offset = offset_in_page(tmp); + sg[0].length = random_state->blocksize; + crypto_cipher_encrypt(random_state->networkCipher, sg, sg, 1); + + /* CTR decrypt the cookie */ + cookie ^= tmp[0]; + + /* top 8 bits are 'count' */ + thiscount = cookie >> COOKIEBITS; + + diff = count - thiscount; + if (diff >= maxdiff) + return (__u32)-1; + + /* bottom 24 bits are 'data' */ + return cookie & COOKIEMASK; +} +#endif
* Re: [PROPOSAL/PATCH 2] Fortuna PRNG in /dev/random 2004-09-30 4:23 ` Jean-Luc Cooke 2004-09-30 6:50 ` James Morris 2004-09-30 9:03 ` Felipe Alfaro Solana @ 2004-09-30 10:46 ` Jan-Benedict Glaw 2 siblings, 0 replies; 28+ messages in thread From: Jan-Benedict Glaw @ 2004-09-30 10:46 UTC (permalink / raw) To: Jean-Luc Cooke; +Cc: Theodore Ts'o, linux, linux-kernel, cryptoapi, jmorris On Thu, 2004-09-30 00:23:03 -0400, Jean-Luc Cooke <jlcooke@certainkey.com> wrote in message <20040930042303.GS16057@certainkey.com>: > --- linux-2.6.8.1/crypto/Kconfig 2004-08-14 06:56:22.000000000 -0400 > +++ linux-2.6.8.1-rand2/crypto/Kconfig 2004-09-28 23:30:04.000000000 -0400 > @@ -9,6 +9,15 @@ > help > This option provides the core Cryptographic API. > > +config CRYPTO_RANDOM_FORTUNA > + bool "The Fortuna RNG" > + help > + Replaces the legacy Linux RNG with one using the crypto API > + and Fortuna by Ferguson and Schneier. Entropy estimation, and > + a throttled /dev/random remain. Improvements include faster > + /dev/urandom output and event input mixing. > + Note: Requires AES and SHA256 to be built-in. > + > config CRYPTO_HMAC > bool "HMAC support" Instead of mentioning AES and SHA256 being required built-in, why not just "select" them? MfG, JBG -- Jan-Benedict Glaw jbglaw@lug-owl.de . +49-172-7608481 _ O _ "Eine Freie Meinung in einem Freien Kopf | Gegen Zensur | Gegen Krieg _ _ O fuer einen Freien Staat voll Freier Bürger" | im Internet! | im Irak! O O O ret = do_actions((curr | FREE_SPEECH) & ~(NEW_COPYRIGHT_LAW | DRM | TCPA));
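[Editor's note: concretely, JBG's suggestion would replace the "Note: Requires ... built-in" sentence with dependencies the build system enforces itself. A sketch of the revised Kconfig entry, assuming the AES and SHA256 symbols are named CRYPTO_AES and CRYPTO_SHA256 as in 2.6-era crypto/Kconfig:]

```
config CRYPTO_RANDOM_FORTUNA
	bool "The Fortuna RNG"
	select CRYPTO_AES
	select CRYPTO_SHA256
	help
	  Replaces the legacy Linux RNG with one using the crypto API
	  and Fortuna by Ferguson and Schneier. Entropy estimation, and
	  a throttled /dev/random remain. Improvements include faster
	  /dev/urandom output and event input mixing.
```

[Because the option is bool, `select` would also force both ciphers to be built in rather than modular, which is exactly the constraint the help text was stating in prose.]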
end of thread, other threads:[~2004-10-01 13:07 UTC | newest] Thread overview: 28+ messages -- 2004-09-24 0:59 [PROPOSAL/PATCH] Fortuna PRNG in /dev/random linux 2004-09-24 2:34 ` Jean-Luc Cooke 2004-09-24 6:19 ` linux 2004-09-24 21:42 ` linux 2004-09-25 14:54 ` Jean-Luc Cooke 2004-09-25 18:43 ` Theodore Ts'o 2004-09-26 1:42 ` Jean-Luc Cooke 2004-09-26 5:23 ` Theodore Ts'o 2004-09-27 0:50 ` linux 2004-09-27 13:07 ` Jean-Luc Cooke 2004-09-27 14:23 ` Theodore Ts'o 2004-09-27 14:42 ` Jean-Luc Cooke 2004-09-26 6:46 ` linux 2004-09-26 16:32 ` Jean-Luc Cooke 2004-09-26 2:31 ` linux 2004-09-29 17:10 ` [PROPOSAL/PATCH 2] " Jean-Luc Cooke 2004-09-29 19:31 ` Theodore Ts'o 2004-09-29 20:27 ` Jean-Luc Cooke 2004-09-29 21:40 ` Theodore Ts'o 2004-09-29 21:53 ` Theodore Ts'o 2004-09-29 23:24 ` Jean-Luc Cooke 2004-09-30 0:21 ` Jean-Luc Cooke 2004-09-30 4:23 ` Jean-Luc Cooke 2004-09-30 6:50 ` James Morris 2004-09-30 9:03 ` Felipe Alfaro Solana 2004-09-30 13:36 ` Jean-Luc Cooke 2004-10-01 12:56 ` Jean-Luc Cooke 2004-09-30 10:46 ` Jan-Benedict Glaw