public inbox for linux-kernel@vger.kernel.org
 help / color / mirror / Atom feed
* [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
@ 2004-09-23 23:43 Jean-Luc Cooke
  2004-09-24  4:38 ` Theodore Ts'o
  2004-09-27  4:58 ` Theodore Ts'o
  0 siblings, 2 replies; 35+ messages in thread
From: Jean-Luc Cooke @ 2004-09-23 23:43 UTC (permalink / raw)
  To: linux-kernel

[-- Attachment #1: Type: text/plain, Size: 3726 bytes --]

here we go ...

Team,

Here is a patch for the 2.6.8.1 Linux kernel which replaces the existing PRNG
in random.c with the Fortuna PRNG designed by Ferguson and Schneier (Practical
Cryptography).  It is regarded in crypto circles as the current state-of-the-art
in cryptographically secure PRNGs.

Warning: Ted Ts'o and I talked about this at great length in sci.crypt and
in the end I failed on convince him that my patch was worth becoming main-line,
and he failed to convince me that status-quo is acceptable considering a better
solution exists.

I've made a page to capture my reasons and findings for this patch.
  http://jlcooke.ca/random/
Please review this link.  As a minimum the first table comparing status-quo
and Fortuna.

Changes in this patch (2 files):
 include/linux/sysctl.h
  + added a RANDOM_DERIVE_SEED enum for the new
    /proc/sys/kernel/random/derive_seed interface.

 drivers/char/random.c
  + Kept all the event collection mechanisms, and interfaces.
  + Removed MD5-noPadding, SHA1-noPadding-endianIncorrect, halfMD4-noPadding (!?)
    and twoThirdsMD4-noPadding (!?)
   - Now uses Fortuna and the CryptoAPI (SHA-256, AES256)
  + Removed the one-of-a-kind (to my knowledge anyway) linear input mixing
    function
   - Now uses Fortuna and the CryptoAPI (SHA-256, AES256)
  + Removed the SHA1-feedback RNG output function
   - Input/Output now uses the Fortuna PRNG
  + Removed the "HASH+HASH++++" system of SynCookies
   - SynCookies now use block cipher CBC encryption
  + Removed the "HASH+++" system of TCPv4/v6 sequence number generation
   - Now uses a block cipher CTR system to generate 32bit random value
  + Removed the "HASH" system of IPv4/v6 ID number generation
   - Now uses a block cipher CTR system to generate 32bit random value
  + Removed entropy estimation
   - Fortuna doesn't need it, vanilla-/dev/random and other Yarrow-like
     PRNGs do to survive state compromise and other attacks.
   - /dev/random is synonymous with /dev/urandom
   - /proc/sys/kernel/random/entropy_avail is always the same as
     /proc/sys/kernel/random/pool_size so ssh, dm-crypt and other apps who
     block waiting for entropy don't seize up all together.
  + Added /proc/sys/kernel/random/derive_pool to save the pooling system's
    state.  This is a good thing because Fortuna will avoid using the entire
    pooling system for output (this is a strength).

I expect much discussion on this.  So let me lay out some facts:
 - I have not broken /dev/random.  I wrote this patch because I think we can
   do better and Fortuna is the best there is right now.
 - Current /dev/random is difficult to analyze because it doesn't use standards
   compliant cryptographic primitives.
 - Current /dev/random is slower then my proposed patch (5x on output, 2x on input)
 - Current /dev/random's input mixing function is a linear function.  This is bad in crypto-circles.
   Why?  Linear functions are communitive, associative and sometimes distributive.
   Outputs from Linear function based PRNGs are very weak.
   + Note: Currently, output from /dev/random is feed-back into the input mixing
           function making linear attacks of the PRNG more complex.  But I fear
           the combination of linear input mixing & knowing the feedback input
           is a bad combination.  Fortuna eliminates this and other theoretical
           attacks. Read:
   http://groups.google.com/groups?lr=&ie=UTF-8&th=2d80024f677ccadc&seekm=BemdnYeJjt2qMc3cRVn-jw%40comcast.com
 - If need-be, I am prepared to take over maintainership of drivers/char/random.c
  + I don't want to push such a big change into Ted's lap, I am capable of taking
    over for him.

I look forward to hearing from all of you.

JLC

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: fortuna-2.6.8.1.patch --]
[-- Type: text/plain; charset=unknown-8bit, Size: 80118 bytes --]

diff -uNr linux-2.6.8.1-orig/include/linux/sysctl.h linux-2.6.8.1-fortuna/include/linux/sysctl.h
--- linux-2.6.8.1-orig/include/linux/sysctl.h	2004-08-14 12:55:33.000000000 +0200
+++ linux-2.6.8.1-fortuna/include/linux/sysctl.h	2004-09-13 18:55:43.000000000 +0200
@@ -198,7 +198,8 @@
 	RANDOM_READ_THRESH=3,
 	RANDOM_WRITE_THRESH=4,
 	RANDOM_BOOT_ID=5,
-	RANDOM_UUID=6
+	RANDOM_UUID=6,
+	RANDOM_DERIVE_SEED=7
 };
 
 /* /proc/sys/kernel/pty */
--- linux-2.6.8.1/drivers/char/random.c	2004-08-14 06:54:48.000000000 -0400
+++ linux-2.6.8.1-rand2/drivers/char/random.c	2004-09-23 16:21:08.345499160 -0400
@@ -2,9 +2,11 @@
  * random.c -- A strong random number generator
  *
  * Version 1.89, last modified 19-Sep-99
+ * Version 2.01, last modified 23-Sep-2004
  * 
  * Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999.  All
  * rights reserved.
+ * Copyright Jean-Luc Cooke, 2004.  All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -40,6 +42,180 @@
  */
 
 /*
+ * Ammendum to Ts'o's comments by Jean-Luc Cooke Aug 2004
+ *
+ * The entire PRNG used in this file was replaced using a variant of the Fortuna
+ * PRNG described in Practical Cryptography by Ferguson and Schnier.
+ *
+ * The changes to their design include:
+ *  - feeding the output of each pool back into their input to carry entropy
+ *    forward (avoids pool overflow attacks like "dd if=/dev/zero of=/dev/random"
+ *
+ * Also, the entropy estimator was removed since it is not needed for cryptographically
+ * secure random data and such constructions are historically prone to attack
+ * [read Practical Cryptography].
+ *
+ * The new "start" and "stop" scripts are as follows:
+ *
+ *      echo "Initializing random number generator..."
+ *      random_seed=/var/run/random-seed
+ *      # Carry a random seed from start-up to start-up
+ *      # Load and then save the whole entropy pool
+ *      if [ -f $random_seed ]; then
+ *              cat $random_seed >/dev/urandom
+ *      else
+ *              touch $random_seed
+ *      fi
+ *      chmod 600 $random_seed
+ *      dd if=/proc/sys/kernel/random/derive_seed of=$random_seed
+ *
+ * and the following lines in an appropriate script which is run as
+ * the system is shutdown:
+ *
+ *      # Carry a random seed from shut-down to start-up
+ *      # Save the whole entropy pool
+ *      echo "Saving random seed..."
+ *      random_seed=/var/run/random-seed
+ *      touch $random_seed
+ *      chmod 600 $random_seed
+ *      dd if=/proc/sys/kernel/random/derive_seed of=$random_seed
+ *
+ * The Fortuna PRNG as described in Practical Cryptography is implemented here.
+ * 
+ * Pseudo-code follows.
+ *
+<b>create_entropy_pool(r)</b>
+ - create an entropy pool in "r"
+
+  r.pool0_len = 0;
+  r.reseed_count = 0;
+  r.derive_count = 0;
+  r.digestsize = // digest size for our hash
+  r.blocksize = // block size for our cipher
+  r.keysize = // key size for our cipher
+  for (i=0; i&lt;32; i++) {
+    crypto_digest_init(r.pool[i]);
+  }
+  memset(r.key, 0, r.keysize);
+  crypto_cipher_setkey(r.cipher, r.key, r.keysize);
+
+<b>add_entropy_words(r, in, nwords)</b>
+ - mix 32bit word array "in" which is "nwords" long into pool "r"
+
+  crypto_digest_update(r.pool[i], in, nwords*sizeof(in[0]));
+  if (r.pool_index == 0)
+    r.pool0_len += nwords*sizeof(in[0]);
+  r.pool_index = r.pool_index + 1  mod  (2<sup>number of pools</sup> - 1)
+  
+<b>random_reseed(r)</b>
+ - reseed the key from the pooling system
+
+  r.reseed_count++;
+  
+  crypto_digest_init(hash);
+  crypto_digest_update(hash, r.key, r.keysize);
+  
+  for (i=0; i&lt;32; i++) {
+    if (2<sup>i</sup> is a factor of r.reseed_count) {
+      crypto_digest_final(r.pool[i], tmp);
+      crypto_digest_init(r.poo[i]);
+      crypto_digest_update(hash, tmp, r.digestsize);
+  
+      // jlcooke: small change from Ferguson
+      crypto_digest_update(r.pool[i], tmp, r.digestsize);
+    }
+  }
+  
+  crypto_digest_final(hash, tmp);
+  crypto_cipher_setkey(r.cipher, tmp, r.keysize);
+  r.ctrValue = r.ctrValue + 1  mod  (2<sup>number of pools</sup> - 1)
+
+<b>extract_entropy(r, buf, nbytes, flags)</b>
+ - fill byte array "buf" with "nbytes" of random data from entropy pool "r"
+
+  random_reseed(r);
+  r.pool0_len = 0;
+  
+  while (nbytes &gt; 0) {
+    crypto_cipher_encrypt(r.cipher, tmp, r.ctrValue, r.blocksize);
+    r.ctrValue++; // modulo 2<sup>r.blocksize/8</sup> 
+  
+    //
+    // Copy r.blocksize of tmp to the user
+    // Unless nbytes is less than r.blocksize, in which case only copy nbytes
+    //  
+  
+    nbytes -= r.blocksize;
+  }
+  
+  // generate a new key
+  crypto_cipher_encrypt(r.cipher, r.key, r.ctrValue, r.blocksize);
+  crypto_cipher_setkey(r.cipher, r.key, r.keysize);
+  
+<b>derive_pool(r, buf)</b>
+ - Fill "buf" with the output from a 1-way transformation of all 32-pools
+
+  memset(tmp, 0, r.digestsize);
+  r.pool0_len = 0;
+  
+  for (i=0; i&lt;32; i++) {
+    crypto_digest_init(hash);
+  
+    crypto_digest_update(hash, tmp, r.digestsize);
+  
+    crypto_digest_final(r.pool[i], tmp);
+    crypto_digest_init(r.pool[i]);
+    crypto_digest_update(hash, tmp, r.digestsize);
+  
+    crypto_digest_update(hash, r.derive_count, sizeof(r.derive_count));
+  
+    crypto_digest_final(hash, tmp);
+  
+    // Replace all 0x00 in "tmp" with "0x01" because the API to return a byte
+    //  array does not exist.  Only a "return string" API is provided.  This
+    //  reduces the effective entropy of the output by 0.39%.
+    // 
+  
+    memcpy(&amp;buf[i*r.digestsize], tmp, r.digestsize);
+    r.derive_count++;
+  }
+ *
+ * Draft Security Statement/Analysis (Jean-Luc Cooke <jlcooke@certainkey.com>)
+ *
+ * The Fortuna PRNG is resilliant to all known and preventable PRNG attacks.
+ * Proof of strength to these attacks can be done by reduction to the security
+ * of the underlying cryptographic primitives.
+ *  * H = HASH(M)
+ *   + M={0,1}^Mlen  0 <= Mlen < infinity
+ *   + H={0,1}^Hlen  256 <= Hlen 
+ *  * C = ENCRYPT(K,M)
+ *   + K={0,1}^Klen  256 <= Klen
+ *   + M={0,1}^Mlen  Mlen = 128
+ *   + C={0,1}^Clen  Clen = 128
+ *
+ *  - Invertability of the output function
+ *    The state of the output function Output[i] = ENCRYPT(KEY, CTR++) is {KEY,CTR}
+ *    To recover the state {KEY,CTR} the attacker must be able to mount a known-plaintext
+ *    or a known-ciphertext attack on the block cipher C=ENCRYPT(K,M) with N blocks.
+ *    N = ReseedIntervalInSeconds * OutputRateInBytesPerSecond / BytesPerBlock
+ *    AES256 in CTR is secure from known-plaintext/ciphertext key recovery attacks with
+ *    N < 2^128
+ *    However, After 2^64 blocks (2^71 bits) an attacker would have a 0.5 chance to guessing
+ *    the next 128bit output.  N <<< 2^64
+ *
+ *  - Invertability of the pool mixing function
+ *    The pool mixing function H' = HASH(H' || M) is said to be non-inveretable
+ *    if H=HASH(MSG) is invertable.
+ *    There have been no invertability discoveries in SHA-256.
+ * 
+ *  - Manipulating pool mixing
+ *    An attack who has access to one or all of the entropy event sources may be able to
+ *    input mallicious event data to alter any one of the pool states into a degenerate
+ *    state.  This requires that the underlying H=HASH(MSG) function is suseptible to
+ *    a 2nd pre-image attack.  SHA-256 has no such known attacks.
+ */
+
+/*
  * (now, with legal B.S. out of the way.....) 
  * 
  * This routine gathers environmental noise from device drivers, etc.,
@@ -254,94 +430,43 @@
 #include <linux/interrupt.h>
 #include <linux/spinlock.h>
 #include <linux/percpu.h>
+#include <linux/crypto.h>
+#include <../crypto/internal.h>
 
+#include <asm/scatterlist.h>
 #include <asm/processor.h>
 #include <asm/uaccess.h>
 #include <asm/irq.h>
 #include <asm/io.h>
 
-/*
- * Configuration information
- */
-#define DEFAULT_POOL_SIZE 512
-#define SECONDARY_POOL_SIZE 128
-#define BATCH_ENTROPY_SIZE 256
-#define USE_SHA
-
-/*
- * The minimum number of bits of entropy before we wake up a read on
- * /dev/random.  Should be enough to do a significant reseed.
- */
-static int random_read_wakeup_thresh = 64;
+#if 0
+	#define DEBUG_PRINTK  printk
+#else
+	#define DEBUG_PRINTK  debug_printk
+static inline void debug_printk(const char *a, ...) {}
+#endif
 
 /*
- * If the entropy count falls under this number of bits, then we
- * should wake up processes which are selecting or polling on write
- * access to /dev/random.
+ * Configuration information
  */
-static int random_write_wakeup_thresh = 128;
+#define BATCH_ENTROPY_SIZE 512 /* how many events do we buffer?  BATCH_ENTROPY_SIZE/2 == how many we need before batch-submitting them */
+#define RANDOM_RESEED_INTERVAL 600 /* reseed the PRNG output state every 5mins */
+#define RANDOM_DEFAULT_CIPHER_ALGO "aes"
+#define RANDOM_DEFAULT_DIGEST_ALGO "sha256"
+
+#define DEFAULT_POOL_NUMBER 5 /* 2^{5} = 32 pools */
+#define MAXIMUM_POOL_NUMBER DEFAULT_POOL_NUMBER
+#define MINIMUM_POOL_NUMBER 2 /* 2^{2} = 4 pools */
+#define USE_SHA256
+#define RANDOM_MAX_DIGEST_SIZE 64 /* SHA512/WHIRLPOOL have 64bytes == 512 bits */
+#define RANDOM_MAX_BLOCK_SIZE  16 /* AES256 has 16byte blocks == 128 bits */
+#define RANDOM_MAX_KEY_SIZE    32 /* AES256 has 32byte keys == 256 bits */
+#define USE_AES256
 
 /*
- * When the input pool goes over trickle_thresh, start dropping most
- * samples to avoid wasting CPU time and reduce lock contention.
+ * Throttle mouse/keyboard/disk/interrupt entropy input to only add after this many jiffies/rdtsc counts
  */
-
-static int trickle_thresh = DEFAULT_POOL_SIZE * 7;
-
-static DEFINE_PER_CPU(int, trickle_count) = 0;
-
-/*
- * A pool of size .poolwords is stirred with a primitive polynomial
- * of degree .poolwords over GF(2).  The taps for various sizes are
- * defined below.  They are chosen to be evenly spaced (minimum RMS
- * distance from evenly spaced; the numbers in the comments are a
- * scaled squared error sum) except for the last tap, which is 1 to
- * get the twisting happening as fast as possible.
- */
-static struct poolinfo {
-	int	poolwords;
-	int	tap1, tap2, tap3, tap4, tap5;
-} poolinfo_table[] = {
-	/* x^2048 + x^1638 + x^1231 + x^819 + x^411 + x + 1  -- 115 */
-	{ 2048,	1638,	1231,	819,	411,	1 },
-
-	/* x^1024 + x^817 + x^615 + x^412 + x^204 + x + 1 -- 290 */
-	{ 1024,	817,	615,	412,	204,	1 },
-#if 0				/* Alternate polynomial */
-	/* x^1024 + x^819 + x^616 + x^410 + x^207 + x^2 + 1 -- 115 */
-	{ 1024,	819,	616,	410,	207,	2 },
-#endif
-
-	/* x^512 + x^411 + x^308 + x^208 + x^104 + x + 1 -- 225 */
-	{ 512,	411,	308,	208,	104,	1 },
-#if 0				/* Alternates */
-	/* x^512 + x^409 + x^307 + x^206 + x^102 + x^2 + 1 -- 95 */
-	{ 512,	409,	307,	206,	102,	2 },
-	/* x^512 + x^409 + x^309 + x^205 + x^103 + x^2 + 1 -- 95 */
-	{ 512,	409,	309,	205,	103,	2 },
-#endif
-
-	/* x^256 + x^205 + x^155 + x^101 + x^52 + x + 1 -- 125 */
-	{ 256,	205,	155,	101,	52,	1 },
-
-	/* x^128 + x^103 + x^76 + x^51 +x^25 + x + 1 -- 105 */
-	{ 128,	103,	76,	51,	25,	1 },
-#if 0	/* Alternate polynomial */
-	/* x^128 + x^103 + x^78 + x^51 + x^27 + x^2 + 1 -- 70 */
-	{ 128,	103,	78,	51,	27,	2 },
-#endif
-
-	/* x^64 + x^52 + x^39 + x^26 + x^14 + x + 1 -- 15 */
-	{ 64,	52,	39,	26,	14,	1 },
-
-	/* x^32 + x^26 + x^20 + x^14 + x^7 + x + 1 -- 15 */
-	{ 32,	26,	20,	14,	7,	1 },
-
-	{ 0,	0,	0,	0,	0,	0 },
-};
-
-#define POOLBITS	poolwords*32
-#define POOLBYTES	poolwords*4
+#define RANDOM_INPUT_THROTTLE  1000
 
 /*
  * For the purposes of better mixing, we use the CRC-32 polynomial as
@@ -399,8 +524,10 @@
 /*
  * Static global variables
  */
+static int random_entropy_count; // jlc & cam have been together for 5 and 2/3 years as of the time this was written;
+static int random_read_wakeup_thresh = 0; // ignored now.
+static int random_write_wakeup_thresh = 0; // ignored now.
 static struct entropy_store *random_state; /* The default global store */
-static struct entropy_store *sec_random_state; /* secondary store */
 static DECLARE_WAIT_QUEUE_HEAD(random_read_wait);
 static DECLARE_WAIT_QUEUE_HEAD(random_write_wait);
 
@@ -419,27 +546,6 @@
  *****************************************************************/
 
 /*
- * Unfortunately, while the GCC optimizer for the i386 understands how
- * to optimize a static rotate left of x bits, it doesn't know how to
- * deal with a variable rotate of x bits.  So we use a bit of asm magic.
- */
-#if (!defined (__i386__))
-static inline __u32 rotate_left(int i, __u32 word)
-{
-	return (word << i) | (word >> (32 - i));
-	
-}
-#else
-static inline __u32 rotate_left(int i, __u32 word)
-{
-	__asm__("roll %%cl,%0"
-		:"=r" (word)
-		:"0" (word),"c" (i));
-	return word;
-}
-#endif
-
-/*
  * More asm magic....
  * 
  * For entropy estimation, we need to do an integral base 2
@@ -490,15 +596,28 @@
  **********************************************************************/
 
 struct entropy_store {
-	/* mostly-read data: */
-	struct poolinfo poolinfo;
-	__u32		*pool;
+	const char *digestAlgo;
+	unsigned int  digestsize;
+	struct crypto_tfm *pools[1<<MAXIMUM_POOL_NUMBER];
+	/* optional, handy for statistics */
+	unsigned int pools_bytes[1<<MAXIMUM_POOL_NUMBER];
+
+	const char *cipherAlgo;
+	unsigned char key[RANDOM_MAX_DIGEST_SIZE];     /* the key */
+	unsigned int  keysize;
+	unsigned char iv[16];      /* the CTR value */
+	unsigned int  blocksize;
+	struct crypto_tfm *cipher;
+
+	unsigned int  pool_number; /* 2^pool_number # of pools */
+	unsigned int  pool_index;  /* current pool to add into */
+	unsigned int  pool0_len;   /* size of the first pool */
+	unsigned int  reseed_count; /* number of time we have reset */
+	struct crypto_tfm *reseedHash; /* digest used during random_reseed() */
+	struct crypto_tfm *networkCipher; /* cipher used for network randomness */
+	char networkCipher_ready;         /* flag indicating if networkCipher has been seeded */
 
-	/* read-write data: */
 	spinlock_t lock ____cacheline_aligned_in_smp;
-	unsigned	add_ptr;
-	int		entropy_count;
-	int		input_rotate;
 };
 
 /*
@@ -507,61 +626,75 @@
  *
  * Returns an negative error if there is a problem.
  */
-static int create_entropy_store(int size, struct entropy_store **ret_bucket)
+static int create_entropy_store(int pool_number_arg, struct entropy_store **ret_bucket)
 {
 	struct	entropy_store	*r;
-	struct	poolinfo	*p;
-	int	poolwords;
-
-	poolwords = (size + 3) / 4; /* Convert bytes->words */
-	/* The pool size must be a multiple of 16 32-bit words */
-	poolwords = ((poolwords + 15) / 16) * 16;
+	unsigned long pool_number;
+	int 	keysize, i, j;
 
-	for (p = poolinfo_table; p->poolwords; p++) {
-		if (poolwords == p->poolwords)
-			break;
-	}
-	if (p->poolwords == 0)
-		return -EINVAL;
+	pool_number = pool_number_arg;
+	if (pool_number < MINIMUM_POOL_NUMBER)
+		pool_number = MINIMUM_POOL_NUMBER;
 
 	r = kmalloc(sizeof(struct entropy_store), GFP_KERNEL);
-	if (!r)
+	if (!r) {
 		return -ENOMEM;
+	}
 
 	memset (r, 0, sizeof(struct entropy_store));
-	r->poolinfo = *p;
+	r->pool_number = pool_number;
+	r->digestAlgo = RANDOM_DEFAULT_DIGEST_ALGO;
 
-	r->pool = kmalloc(POOLBYTES, GFP_KERNEL);
-	if (!r->pool) {
-		kfree(r);
-		return -ENOMEM;
+DEBUG_PRINTK("create_entropy_store() pools=%u index=%u\n", 1<<pool_number, r->pool_index);
+	for (i=0; i<(1<<pool_number); i++) {
+DEBUG_PRINTK("create_entropy_store() i=%i index=%u\n", i, r->pool_index);
+		r->pools[i] = crypto_alloc_tfm(r->digestAlgo, 0);
+		if (r->pools[i] == NULL) {
+		  	for (j=0; j<i; j++) {
+				if (r->pools[j] != NULL) {
+					kfree(r->pools[j]);
+				}
+			}
+			kfree(r);
+			return -ENOMEM;
+		}
+		crypto_digest_init( r->pools[i] );
 	}
-	memset(r->pool, 0, POOLBYTES);
 	r->lock = SPIN_LOCK_UNLOCKED;
 	*ret_bucket = r;
+
+	r->cipherAlgo = RANDOM_DEFAULT_CIPHER_ALGO;
+	if ((r->cipher=crypto_alloc_tfm(r->cipherAlgo, 0)) == NULL) {
+	  	return -ENOMEM;
+	}
+
+	/* If the HASH's output is greater then the cipher's keysize, truncate to the
+         * cipher's keysize */
+	keysize = crypto_tfm_alg_max_keysize(r->cipher);
+	r->digestsize = crypto_tfm_alg_digestsize(r->pools[0]);
+	r->blocksize = crypto_tfm_alg_blocksize(r->cipher);
+
+	r->keysize = (keysize < r->digestsize) ? keysize : r->digestsize;
+
+	if (crypto_cipher_setkey(r->cipher, r->key, r->keysize)) {
+		return -EINVAL;
+	}
+
+	/* digest used duing random-reseed() */
+	if ((r->reseedHash=crypto_alloc_tfm(r->digestAlgo, 0)) == NULL) {
+		return -ENOMEM;
+	}
+	/* cipher used for network randomness, init to key={zerovector} for now */
+	if ((r->networkCipher=crypto_alloc_tfm(r->cipherAlgo, 0)) == NULL) {
+		return -ENOMEM;
+	}
+
 	return 0;
 }
 
-/* Clear the entropy pool and associated counters. */
-static void clear_entropy_store(struct entropy_store *r)
-{
-	r->add_ptr = 0;
-	r->entropy_count = 0;
-	r->input_rotate = 0;
-	memset(r->pool, 0, r->poolinfo.POOLBYTES);
-}
-#ifdef CONFIG_SYSCTL
-static void free_entropy_store(struct entropy_store *r)
-{
-	if (r->pool)
-		kfree(r->pool);
-	kfree(r);
-}
-#endif
 /*
  * This function adds a byte into the entropy "pool".  It does not
- * update the entropy estimate.  The caller should call
- * credit_entropy_store if this is appropriate.
+ * update the entropy estimate.
  * 
  * The pool is stirred with a primitive polynomial of the appropriate
  * degree, and then twisted.  We twist by three bits at a time because
@@ -571,87 +704,33 @@
 static void add_entropy_words(struct entropy_store *r, const __u32 *in,
 			      int nwords)
 {
-	static __u32 const twist_table[8] = {
-		         0, 0x3b6e20c8, 0x76dc4190, 0x4db26158,
-		0xedb88320, 0xd6d6a3e8, 0x9b64c2b0, 0xa00ae278 };
-	unsigned long i, add_ptr, tap1, tap2, tap3, tap4, tap5;
-	int new_rotate, input_rotate;
-	int wordmask = r->poolinfo.poolwords - 1;
-	__u32 w, next_w;
 	unsigned long flags;
+	struct scatterlist sg[1];
+	static unsigned int totalBytes=0;
 
-	/* Taps are constant, so we can load them without holding r->lock.  */
-	tap1 = r->poolinfo.tap1;
-	tap2 = r->poolinfo.tap2;
-	tap3 = r->poolinfo.tap3;
-	tap4 = r->poolinfo.tap4;
-	tap5 = r->poolinfo.tap5;
-	next_w = *in++;
-
-	spin_lock_irqsave(&r->lock, flags);
-	prefetch_range(r->pool, wordmask);
-	input_rotate = r->input_rotate;
-	add_ptr = r->add_ptr;
-
-	while (nwords--) {
-		w = rotate_left(input_rotate, next_w);
-		if (nwords > 0)
-			next_w = *in++;
-		i = add_ptr = (add_ptr - 1) & wordmask;
-		/*
-		 * Normally, we add 7 bits of rotation to the pool.
-		 * At the beginning of the pool, add an extra 7 bits
-		 * rotation, so that successive passes spread the
-		 * input bits across the pool evenly.
-		 */
-		new_rotate = input_rotate + 14;
-		if (i)
-			new_rotate = input_rotate + 7;
-		input_rotate = new_rotate & 31;
-
-		/* XOR in the various taps */
-		w ^= r->pool[(i + tap1) & wordmask];
-		w ^= r->pool[(i + tap2) & wordmask];
-		w ^= r->pool[(i + tap3) & wordmask];
-		w ^= r->pool[(i + tap4) & wordmask];
-		w ^= r->pool[(i + tap5) & wordmask];
-		w ^= r->pool[i];
-		r->pool[i] = (w >> 3) ^ twist_table[w & 7];
+	if (r == NULL) {
+		return;
 	}
 
-	r->input_rotate = input_rotate;
-	r->add_ptr = add_ptr;
-
-	spin_unlock_irqrestore(&r->lock, flags);
-}
+	spin_lock_irqsave(&r->lock, flags);
 
-/*
- * Credit (or debit) the entropy store with n bits of entropy
- */
-static void credit_entropy_store(struct entropy_store *r, int nbits)
-{
-	unsigned long flags;
+	totalBytes += nwords * sizeof(__u32);
+	r->pools_bytes[r->pool_index] += nwords * sizeof(__u32);
 
-	spin_lock_irqsave(&r->lock, flags);
+	sg[0].page = virt_to_page(in);
+	sg[0].offset = offset_in_page(in);
+	sg[0].length = nwords*sizeof(__u32);
+	crypto_digest_update(r->pools[r->pool_index], sg, 1);
 
-	if (r->entropy_count + nbits < 0) {
-		DEBUG_ENT("negative entropy/overflow (%d+%d)\n",
-			  r->entropy_count, nbits);
-		r->entropy_count = 0;
-	} else if (r->entropy_count + nbits > r->poolinfo.POOLBITS) {
-		r->entropy_count = r->poolinfo.POOLBITS;
-	} else {
-		r->entropy_count += nbits;
-		if (nbits)
-			DEBUG_ENT("%04d %04d : added %d bits to %s\n",
-				  random_state->entropy_count,
-				  sec_random_state->entropy_count,
-				  nbits,
-				  r == sec_random_state ? "secondary" :
-				  r == random_state ? "primary" : "unknown");
+	if (r->pool_index == 0) {
+		r->pool0_len += nwords*sizeof(__u32);
 	}
 
+	/* idx = (idx + 1) mod ( (2^N)-1 ) */
+	r->pool_index = (r->pool_index + 1) & ((1<<r->pool_number)-1);
+
 	spin_unlock_irqrestore(&r->lock, flags);
+DEBUG_PRINTK("0 add_entropy_words() nwords=%u pool[i].bytes=%u total=%u\n", nwords, r->pools_bytes[r->pool_index], totalBytes);
 }
 
 /**********************************************************************
@@ -668,10 +747,10 @@
 };
 
 static struct sample *batch_entropy_pool, *batch_entropy_copy;
-static int	batch_head, batch_tail;
+static int      batch_head, batch_tail;
 static spinlock_t batch_lock = SPIN_LOCK_UNLOCKED;
 
-static int	batch_max;
+static int      batch_max;
 static void batch_entropy_process(void *private_);
 static DECLARE_WORK(batch_work, batch_entropy_process, NULL);
 
@@ -703,19 +782,20 @@
 	int new;
 	unsigned long flags;
 
-	if (!batch_max)
+	if (!batch_max) {
 		return;
+	}
 
 	spin_lock_irqsave(&batch_lock, flags);
 
 	batch_entropy_pool[batch_head].data[0] = a;
 	batch_entropy_pool[batch_head].data[1] = b;
-	batch_entropy_pool[batch_head].credit = num;
+	batch_entropy_pool[batch_head].credit = 0;
 
 	if (((batch_head - batch_tail) & (batch_max-1)) >= (batch_max / 2)) {
 		/*
-		 * Schedule it for the next timer tick:
-		 */
+		* Schedule it for the next timer tick:
+		*/
 		schedule_delayed_work(&batch_work, 1);
 	}
 
@@ -738,8 +818,7 @@
  */
 static void batch_entropy_process(void *private_)
 {
-	struct entropy_store *r	= (struct entropy_store *) private_, *p;
-	int max_entropy = r->poolinfo.POOLBITS;
+	struct entropy_store *r = (struct entropy_store *) private_;
 	unsigned head, tail;
 
 	/* Mixing into the pool is expensive, so copy over the batch
@@ -750,7 +829,7 @@
 	spin_lock_irq(&batch_lock);
 
 	memcpy(batch_entropy_copy, batch_entropy_pool,
-	       batch_max*sizeof(struct sample));
+	batch_max*sizeof(struct sample));
 
 	head = batch_head;
 	tail = batch_tail;
@@ -758,39 +837,19 @@
 
 	spin_unlock_irq(&batch_lock);
 
-	p = r;
 	while (head != tail) {
-		if (r->entropy_count >= max_entropy) {
-			r = (r == sec_random_state) ?	random_state :
-							sec_random_state;
-			max_entropy = r->poolinfo.POOLBITS;
-		}
 		add_entropy_words(r, batch_entropy_copy[tail].data, 2);
-		credit_entropy_store(r, batch_entropy_copy[tail].credit);
 		tail = (tail+1) & (batch_max-1);
 	}
-	if (p->entropy_count >= random_read_wakeup_thresh)
-		wake_up_interruptible(&random_read_wait);
 }
 
+
 /*********************************************************************
  *
  * Entropy input management
  *
  *********************************************************************/
 
-/* There is one of these per entropy source */
-struct timer_rand_state {
-	__u32		last_time;
-	__s32		last_delta,last_delta2;
-	int		dont_count_entropy:1;
-};
-
-static struct timer_rand_state keyboard_timer_state;
-static struct timer_rand_state mouse_timer_state;
-static struct timer_rand_state extract_timer_state;
-static struct timer_rand_state *irq_timer_state[NR_IRQS];
-
 /*
  * This function adds entropy to the entropy "pool" by using timing
  * delays.  It uses the timer_rand_state structure to make an estimate
@@ -803,16 +862,10 @@
  * are used for a high-resolution timer.
  *
  */
-static void add_timer_randomness(struct timer_rand_state *state, unsigned num)
+static void add_timer_randomness(unsigned num)
 {
-	__u32		time;
-	__s32		delta, delta2, delta3;
-	int		entropy = 0;
-
-	/* if over the trickle threshold, use only 1 in 4096 samples */
-	if ( random_state->entropy_count > trickle_thresh &&
-	     (__get_cpu_var(trickle_count)++ & 0xfff))
-		return;
+	static __u32	lasttime=0;
+	__u32	time;
 
 #if defined (__i386__) || defined (__x86_64__)
 	if (cpu_has_tsc) {
@@ -822,480 +875,57 @@
 	} else {
 		time = jiffies;
 	}
-#elif defined (__sparc_v9__)
-	unsigned long tick = tick_ops->get_tick();
-
-	time = (unsigned int) tick;
-	num ^= (tick >> 32UL);
 #else
 	time = jiffies;
 #endif
 
-	/*
-	 * Calculate number of bits of randomness we probably added.
-	 * We take into account the first, second and third-order deltas
-	 * in order to make our estimate.
-	 */
-	if (!state->dont_count_entropy) {
-		delta = time - state->last_time;
-		state->last_time = time;
-
-		delta2 = delta - state->last_delta;
-		state->last_delta = delta;
-
-		delta3 = delta2 - state->last_delta2;
-		state->last_delta2 = delta2;
-
-		if (delta < 0)
-			delta = -delta;
-		if (delta2 < 0)
-			delta2 = -delta2;
-		if (delta3 < 0)
-			delta3 = -delta3;
-		if (delta > delta2)
-			delta = delta2;
-		if (delta > delta3)
-			delta = delta3;
-
-		/*
-		 * delta is now minimum absolute delta.
-		 * Round down by 1 bit on general principles,
-		 * and limit entropy entimate to 12 bits.
-		 */
-		delta >>= 1;
-		delta &= (1 << 12) - 1;
-
-		entropy = int_ln_12bits(delta);
+	/* jlcooke: ToDo: Throttle here? */
+	/* Throttle our input to add_entropy_words() */
+	if ((time-lasttime) < RANDOM_INPUT_THROTTLE) {
+		return;
 	}
-	batch_entropy_store(num, time, entropy);
+	lasttime = time;
+
+	batch_entropy_store(num, time, 0);
 }
 
 void add_keyboard_randomness(unsigned char scancode)
 {
-	static unsigned char last_scancode;
-	/* ignore autorepeat (multiple key down w/o key up) */
-	if (scancode != last_scancode) {
-		last_scancode = scancode;
-		add_timer_randomness(&keyboard_timer_state, scancode);
-	}
+	/* jlcooke: we don't care about auto-repeats, they can't hurt us  */
+	add_timer_randomness(scancode);
 }
 
 EXPORT_SYMBOL(add_keyboard_randomness);
 
 void add_mouse_randomness(__u32 mouse_data)
 {
-	add_timer_randomness(&mouse_timer_state, mouse_data);
+	add_timer_randomness(mouse_data);
 }
 
 EXPORT_SYMBOL(add_mouse_randomness);
 
 void add_interrupt_randomness(int irq)
 {
-	if (irq >= NR_IRQS || irq_timer_state[irq] == 0)
+	if (irq >= NR_IRQS)
 		return;
 
-	add_timer_randomness(irq_timer_state[irq], 0x100+irq);
+	/* jlcooke: no need to add 0x100 ... not random! :P */
+	add_timer_randomness(irq);
 }
 
 EXPORT_SYMBOL(add_interrupt_randomness);
 
 void add_disk_randomness(struct gendisk *disk)
 {
-	if (!disk || !disk->random)
+	if (!disk)
 		return;
 	/* first major is 1, so we get >= 0x200 here */
-	add_timer_randomness(disk->random, 0x100+MKDEV(disk->major, disk->first_minor));
+	/* jlcooke: adding 0x100 is useless */
+	add_timer_randomness(MKDEV(disk->major, disk->first_minor));
 }
 
 EXPORT_SYMBOL(add_disk_randomness);
 
-/******************************************************************
- *
- * Hash function definition
- *
- *******************************************************************/
-
-/*
- * This chunk of code defines a function
- * void HASH_TRANSFORM(__u32 digest[HASH_BUFFER_SIZE + HASH_EXTRA_SIZE],
- * 		__u32 const data[16])
- * 
- * The function hashes the input data to produce a digest in the first
- * HASH_BUFFER_SIZE words of the digest[] array, and uses HASH_EXTRA_SIZE
- * more words for internal purposes.  (This buffer is exported so the
- * caller can wipe it once rather than this code doing it each call,
- * and tacking it onto the end of the digest[] array is the quick and
- * dirty way of doing it.)
- *
- * It so happens that MD5 and SHA share most of the initial vector
- * used to initialize the digest[] array before the first call:
- * 1) 0x67452301
- * 2) 0xefcdab89
- * 3) 0x98badcfe
- * 4) 0x10325476
- * 5) 0xc3d2e1f0 (SHA only)
- * 
- * For /dev/random purposes, the length of the data being hashed is
- * fixed in length, so appending a bit count in the usual way is not
- * cryptographically necessary.
- */
-
-#ifdef USE_SHA
-
-#define HASH_BUFFER_SIZE 5
-#define HASH_EXTRA_SIZE 80
-#define HASH_TRANSFORM SHATransform
-
-/* Various size/speed tradeoffs are available.  Choose 0..3. */
-#define SHA_CODE_SIZE 0
-
-/*
- * SHA transform algorithm, taken from code written by Peter Gutmann,
- * and placed in the public domain.
- */
-
-/* The SHA f()-functions.  */
-
-#define f1(x,y,z)   ( z ^ (x & (y^z)) )		/* Rounds  0-19: x ? y : z */
-#define f2(x,y,z)   (x ^ y ^ z)			/* Rounds 20-39: XOR */
-#define f3(x,y,z)   ( (x & y) + (z & (x ^ y)) )	/* Rounds 40-59: majority */
-#define f4(x,y,z)   (x ^ y ^ z)			/* Rounds 60-79: XOR */
-
-/* The SHA Mysterious Constants */
-
-#define K1  0x5A827999L			/* Rounds  0-19: sqrt(2) * 2^30 */
-#define K2  0x6ED9EBA1L			/* Rounds 20-39: sqrt(3) * 2^30 */
-#define K3  0x8F1BBCDCL			/* Rounds 40-59: sqrt(5) * 2^30 */
-#define K4  0xCA62C1D6L			/* Rounds 60-79: sqrt(10) * 2^30 */
-
-#define ROTL(n,X)  ( ( ( X ) << n ) | ( ( X ) >> ( 32 - n ) ) )
-
-#define subRound(a, b, c, d, e, f, k, data) \
-    ( e += ROTL( 5, a ) + f( b, c, d ) + k + data, b = ROTL( 30, b ) )
-
-
-static void SHATransform(__u32 digest[85], __u32 const data[16])
-{
-    __u32 A, B, C, D, E;     /* Local vars */
-    __u32 TEMP;
-    int	i;
-#define W (digest + HASH_BUFFER_SIZE)	/* Expanded data array */
-
-    /*
-     * Do the preliminary expansion of 16 to 80 words.  Doing it
-     * out-of-line line this is faster than doing it in-line on
-     * register-starved machines like the x86, and not really any
-     * slower on real processors.
-     */
-    memcpy(W, data, 16*sizeof(__u32));
-    for (i = 0; i < 64; i++) {
-	    TEMP = W[i] ^ W[i+2] ^ W[i+8] ^ W[i+13];
-	    W[i+16] = ROTL(1, TEMP);
-    }
-
-    /* Set up first buffer and local data buffer */
-    A = digest[ 0 ];
-    B = digest[ 1 ];
-    C = digest[ 2 ];
-    D = digest[ 3 ];
-    E = digest[ 4 ];
-
-    /* Heavy mangling, in 4 sub-rounds of 20 iterations each. */
-#if SHA_CODE_SIZE == 0
-    /*
-     * Approximately 50% of the speed of the largest version, but
-     * takes up 1/16 the space.  Saves about 6k on an i386 kernel.
-     */
-    for (i = 0; i < 80; i++) {
-	if (i < 40) {
-	    if (i < 20)
-		TEMP = f1(B, C, D) + K1;
-	    else
-		TEMP = f2(B, C, D) + K2;
-	} else {
-	    if (i < 60)
-		TEMP = f3(B, C, D) + K3;
-	    else
-		TEMP = f4(B, C, D) + K4;
-	}
-	TEMP += ROTL(5, A) + E + W[i];
-	E = D; D = C; C = ROTL(30, B); B = A; A = TEMP;
-    }
-#elif SHA_CODE_SIZE == 1
-    for (i = 0; i < 20; i++) {
-	TEMP = f1(B, C, D) + K1 + ROTL(5, A) + E + W[i];
-	E = D; D = C; C = ROTL(30, B); B = A; A = TEMP;
-    }
-    for (; i < 40; i++) {
-	TEMP = f2(B, C, D) + K2 + ROTL(5, A) + E + W[i];
-	E = D; D = C; C = ROTL(30, B); B = A; A = TEMP;
-    }
-    for (; i < 60; i++) {
-	TEMP = f3(B, C, D) + K3 + ROTL(5, A) + E + W[i];
-	E = D; D = C; C = ROTL(30, B); B = A; A = TEMP;
-    }
-    for (; i < 80; i++) {
-	TEMP = f4(B, C, D) + K4 + ROTL(5, A) + E + W[i];
-	E = D; D = C; C = ROTL(30, B); B = A; A = TEMP;
-    }
-#elif SHA_CODE_SIZE == 2
-    for (i = 0; i < 20; i += 5) {
-	subRound( A, B, C, D, E, f1, K1, W[ i   ] );
-	subRound( E, A, B, C, D, f1, K1, W[ i+1 ] );
-	subRound( D, E, A, B, C, f1, K1, W[ i+2 ] );
-	subRound( C, D, E, A, B, f1, K1, W[ i+3 ] );
-	subRound( B, C, D, E, A, f1, K1, W[ i+4 ] );
-    }
-    for (; i < 40; i += 5) {
-	subRound( A, B, C, D, E, f2, K2, W[ i   ] );
-	subRound( E, A, B, C, D, f2, K2, W[ i+1 ] );
-	subRound( D, E, A, B, C, f2, K2, W[ i+2 ] );
-	subRound( C, D, E, A, B, f2, K2, W[ i+3 ] );
-	subRound( B, C, D, E, A, f2, K2, W[ i+4 ] );
-    }
-    for (; i < 60; i += 5) {
-	subRound( A, B, C, D, E, f3, K3, W[ i   ] );
-	subRound( E, A, B, C, D, f3, K3, W[ i+1 ] );
-	subRound( D, E, A, B, C, f3, K3, W[ i+2 ] );
-	subRound( C, D, E, A, B, f3, K3, W[ i+3 ] );
-	subRound( B, C, D, E, A, f3, K3, W[ i+4 ] );
-    }
-    for (; i < 80; i += 5) {
-	subRound( A, B, C, D, E, f4, K4, W[ i   ] );
-	subRound( E, A, B, C, D, f4, K4, W[ i+1 ] );
-	subRound( D, E, A, B, C, f4, K4, W[ i+2 ] );
-	subRound( C, D, E, A, B, f4, K4, W[ i+3 ] );
-	subRound( B, C, D, E, A, f4, K4, W[ i+4 ] );
-    }
-#elif SHA_CODE_SIZE == 3 /* Really large version */
-    subRound( A, B, C, D, E, f1, K1, W[  0 ] );
-    subRound( E, A, B, C, D, f1, K1, W[  1 ] );
-    subRound( D, E, A, B, C, f1, K1, W[  2 ] );
-    subRound( C, D, E, A, B, f1, K1, W[  3 ] );
-    subRound( B, C, D, E, A, f1, K1, W[  4 ] );
-    subRound( A, B, C, D, E, f1, K1, W[  5 ] );
-    subRound( E, A, B, C, D, f1, K1, W[  6 ] );
-    subRound( D, E, A, B, C, f1, K1, W[  7 ] );
-    subRound( C, D, E, A, B, f1, K1, W[  8 ] );
-    subRound( B, C, D, E, A, f1, K1, W[  9 ] );
-    subRound( A, B, C, D, E, f1, K1, W[ 10 ] );
-    subRound( E, A, B, C, D, f1, K1, W[ 11 ] );
-    subRound( D, E, A, B, C, f1, K1, W[ 12 ] );
-    subRound( C, D, E, A, B, f1, K1, W[ 13 ] );
-    subRound( B, C, D, E, A, f1, K1, W[ 14 ] );
-    subRound( A, B, C, D, E, f1, K1, W[ 15 ] );
-    subRound( E, A, B, C, D, f1, K1, W[ 16 ] );
-    subRound( D, E, A, B, C, f1, K1, W[ 17 ] );
-    subRound( C, D, E, A, B, f1, K1, W[ 18 ] );
-    subRound( B, C, D, E, A, f1, K1, W[ 19 ] );
-
-    subRound( A, B, C, D, E, f2, K2, W[ 20 ] );
-    subRound( E, A, B, C, D, f2, K2, W[ 21 ] );
-    subRound( D, E, A, B, C, f2, K2, W[ 22 ] );
-    subRound( C, D, E, A, B, f2, K2, W[ 23 ] );
-    subRound( B, C, D, E, A, f2, K2, W[ 24 ] );
-    subRound( A, B, C, D, E, f2, K2, W[ 25 ] );
-    subRound( E, A, B, C, D, f2, K2, W[ 26 ] );
-    subRound( D, E, A, B, C, f2, K2, W[ 27 ] );
-    subRound( C, D, E, A, B, f2, K2, W[ 28 ] );
-    subRound( B, C, D, E, A, f2, K2, W[ 29 ] );
-    subRound( A, B, C, D, E, f2, K2, W[ 30 ] );
-    subRound( E, A, B, C, D, f2, K2, W[ 31 ] );
-    subRound( D, E, A, B, C, f2, K2, W[ 32 ] );
-    subRound( C, D, E, A, B, f2, K2, W[ 33 ] );
-    subRound( B, C, D, E, A, f2, K2, W[ 34 ] );
-    subRound( A, B, C, D, E, f2, K2, W[ 35 ] );
-    subRound( E, A, B, C, D, f2, K2, W[ 36 ] );
-    subRound( D, E, A, B, C, f2, K2, W[ 37 ] );
-    subRound( C, D, E, A, B, f2, K2, W[ 38 ] );
-    subRound( B, C, D, E, A, f2, K2, W[ 39 ] );
-    
-    subRound( A, B, C, D, E, f3, K3, W[ 40 ] );
-    subRound( E, A, B, C, D, f3, K3, W[ 41 ] );
-    subRound( D, E, A, B, C, f3, K3, W[ 42 ] );
-    subRound( C, D, E, A, B, f3, K3, W[ 43 ] );
-    subRound( B, C, D, E, A, f3, K3, W[ 44 ] );
-    subRound( A, B, C, D, E, f3, K3, W[ 45 ] );
-    subRound( E, A, B, C, D, f3, K3, W[ 46 ] );
-    subRound( D, E, A, B, C, f3, K3, W[ 47 ] );
-    subRound( C, D, E, A, B, f3, K3, W[ 48 ] );
-    subRound( B, C, D, E, A, f3, K3, W[ 49 ] );
-    subRound( A, B, C, D, E, f3, K3, W[ 50 ] );
-    subRound( E, A, B, C, D, f3, K3, W[ 51 ] );
-    subRound( D, E, A, B, C, f3, K3, W[ 52 ] );
-    subRound( C, D, E, A, B, f3, K3, W[ 53 ] );
-    subRound( B, C, D, E, A, f3, K3, W[ 54 ] );
-    subRound( A, B, C, D, E, f3, K3, W[ 55 ] );
-    subRound( E, A, B, C, D, f3, K3, W[ 56 ] );
-    subRound( D, E, A, B, C, f3, K3, W[ 57 ] );
-    subRound( C, D, E, A, B, f3, K3, W[ 58 ] );
-    subRound( B, C, D, E, A, f3, K3, W[ 59 ] );
-
-    subRound( A, B, C, D, E, f4, K4, W[ 60 ] );
-    subRound( E, A, B, C, D, f4, K4, W[ 61 ] );
-    subRound( D, E, A, B, C, f4, K4, W[ 62 ] );
-    subRound( C, D, E, A, B, f4, K4, W[ 63 ] );
-    subRound( B, C, D, E, A, f4, K4, W[ 64 ] );
-    subRound( A, B, C, D, E, f4, K4, W[ 65 ] );
-    subRound( E, A, B, C, D, f4, K4, W[ 66 ] );
-    subRound( D, E, A, B, C, f4, K4, W[ 67 ] );
-    subRound( C, D, E, A, B, f4, K4, W[ 68 ] );
-    subRound( B, C, D, E, A, f4, K4, W[ 69 ] );
-    subRound( A, B, C, D, E, f4, K4, W[ 70 ] );
-    subRound( E, A, B, C, D, f4, K4, W[ 71 ] );
-    subRound( D, E, A, B, C, f4, K4, W[ 72 ] );
-    subRound( C, D, E, A, B, f4, K4, W[ 73 ] );
-    subRound( B, C, D, E, A, f4, K4, W[ 74 ] );
-    subRound( A, B, C, D, E, f4, K4, W[ 75 ] );
-    subRound( E, A, B, C, D, f4, K4, W[ 76 ] );
-    subRound( D, E, A, B, C, f4, K4, W[ 77 ] );
-    subRound( C, D, E, A, B, f4, K4, W[ 78 ] );
-    subRound( B, C, D, E, A, f4, K4, W[ 79 ] );
-#else
-#error Illegal SHA_CODE_SIZE
-#endif
-
-    /* Build message digest */
-    digest[ 0 ] += A;
-    digest[ 1 ] += B;
-    digest[ 2 ] += C;
-    digest[ 3 ] += D;
-    digest[ 4 ] += E;
-
-	/* W is wiped by the caller */
-#undef W
-}
-
-#undef ROTL
-#undef f1
-#undef f2
-#undef f3
-#undef f4
-#undef K1	
-#undef K2
-#undef K3	
-#undef K4	
-#undef subRound
-	
-#else /* !USE_SHA - Use MD5 */
-
-#define HASH_BUFFER_SIZE 4
-#define HASH_EXTRA_SIZE 0
-#define HASH_TRANSFORM MD5Transform
-	
-/*
- * MD5 transform algorithm, taken from code written by Colin Plumb,
- * and put into the public domain
- */
-
-/* The four core functions - F1 is optimized somewhat */
-
-/* #define F1(x, y, z) (x & y | ~x & z) */
-#define F1(x, y, z) (z ^ (x & (y ^ z)))
-#define F2(x, y, z) F1(z, x, y)
-#define F3(x, y, z) (x ^ y ^ z)
-#define F4(x, y, z) (y ^ (x | ~z))
-
-/* This is the central step in the MD5 algorithm. */
-#define MD5STEP(f, w, x, y, z, data, s) \
-	( w += f(x, y, z) + data,  w = w<<s | w>>(32-s),  w += x )
-
-/*
- * The core of the MD5 algorithm, this alters an existing MD5 hash to
- * reflect the addition of 16 longwords of new data.  MD5Update blocks
- * the data and converts bytes into longwords for this routine.
- */
-static void MD5Transform(__u32 buf[HASH_BUFFER_SIZE], __u32 const in[16])
-{
-	__u32 a, b, c, d;
-
-	a = buf[0];
-	b = buf[1];
-	c = buf[2];
-	d = buf[3];
-
-	MD5STEP(F1, a, b, c, d, in[ 0]+0xd76aa478,  7);
-	MD5STEP(F1, d, a, b, c, in[ 1]+0xe8c7b756, 12);
-	MD5STEP(F1, c, d, a, b, in[ 2]+0x242070db, 17);
-	MD5STEP(F1, b, c, d, a, in[ 3]+0xc1bdceee, 22);
-	MD5STEP(F1, a, b, c, d, in[ 4]+0xf57c0faf,  7);
-	MD5STEP(F1, d, a, b, c, in[ 5]+0x4787c62a, 12);
-	MD5STEP(F1, c, d, a, b, in[ 6]+0xa8304613, 17);
-	MD5STEP(F1, b, c, d, a, in[ 7]+0xfd469501, 22);
-	MD5STEP(F1, a, b, c, d, in[ 8]+0x698098d8,  7);
-	MD5STEP(F1, d, a, b, c, in[ 9]+0x8b44f7af, 12);
-	MD5STEP(F1, c, d, a, b, in[10]+0xffff5bb1, 17);
-	MD5STEP(F1, b, c, d, a, in[11]+0x895cd7be, 22);
-	MD5STEP(F1, a, b, c, d, in[12]+0x6b901122,  7);
-	MD5STEP(F1, d, a, b, c, in[13]+0xfd987193, 12);
-	MD5STEP(F1, c, d, a, b, in[14]+0xa679438e, 17);
-	MD5STEP(F1, b, c, d, a, in[15]+0x49b40821, 22);
-
-	MD5STEP(F2, a, b, c, d, in[ 1]+0xf61e2562,  5);
-	MD5STEP(F2, d, a, b, c, in[ 6]+0xc040b340,  9);
-	MD5STEP(F2, c, d, a, b, in[11]+0x265e5a51, 14);
-	MD5STEP(F2, b, c, d, a, in[ 0]+0xe9b6c7aa, 20);
-	MD5STEP(F2, a, b, c, d, in[ 5]+0xd62f105d,  5);
-	MD5STEP(F2, d, a, b, c, in[10]+0x02441453,  9);
-	MD5STEP(F2, c, d, a, b, in[15]+0xd8a1e681, 14);
-	MD5STEP(F2, b, c, d, a, in[ 4]+0xe7d3fbc8, 20);
-	MD5STEP(F2, a, b, c, d, in[ 9]+0x21e1cde6,  5);
-	MD5STEP(F2, d, a, b, c, in[14]+0xc33707d6,  9);
-	MD5STEP(F2, c, d, a, b, in[ 3]+0xf4d50d87, 14);
-	MD5STEP(F2, b, c, d, a, in[ 8]+0x455a14ed, 20);
-	MD5STEP(F2, a, b, c, d, in[13]+0xa9e3e905,  5);
-	MD5STEP(F2, d, a, b, c, in[ 2]+0xfcefa3f8,  9);
-	MD5STEP(F2, c, d, a, b, in[ 7]+0x676f02d9, 14);
-	MD5STEP(F2, b, c, d, a, in[12]+0x8d2a4c8a, 20);
-
-	MD5STEP(F3, a, b, c, d, in[ 5]+0xfffa3942,  4);
-	MD5STEP(F3, d, a, b, c, in[ 8]+0x8771f681, 11);
-	MD5STEP(F3, c, d, a, b, in[11]+0x6d9d6122, 16);
-	MD5STEP(F3, b, c, d, a, in[14]+0xfde5380c, 23);
-	MD5STEP(F3, a, b, c, d, in[ 1]+0xa4beea44,  4);
-	MD5STEP(F3, d, a, b, c, in[ 4]+0x4bdecfa9, 11);
-	MD5STEP(F3, c, d, a, b, in[ 7]+0xf6bb4b60, 16);
-	MD5STEP(F3, b, c, d, a, in[10]+0xbebfbc70, 23);
-	MD5STEP(F3, a, b, c, d, in[13]+0x289b7ec6,  4);
-	MD5STEP(F3, d, a, b, c, in[ 0]+0xeaa127fa, 11);
-	MD5STEP(F3, c, d, a, b, in[ 3]+0xd4ef3085, 16);
-	MD5STEP(F3, b, c, d, a, in[ 6]+0x04881d05, 23);
-	MD5STEP(F3, a, b, c, d, in[ 9]+0xd9d4d039,  4);
-	MD5STEP(F3, d, a, b, c, in[12]+0xe6db99e5, 11);
-	MD5STEP(F3, c, d, a, b, in[15]+0x1fa27cf8, 16);
-	MD5STEP(F3, b, c, d, a, in[ 2]+0xc4ac5665, 23);
-
-	MD5STEP(F4, a, b, c, d, in[ 0]+0xf4292244,  6);
-	MD5STEP(F4, d, a, b, c, in[ 7]+0x432aff97, 10);
-	MD5STEP(F4, c, d, a, b, in[14]+0xab9423a7, 15);
-	MD5STEP(F4, b, c, d, a, in[ 5]+0xfc93a039, 21);
-	MD5STEP(F4, a, b, c, d, in[12]+0x655b59c3,  6);
-	MD5STEP(F4, d, a, b, c, in[ 3]+0x8f0ccc92, 10);
-	MD5STEP(F4, c, d, a, b, in[10]+0xffeff47d, 15);
-	MD5STEP(F4, b, c, d, a, in[ 1]+0x85845dd1, 21);
-	MD5STEP(F4, a, b, c, d, in[ 8]+0x6fa87e4f,  6);
-	MD5STEP(F4, d, a, b, c, in[15]+0xfe2ce6e0, 10);
-	MD5STEP(F4, c, d, a, b, in[ 6]+0xa3014314, 15);
-	MD5STEP(F4, b, c, d, a, in[13]+0x4e0811a1, 21);
-	MD5STEP(F4, a, b, c, d, in[ 4]+0xf7537e82,  6);
-	MD5STEP(F4, d, a, b, c, in[11]+0xbd3af235, 10);
-	MD5STEP(F4, c, d, a, b, in[ 2]+0x2ad7d2bb, 15);
-	MD5STEP(F4, b, c, d, a, in[ 9]+0xeb86d391, 21);
-
-	buf[0] += a;
-	buf[1] += b;
-	buf[2] += c;
-	buf[3] += d;
-}
-
-#undef F1
-#undef F2
-#undef F3
-#undef F4
-#undef MD5STEP
-
-#endif /* !USE_SHA */
-
 /*********************************************************************
  *
  * Entropy extraction routines
@@ -1305,37 +935,63 @@
 #define EXTRACT_ENTROPY_USER		1
 #define EXTRACT_ENTROPY_SECONDARY	2
 #define EXTRACT_ENTROPY_LIMIT		4
-#define TMP_BUF_SIZE			(HASH_BUFFER_SIZE + HASH_EXTRA_SIZE)
-#define SEC_XFER_SIZE			(TMP_BUF_SIZE*4)
+#define CRYPTO_MAX_BLOCK_SIZE		32
 
 static ssize_t extract_entropy(struct entropy_store *r, void * buf,
 			       size_t nbytes, int flags);
 
+static inline void increment_iv(unsigned char *IV, const unsigned int IVsize) {
+	unsigned int i;
+	for (i=0; i<IVsize; i++) {
+		if ( ++(IV[i]) ) {
+			break;
+		}
+	}
+}
+
 /*
- * This utility inline function is responsible for transfering entropy
- * from the primary pool to the secondary extraction pool. We make
- * sure we pull enough for a 'catastrophic reseed'.
- */
-static inline void xfer_secondary_pool(struct entropy_store *r,
-				       size_t nbytes, __u32 *tmp)
-{
-	if (r->entropy_count < nbytes * 8 &&
-	    r->entropy_count < r->poolinfo.POOLBITS) {
-		int bytes = max_t(int, random_read_wakeup_thresh / 8,
-				min_t(int, nbytes, TMP_BUF_SIZE));
-
-		DEBUG_ENT("%04d %04d : going to reseed %s with %d bits "
-			  "(%d of %d requested)\n",
-			  random_state->entropy_count,
-			  sec_random_state->entropy_count,
-			  r == sec_random_state ? "secondary" : "unknown",
-			  bytes * 8, nbytes * 8, r->entropy_count);
-
-		bytes=extract_entropy(random_state, tmp, bytes,
-				      EXTRACT_ENTROPY_LIMIT);
-		add_entropy_words(r, tmp, bytes);
-		credit_entropy_store(r, bytes*8);
+ * Fortuna's Reseed is ...
+ */
+static void random_reseed(struct entropy_store *r) {
+	struct scatterlist sg[1];
+	int i;
+	unsigned char tmp[RANDOM_MAX_DIGEST_SIZE];
+
+	r->reseed_count++;
+
+	crypto_digest_init(r->reseedHash);
+
+	sg[0].page = virt_to_page(r->key);
+	sg[0].offset = offset_in_page(r->key);
+	sg[0].length = r->keysize;
+	crypto_digest_update(r->reseedHash, sg, 1);
+
+#define TESTBIT(VAL, N)\
+  ( ((VAL) >> (N)) & 1 )
+	for (i=0; i<(1<<r->pool_number); i++) {
+		/* using pool[i] if r->reseed_count is divisible by 2^i
+		 * since 2^0 == 1, we always use pool[0]
+		 */
+		if ( (i==0)  ||  TESTBIT(r->reseed_count,i)==0 ) {
+			crypto_digest_final(r->pools[i], tmp);
+
+			sg[0].page = virt_to_page(tmp);
+			sg[0].offset = offset_in_page(tmp);
+			sg[0].length = r->keysize;
+			crypto_digest_update(r->reseedHash, sg, 1);
+
+			crypto_digest_init(r->pools[i]);
+			crypto_digest_update(r->pools[i], sg, 1); /* should each pool carry it's past state forward? */
+		} else {
+			/* pool N can only be used once every 2^N times */
+			break;
+		}
 	}
+#undef TESTBIT
+
+	crypto_digest_final(r->reseedHash, r->key);
+	crypto_cipher_setkey(r->cipher, r->key, r->keysize);
+	increment_iv(r->iv, r->blocksize);
 }
 
 /*
@@ -1355,119 +1011,76 @@
 			       size_t nbytes, int flags)
 {
 	ssize_t ret, i;
-	__u32 tmp[TMP_BUF_SIZE];
-	__u32 x;
+	__u32 tmp[CRYPTO_MAX_BLOCK_SIZE];
 	unsigned long cpuflags;
+	struct scatterlist sgiv[1],
+			   sgtmp[1];
 
-
-	/* Redundant, but just in case... */
-	if (r->entropy_count > r->poolinfo.POOLBITS)
-		r->entropy_count = r->poolinfo.POOLBITS;
-
-	if (flags & EXTRACT_ENTROPY_SECONDARY)
-		xfer_secondary_pool(r, nbytes, tmp);
-
-	/* Hold lock while accounting */
+	/* lock while we're reseeding */
 	spin_lock_irqsave(&r->lock, cpuflags);
 
-	DEBUG_ENT("%04d %04d : trying to extract %d bits from %s\n",
-		  random_state->entropy_count,
-		  sec_random_state->entropy_count,
-		  nbytes * 8,
-		  r == sec_random_state ? "secondary" :
-		  r == random_state ? "primary" : "unknown");
-
-	if (flags & EXTRACT_ENTROPY_LIMIT && nbytes >= r->entropy_count / 8)
-		nbytes = r->entropy_count / 8;
-
-	if (r->entropy_count / 8 >= nbytes)
-		r->entropy_count -= nbytes*8;
-	else
-		r->entropy_count = 0;
+	random_reseed(r);
+	r->pool0_len = 0;
 
-	if (r->entropy_count < random_write_wakeup_thresh)
-		wake_up_interruptible(&random_write_wait);
+	spin_unlock_irqrestore(&r->lock, cpuflags);
 
-	DEBUG_ENT("%04d %04d : debiting %d bits from %s%s\n",
-		  random_state->entropy_count,
-		  sec_random_state->entropy_count,
-		  nbytes * 8,
-		  r == sec_random_state ? "secondary" :
-		  r == random_state ? "primary" : "unknown",
-		  flags & EXTRACT_ENTROPY_LIMIT ? "" : " (unlimited)");
+	/*
+	 * don't output any data until we reseed at least once
+	 * But this causes problems at boot-time.  So weĺl assume since they
+	 * don't wait to the PRNG to setup, they don't really new strong random
+	 * data
+	*/
+	/*
+	if (r->reseed_count == 0)
+		return 0;
+	*/
 
-	spin_unlock_irqrestore(&r->lock, cpuflags);
+	sgiv[0].page = virt_to_page(r->iv);
+	sgiv[0].offset = offset_in_page(r->iv);
+	sgiv[0].length = r->blocksize;
+	sgtmp[0].page = virt_to_page(tmp);
+	sgtmp[0].offset = offset_in_page(tmp);
+	sgtmp[0].length = r->blocksize;
 
 	ret = 0;
 	while (nbytes) {
-		/*
-		 * Check if we need to break out or reschedule....
-		 */
-		if ((flags & EXTRACT_ENTROPY_USER) && need_resched()) {
-			if (signal_pending(current)) {
-				if (ret == 0)
-					ret = -ERESTARTSYS;
-				break;
-			}
-
-			DEBUG_ENT("%04d %04d : extract feeling sleepy (%d bytes left)\n",
-				  random_state->entropy_count,
-				  sec_random_state->entropy_count, nbytes);
-
-			schedule();
-
-			DEBUG_ENT("%04d %04d : extract woke up\n",
-				  random_state->entropy_count,
-				  sec_random_state->entropy_count);
-		}
+		crypto_cipher_encrypt(r->cipher, sgtmp, sgiv, r->blocksize);
+		increment_iv(r->iv, r->blocksize);
 
-		/* Hash the pool to get the output */
-		tmp[0] = 0x67452301;
-		tmp[1] = 0xefcdab89;
-		tmp[2] = 0x98badcfe;
-		tmp[3] = 0x10325476;
-#ifdef USE_SHA
-		tmp[4] = 0xc3d2e1f0;
-#endif
-		/*
-		 * As we hash the pool, we mix intermediate values of
-		 * the hash back into the pool.  This eliminates
-		 * backtracking attacks (where the attacker knows
-		 * the state of the pool plus the current outputs, and
-		 * attempts to find previous ouputs), unless the hash
-		 * function can be inverted.
-		 */
-		for (i = 0, x = 0; i < r->poolinfo.poolwords; i += 16, x+=2) {
-			HASH_TRANSFORM(tmp, r->pool+i);
-			add_entropy_words(r, &tmp[x%HASH_BUFFER_SIZE], 1);
-		}
-		
-		/*
-		 * In case the hash function has some recognizable
-		 * output pattern, we fold it in half.
-		 */
-		for (i = 0; i <  HASH_BUFFER_SIZE/2; i++)
-			tmp[i] ^= tmp[i + (HASH_BUFFER_SIZE+1)/2];
-#if HASH_BUFFER_SIZE & 1	/* There's a middle word to deal with */
-		x = tmp[HASH_BUFFER_SIZE/2];
-		x ^= (x >> 16);		/* Fold it in half */
-		((__u16 *)tmp)[HASH_BUFFER_SIZE-1] = (__u16)x;
-#endif
-		
 		/* Copy data to destination buffer */
-		i = min(nbytes, HASH_BUFFER_SIZE*sizeof(__u32)/2);
+		i = (nbytes < 16) ? nbytes : 16;
 		if (flags & EXTRACT_ENTROPY_USER) {
 			i -= copy_to_user(buf, (__u8 const *)tmp, i);
 			if (!i) {
 				ret = -EFAULT;
 				break;
 			}
-		} else
+		} else {
 			memcpy(buf, (__u8 const *)tmp, i);
+		}
 		nbytes -= i;
 		buf += i;
 		ret += i;
 	}
+	
+	/* generate a new key */
+	/* take into account the possibility that keysize >= blocksize */
+	for (i=0; i+r->blocksize<=r->keysize; i+=r->blocksize) {
+		sgtmp[0].page = virt_to_page( r->key+i );
+		sgtmp[0].offset = offset_in_page( r->key+i );
+		sgtmp[0].length = r->blocksize;
+		crypto_cipher_encrypt(r->cipher, sgtmp, sgiv, 1);
+		increment_iv(r->iv, r->blocksize);
+	}
+	sgtmp[0].page = virt_to_page( r->key+i );
+	sgtmp[0].offset = offset_in_page( r->key+i );
+	sgtmp[0].length = r->blocksize-i;
+	crypto_cipher_encrypt(r->cipher, sgtmp, sgiv, 1);
+	increment_iv(r->iv, r->blocksize);
+
+	if (crypto_cipher_setkey(r->cipher, r->key, r->keysize)) {
+		return -EINVAL;
+	}
 
 	/* Wipe data just returned from memory */
 	memset(tmp, 0, sizeof(tmp));
@@ -1482,10 +1095,7 @@
  */
 void get_random_bytes(void *buf, int nbytes)
 {
-	if (sec_random_state)  
-		extract_entropy(sec_random_state, (char *) buf, nbytes, 
-				EXTRACT_ENTROPY_SECONDARY);
-	else if (random_state)
+	if (random_state)
 		extract_entropy(random_state, (char *) buf, nbytes, 0);
 	else
 		printk(KERN_NOTICE "get_random_bytes called before "
@@ -1500,57 +1110,16 @@
  *
  *********************************************************************/
 
-/*
- * Initialize the random pool with standard stuff.
- *
- * NOTE: This is an OS-dependent function.
- */
-static void init_std_data(struct entropy_store *r)
-{
-	struct timeval 	tv;
-	__u32		words[2];
-	char 		*p;
-	int		i;
-
-	do_gettimeofday(&tv);
-	words[0] = tv.tv_sec;
-	words[1] = tv.tv_usec;
-	add_entropy_words(r, words, 2);
-
-	/*
-	 *	This doesn't lock system.utsname. However, we are generating
-	 *	entropy so a race with a name set here is fine.
-	 */
-	p = (char *) &system_utsname;
-	for (i = sizeof(system_utsname) / sizeof(words); i; i--) {
-		memcpy(words, p, sizeof(words));
-		add_entropy_words(r, words, sizeof(words)/4);
-		p += sizeof(words);
-	}
-}
-
 static int __init rand_initialize(void)
 {
-	int i;
-
-	if (create_entropy_store(DEFAULT_POOL_SIZE, &random_state))
-		goto err;
-	if (batch_entropy_init(BATCH_ENTROPY_SIZE, random_state))
-		goto err;
-	if (create_entropy_store(SECONDARY_POOL_SIZE, &sec_random_state))
+	if (create_entropy_store(DEFAULT_POOL_NUMBER, &random_state))
 		goto err;
-	clear_entropy_store(random_state);
-	clear_entropy_store(sec_random_state);
-	init_std_data(random_state);
+        if (batch_entropy_init(BATCH_ENTROPY_SIZE, random_state))
+                goto err;
+
 #ifdef CONFIG_SYSCTL
 	sysctl_init_random(random_state);
 #endif
-	for (i = 0; i < NR_IRQS; i++)
-		irq_timer_state[i] = NULL;
-	memset(&keyboard_timer_state, 0, sizeof(struct timer_rand_state));
-	memset(&mouse_timer_state, 0, sizeof(struct timer_rand_state));
-	memset(&extract_timer_state, 0, sizeof(struct timer_rand_state));
-	extract_timer_state.dont_count_entropy = 1;
 	return 0;
 err:
 	return -1;
@@ -1559,139 +1128,33 @@
 
 void rand_initialize_irq(int irq)
 {
-	struct timer_rand_state *state;
-	
-	if (irq >= NR_IRQS || irq_timer_state[irq])
-		return;
-
-	/*
-	 * If kmalloc returns null, we just won't use that entropy
-	 * source.
-	 */
-	state = kmalloc(sizeof(struct timer_rand_state), GFP_KERNEL);
-	if (state) {
-		memset(state, 0, sizeof(struct timer_rand_state));
-		irq_timer_state[irq] = state;
-	}
+	/* we don't use timers anymore, we just use the current time */
 }
  
 void rand_initialize_disk(struct gendisk *disk)
 {
-	struct timer_rand_state *state;
-	
-	/*
-	 * If kmalloc returns null, we just won't use that entropy
-	 * source.
-	 */
-	state = kmalloc(sizeof(struct timer_rand_state), GFP_KERNEL);
-	if (state) {
-		memset(state, 0, sizeof(struct timer_rand_state));
-		disk->random = state;
-	}
-}
-
-static ssize_t
-random_read(struct file * file, char __user * buf, size_t nbytes, loff_t *ppos)
-{
-	DECLARE_WAITQUEUE(wait, current);
-	ssize_t			n, retval = 0, count = 0;
-	
-	if (nbytes == 0)
-		return 0;
-
-	while (nbytes > 0) {
-		n = nbytes;
-		if (n > SEC_XFER_SIZE)
-			n = SEC_XFER_SIZE;
-
-		DEBUG_ENT("%04d %04d : reading %d bits, p: %d s: %d\n",
-			  random_state->entropy_count,
-			  sec_random_state->entropy_count,
-			  n*8, random_state->entropy_count,
-			  sec_random_state->entropy_count);
-
-		n = extract_entropy(sec_random_state, buf, n,
-				    EXTRACT_ENTROPY_USER |
-				    EXTRACT_ENTROPY_LIMIT |
-				    EXTRACT_ENTROPY_SECONDARY);
-
-		DEBUG_ENT("%04d %04d : read got %d bits (%d still needed)\n",
-			  random_state->entropy_count,
-			  sec_random_state->entropy_count,
-			  n*8, (nbytes-n)*8);
-
-		if (n == 0) {
-			if (file->f_flags & O_NONBLOCK) {
-				retval = -EAGAIN;
-				break;
-			}
-			if (signal_pending(current)) {
-				retval = -ERESTARTSYS;
-				break;
-			}
-
-			DEBUG_ENT("%04d %04d : sleeping?\n",
-				  random_state->entropy_count,
-				  sec_random_state->entropy_count);
-
-			set_current_state(TASK_INTERRUPTIBLE);
-			add_wait_queue(&random_read_wait, &wait);
-
-			if (sec_random_state->entropy_count / 8 == 0)
-				schedule();
-
-			set_current_state(TASK_RUNNING);
-			remove_wait_queue(&random_read_wait, &wait);
-
-			DEBUG_ENT("%04d %04d : waking up\n",
-				  random_state->entropy_count,
-				  sec_random_state->entropy_count);
-
-			continue;
-		}
-
-		if (n < 0) {
-			retval = n;
-			break;
-		}
-		count += n;
-		buf += n;
-		nbytes -= n;
-		break;		/* This break makes the device work */
-				/* like a named pipe */
-	}
-
-	/*
-	 * If we gave the user some bytes, update the access time.
-	 */
-	if (count)
-		file_accessed(file);
-	
-	return (count ? count : retval);
+	/* we don't use timers anymore, we just use the current time */
 }
 
 static ssize_t
 urandom_read(struct file * file, char __user * buf,
 		      size_t nbytes, loff_t *ppos)
 {
-	return extract_entropy(sec_random_state, buf, nbytes,
+	return extract_entropy(random_state, buf, nbytes,
 			       EXTRACT_ENTROPY_USER |
 			       EXTRACT_ENTROPY_SECONDARY);
 }
 
+static ssize_t
+random_read(struct file * file, char __user * buf, size_t nbytes, loff_t *ppos)
+{
+	return urandom_read(file, buf, nbytes, ppos);
+}
+
 static unsigned int
 random_poll(struct file *file, poll_table * wait)
 {
-	unsigned int mask;
-
-	poll_wait(file, &random_read_wait, wait);
-	poll_wait(file, &random_write_wait, wait);
-	mask = 0;
-	if (random_state->entropy_count >= random_read_wakeup_thresh)
-		mask |= POLLIN | POLLRDNORM;
-	if (random_state->entropy_count < random_write_wakeup_thresh)
-		mask |= POLLOUT | POLLWRNORM;
-	return mask;
+	return POLLIN | POLLRDNORM  |  POLLOUT | POLLWRNORM;
 }
 
 static ssize_t
@@ -1701,12 +1164,13 @@
 	int		ret = 0;
 	size_t		bytes;
 	__u32 		buf[16];
-	const char 	__user *p = buffer;
+	const char __user	*p = buffer;
 	size_t		c = count;
 
 	while (c > 0) {
 		bytes = min(c, sizeof(buf));
 
+DEBUG_PRINTK("random_write() %p, %p, %u\n", &buf, p, bytes);
 		bytes -= copy_from_user(&buf, p, bytes);
 		if (!bytes) {
 			ret = -EFAULT;
@@ -1730,67 +1194,25 @@
 random_ioctl(struct inode * inode, struct file * file,
 	     unsigned int cmd, unsigned long arg)
 {
-	int *tmp, size, ent_count;
-	int __user *p = (int __user *)arg;
+	int size, ent_count;
+	int __user *p = (int __user *) arg;
 	int retval;
-	unsigned long flags;
 	
 	switch (cmd) {
 	case RNDGETENTCNT:
-		ent_count = random_state->entropy_count;
-		if (put_user(ent_count, p))
+		if (put_user(random_entropy_count, p))
 			return -EFAULT;
 		return 0;
 	case RNDADDTOENTCNT:
-		if (!capable(CAP_SYS_ADMIN))
-			return -EPERM;
-		if (get_user(ent_count, p))
-			return -EFAULT;
-		credit_entropy_store(random_state, ent_count);
-		/*
-		 * Wake up waiting processes if we have enough
-		 * entropy.
-		 */
-		if (random_state->entropy_count >= random_read_wakeup_thresh)
-			wake_up_interruptible(&random_read_wait);
+		/* entropy accounting removed. */
 		return 0;
 	case RNDGETPOOL:
-		if (!capable(CAP_SYS_ADMIN))
-			return -EPERM;
-		if (get_user(size, p) ||
-		    put_user(random_state->poolinfo.poolwords, p++))
-			return -EFAULT;
-		if (size < 0)
-			return -EFAULT;
-		if (size > random_state->poolinfo.poolwords)
-			size = random_state->poolinfo.poolwords;
-
-		/* prepare to atomically snapshot pool */
-
-		tmp = kmalloc(size * sizeof(__u32), GFP_KERNEL);
-
-		if (!tmp)
-			return -ENOMEM;
-
-		spin_lock_irqsave(&random_state->lock, flags);
-		ent_count = random_state->entropy_count;
-		memcpy(tmp, random_state->pool, size * sizeof(__u32));
-		spin_unlock_irqrestore(&random_state->lock, flags);
-
-		if (!copy_to_user(p, tmp, size * sizeof(__u32))) {
-			kfree(tmp);
-			return -EFAULT;
-		}
-
-		kfree(tmp);
-
-		if(put_user(ent_count, p++))
-			return -EFAULT;
-
+		/* jlcooke: never get the raw pool!!! */
 		return 0;
 	case RNDADDENTROPY:
 		if (!capable(CAP_SYS_ADMIN))
 			return -EPERM;
+		p = (int *) arg;
 		if (get_user(ent_count, p++))
 			return -EFAULT;
 		if (ent_count < 0)
@@ -1801,25 +1223,12 @@
 				      size, &file->f_pos);
 		if (retval < 0)
 			return retval;
-		credit_entropy_store(random_state, ent_count);
-		/*
-		 * Wake up waiting processes if we have enough
-		 * entropy.
-		 */
-		if (random_state->entropy_count >= random_read_wakeup_thresh)
-			wake_up_interruptible(&random_read_wait);
 		return 0;
 	case RNDZAPENTCNT:
-		if (!capable(CAP_SYS_ADMIN))
-			return -EPERM;
-		random_state->entropy_count = 0;
+		/* entropy accounting removed. */
 		return 0;
 	case RNDCLEARPOOL:
-		/* Clear the entropy pool and associated counters. */
-		if (!capable(CAP_SYS_ADMIN))
-			return -EPERM;
-		clear_entropy_store(random_state);
-		init_std_data(random_state);
+		/* jlcooke: this is maddness! Never clear the entropy pool */
 		return 0;
 	default:
 		return -EINVAL;
@@ -1875,71 +1284,96 @@
 static int min_write_thresh, max_write_thresh;
 static char sysctl_bootid[16];
 
-/*
- * This function handles a request from the user to change the pool size 
- * of the primary entropy store.
- */
-static int change_poolsize(int poolsize)
-{
-	struct entropy_store	*new_store, *old_store;
-	int			ret;
-	
-	if ((ret = create_entropy_store(poolsize, &new_store)))
-		return ret;
-
-	add_entropy_words(new_store, random_state->pool,
-			  random_state->poolinfo.poolwords);
-	credit_entropy_store(new_store, random_state->entropy_count);
-
-	sysctl_init_random(new_store);
-	old_store = random_state;
-	random_state = batch_work.data = new_store;
-	free_entropy_store(old_store);
-	return 0;
-}
-
 static int proc_do_poolsize(ctl_table *table, int write, struct file *filp,
 			    void __user *buffer, size_t *lenp, loff_t *ppos)
 {
-	int	ret;
+	int ret;
 
-	sysctl_poolsize = random_state->poolinfo.POOLBYTES;
+	if (write) {
+		// you can't change the poolsize, but we'll let you think you can for legacy reasons.
+		return 0;
+	}
 
+	sysctl_poolsize = (1<<random_state->pool_number) * random_state->pools[0]->__crt_alg->cra_ctxsize;
 	ret = proc_dointvec(table, write, filp, buffer, lenp, ppos);
-	if (ret || !write ||
-	    (sysctl_poolsize == random_state->poolinfo.POOLBYTES))
-		return ret;
 
-	return change_poolsize(sysctl_poolsize);
+	return ret;
 }
 
-static int poolsize_strategy(ctl_table *table, int __user *name, int nlen,
+static int poolsize_strategy(ctl_table *table, int *name, int nlen,
 			     void __user *oldval, size_t __user *oldlenp,
 			     void __user *newval, size_t newlen, void **context)
 {
-	int	len;
-	
-	sysctl_poolsize = random_state->poolinfo.POOLBYTES;
+	/* you can't set a poolsize strtegy because it doesn't change in size anymore */
+	return 0;
+}
 
-	/*
-	 * We only handle the write case, since the read case gets
-	 * handled by the default handler (and we don't care if the
-	 * write case happens twice; it's harmless).
-	 */
-	if (newval && newlen) {
-		len = newlen;
-		if (len > table->maxlen)
-			len = table->maxlen;
-		if (copy_from_user(table->data, newval, len))
-			return -EFAULT;
+static int proc_derive_seed(ctl_table *table, int write, struct file *filp,
+				void __user *buffer, size_t *lenp, loff_t *ppos)
+{
+	static unsigned char	*hextab = "0123456789abcdef";
+	static unsigned int	derive_count=0;
+	static unsigned char	buf[(1<<MAXIMUM_POOL_NUMBER) * RANDOM_MAX_DIGEST_SIZE *8/4]; /* hex length of derived seed */
+	unsigned long flags;
+	ctl_table       fake_table;
+	unsigned char   tmp[RANDOM_MAX_DIGEST_SIZE];
+	unsigned int	i,j;
+	struct scatterlist sg[3];
+	int ret;
+	void *p;
+
+DEBUG_PRINTK("proc_derive_seed() 0\n");
+
+	spin_lock_irqsave(&random_state->lock, flags);
+	random_state->pool0_len = 0;
+
+	memset(buf, 0, random_state->pool_number * 2*random_state->digestsize);
+
+	/* the carry-state from pool to pool */
+	memset(tmp, 0, random_state->digestsize);
+
+	for (i=0; i<(1<<random_state->pool_number); i++) {
+		crypto_digest_init(random_state->reseedHash);
+
+		/* carry the digest from the previous output so a derive seed from a lightly seeded state is indistinguishavble from a heavily seeded one */
+		p = &tmp;
+		sg[0].page = virt_to_page(p);
+		sg[0].offset = offset_in_page(p);
+		sg[0].length = sizeof(tmp);
+
+		/* finalize and digest the i-th pool */
+		crypto_digest_final(random_state->pools[i], tmp);
+		crypto_digest_init(random_state->pools[i]);
+		p = &tmp;
+		sg[1].page = virt_to_page(p);
+		sg[1].offset = offset_in_page(p);
+		sg[1].length = sizeof(tmp);
+
+		/* digest in a counter to ensure the final hash can change even if the message does not */
+		p = &derive_count;
+		sg[2].page = virt_to_page(p);
+		sg[2].offset = offset_in_page(p);
+		sg[2].length = sizeof(derive_count);
+
+		crypto_digest_digest(random_state->reseedHash, sg, 3, tmp);
+		for (j=0; j<random_state->digestsize; j++) {
+			buf[2*(i*random_state->digestsize +j)  ] = hextab[ (tmp[j] >> 4) & 0xf ];
+			buf[2*(i*random_state->digestsize +j)+1] = hextab[ (tmp[j]     ) & 0xf ];
+		}
+		derive_count++;
 	}
 
-	if (sysctl_poolsize != random_state->poolinfo.POOLBYTES)
-		return change_poolsize(sysctl_poolsize);
+	spin_unlock_irqrestore(&random_state->lock, flags);
 
-	return 0;
+	fake_table.data = buf;
+	fake_table.maxlen = (1<<random_state->pool_number) * 2*random_state->digestsize;
+
+	ret = proc_dostring(&fake_table, write, filp, buffer, lenp, ppos);
+
+	return ret;
 }
 
+
 /*
  * These functions is used to return both the bootid UUID, and random
  * UUID.  The difference is in whether table->data is NULL; if it is,
@@ -1975,7 +1409,7 @@
 	return proc_dostring(&fake_table, write, filp, buffer, lenp, ppos);
 }
 
-static int uuid_strategy(ctl_table *table, int __user *name, int nlen,
+static int uuid_strategy(ctl_table *table, int *name, int nlen,
 			 void __user *oldval, size_t __user *oldlenp,
 			 void __user *newval, size_t newlen, void **context)
 {
@@ -2011,38 +1445,38 @@
 		.procname	= "poolsize",
 		.data		= &sysctl_poolsize,
 		.maxlen		= sizeof(int),
-		.mode		= 0644,
+		.mode		= 0644, // you can't change the poolsize, but we'll let you think you can for legacy reasons.
 		.proc_handler	= &proc_do_poolsize,
 		.strategy	= &poolsize_strategy,
 	},
 	{
-		.ctl_name	= RANDOM_ENTROPY_COUNT,
-		.procname	= "entropy_avail",
-		.maxlen		= sizeof(int),
-		.mode		= 0444,
-		.proc_handler	= &proc_dointvec,
-	},
+		.ctl_name       = RANDOM_ENTROPY_COUNT,
+		.procname       = "entropy_avail",
+		.maxlen         = sizeof(int),
+		.mode           = 0444,
+		.proc_handler   = &proc_dointvec,
+        },
 	{
-		.ctl_name	= RANDOM_READ_THRESH,
-		.procname	= "read_wakeup_threshold",
-		.data		= &random_read_wakeup_thresh,
-		.maxlen		= sizeof(int),
-		.mode		= 0644,
-		.proc_handler	= &proc_dointvec_minmax,
-		.strategy	= &sysctl_intvec,
-		.extra1		= &min_read_thresh,
-		.extra2		= &max_read_thresh,
+		.ctl_name       = RANDOM_READ_THRESH,
+		.procname       = "read_wakeup_threshold",
+		.data           = &random_read_wakeup_thresh,
+		.maxlen         = sizeof(int),
+		.mode           = 0644,
+		.proc_handler   = &proc_dointvec_minmax,
+		.strategy       = &sysctl_intvec,
+		.extra1         = &min_read_thresh,
+		.extra2         = &max_read_thresh,
 	},
 	{
-		.ctl_name	= RANDOM_WRITE_THRESH,
-		.procname	= "write_wakeup_threshold",
-		.data		= &random_write_wakeup_thresh,
-		.maxlen		= sizeof(int),
-		.mode		= 0644,
-		.proc_handler	= &proc_dointvec_minmax,
-		.strategy	= &sysctl_intvec,
-		.extra1		= &min_write_thresh,
-		.extra2		= &max_write_thresh,
+		.ctl_name       = RANDOM_WRITE_THRESH,
+		.procname       = "write_wakeup_threshold",
+		.data           = &random_write_wakeup_thresh,
+		.maxlen         = sizeof(int),
+		.mode           = 0644,
+		.proc_handler   = &proc_dointvec_minmax,
+		.strategy       = &sysctl_intvec,
+		.extra1         = &min_write_thresh,
+		.extra2         = &max_write_thresh,
 	},
 	{
 		.ctl_name	= RANDOM_BOOT_ID,
@@ -2061,15 +1495,23 @@
 		.proc_handler	= &proc_do_uuid,
 		.strategy	= &uuid_strategy,
 	},
+	{
+		.ctl_name	= RANDOM_DERIVE_SEED,
+		.procname	= "derive_seed",
+		.maxlen		= MAXIMUM_POOL_NUMBER * RANDOM_MAX_DIGEST_SIZE,
+		.mode		= 0400,
+		.proc_handler	= &proc_derive_seed,
+	},
 	{ .ctl_name = 0 }
 };
 
-static void sysctl_init_random(struct entropy_store *random_state)
+static void sysctl_init_random(struct entropy_store *r)
 {
 	min_read_thresh = 8;
 	min_write_thresh = 0;
-	max_read_thresh = max_write_thresh = random_state->poolinfo.POOLBITS;
-	random_table[1].data = &random_state->entropy_count;
+	random_entropy_count =
+	max_read_thresh = max_write_thresh = (1<<r->pool_number) * r->pools[0]->__crt_alg->cra_ctxsize;
+	random_table[1].data = &random_entropy_count;
 }
 #endif 	/* CONFIG_SYSCTL */
 
@@ -2092,124 +1534,6 @@
  * compensated for by changing the secret periodically.
  */
 
-/* F, G and H are basic MD4 functions: selection, majority, parity */
-#define F(x, y, z) ((z) ^ ((x) & ((y) ^ (z))))
-#define G(x, y, z) (((x) & (y)) + (((x) ^ (y)) & (z)))
-#define H(x, y, z) ((x) ^ (y) ^ (z))
-
-/*
- * The generic round function.  The application is so specific that
- * we don't bother protecting all the arguments with parens, as is generally
- * good macro practice, in favor of extra legibility.
- * Rotation is separate from addition to prevent recomputation
- */
-#define ROUND(f, a, b, c, d, x, s)	\
-	(a += f(b, c, d) + x, a = (a << s) | (a >> (32-s)))
-#define K1 0
-#define K2 013240474631UL
-#define K3 015666365641UL
-
-/*
- * Basic cut-down MD4 transform.  Returns only 32 bits of result.
- */
-static __u32 halfMD4Transform (__u32 const buf[4], __u32 const in[8])
-{
-	__u32	a = buf[0], b = buf[1], c = buf[2], d = buf[3];
-
-	/* Round 1 */
-	ROUND(F, a, b, c, d, in[0] + K1,  3);
-	ROUND(F, d, a, b, c, in[1] + K1,  7);
-	ROUND(F, c, d, a, b, in[2] + K1, 11);
-	ROUND(F, b, c, d, a, in[3] + K1, 19);
-	ROUND(F, a, b, c, d, in[4] + K1,  3);
-	ROUND(F, d, a, b, c, in[5] + K1,  7);
-	ROUND(F, c, d, a, b, in[6] + K1, 11);
-	ROUND(F, b, c, d, a, in[7] + K1, 19);
-
-	/* Round 2 */
-	ROUND(G, a, b, c, d, in[1] + K2,  3);
-	ROUND(G, d, a, b, c, in[3] + K2,  5);
-	ROUND(G, c, d, a, b, in[5] + K2,  9);
-	ROUND(G, b, c, d, a, in[7] + K2, 13);
-	ROUND(G, a, b, c, d, in[0] + K2,  3);
-	ROUND(G, d, a, b, c, in[2] + K2,  5);
-	ROUND(G, c, d, a, b, in[4] + K2,  9);
-	ROUND(G, b, c, d, a, in[6] + K2, 13);
-
-	/* Round 3 */
-	ROUND(H, a, b, c, d, in[3] + K3,  3);
-	ROUND(H, d, a, b, c, in[7] + K3,  9);
-	ROUND(H, c, d, a, b, in[2] + K3, 11);
-	ROUND(H, b, c, d, a, in[6] + K3, 15);
-	ROUND(H, a, b, c, d, in[1] + K3,  3);
-	ROUND(H, d, a, b, c, in[5] + K3,  9);
-	ROUND(H, c, d, a, b, in[0] + K3, 11);
-	ROUND(H, b, c, d, a, in[4] + K3, 15);
-
-	return buf[1] + b;	/* "most hashed" word */
-	/* Alternative: return sum of all words? */
-}
-
-#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
-
-static __u32 twothirdsMD4Transform (__u32 const buf[4], __u32 const in[12])
-{
-	__u32	a = buf[0], b = buf[1], c = buf[2], d = buf[3];
-
-	/* Round 1 */
-	ROUND(F, a, b, c, d, in[ 0] + K1,  3);
-	ROUND(F, d, a, b, c, in[ 1] + K1,  7);
-	ROUND(F, c, d, a, b, in[ 2] + K1, 11);
-	ROUND(F, b, c, d, a, in[ 3] + K1, 19);
-	ROUND(F, a, b, c, d, in[ 4] + K1,  3);
-	ROUND(F, d, a, b, c, in[ 5] + K1,  7);
-	ROUND(F, c, d, a, b, in[ 6] + K1, 11);
-	ROUND(F, b, c, d, a, in[ 7] + K1, 19);
-	ROUND(F, a, b, c, d, in[ 8] + K1,  3);
-	ROUND(F, d, a, b, c, in[ 9] + K1,  7);
-	ROUND(F, c, d, a, b, in[10] + K1, 11);
-	ROUND(F, b, c, d, a, in[11] + K1, 19);
-
-	/* Round 2 */
-	ROUND(G, a, b, c, d, in[ 1] + K2,  3);
-	ROUND(G, d, a, b, c, in[ 3] + K2,  5);
-	ROUND(G, c, d, a, b, in[ 5] + K2,  9);
-	ROUND(G, b, c, d, a, in[ 7] + K2, 13);
-	ROUND(G, a, b, c, d, in[ 9] + K2,  3);
-	ROUND(G, d, a, b, c, in[11] + K2,  5);
-	ROUND(G, c, d, a, b, in[ 0] + K2,  9);
-	ROUND(G, b, c, d, a, in[ 2] + K2, 13);
-	ROUND(G, a, b, c, d, in[ 4] + K2,  3);
-	ROUND(G, d, a, b, c, in[ 6] + K2,  5);
-	ROUND(G, c, d, a, b, in[ 8] + K2,  9);
-	ROUND(G, b, c, d, a, in[10] + K2, 13);
-
-	/* Round 3 */
-	ROUND(H, a, b, c, d, in[ 3] + K3,  3);
-	ROUND(H, d, a, b, c, in[ 7] + K3,  9);
-	ROUND(H, c, d, a, b, in[11] + K3, 11);
-	ROUND(H, b, c, d, a, in[ 2] + K3, 15);
-	ROUND(H, a, b, c, d, in[ 6] + K3,  3);
-	ROUND(H, d, a, b, c, in[10] + K3,  9);
-	ROUND(H, c, d, a, b, in[ 1] + K3, 11);
-	ROUND(H, b, c, d, a, in[ 5] + K3, 15);
-	ROUND(H, a, b, c, d, in[ 9] + K3,  3);
-	ROUND(H, d, a, b, c, in[ 0] + K3,  9);
-	ROUND(H, c, d, a, b, in[ 4] + K3, 11);
-	ROUND(H, b, c, d, a, in[ 8] + K3, 15);
-
-	return buf[1] + b;	/* "most hashed" word */
-	/* Alternative: return sum of all words? */
-}
-#endif
-
-#undef ROUND
-#undef F
-#undef G
-#undef H
-#undef K1
-#undef K2
-#undef K3
 
 /* This should not be decreased so low that ISNs wrap too fast. */
 #define REKEY_INTERVAL	300
@@ -2237,79 +1561,67 @@
 #define HASH_BITS	24
 #define HASH_MASK	( (1<<HASH_BITS)-1 )
 
-static struct keydata {
-	time_t rekey_time;
-	__u32	count;		// already shifted to the final position
-	__u32	secret[12];
-} ____cacheline_aligned ip_keydata[2];
-
 static spinlock_t ip_lock = SPIN_LOCK_UNLOCKED;
-static unsigned int ip_cnt;
 
-static struct keydata *__check_and_rekey(time_t time)
+static __u32 network_random_read32(void)
 {
-	struct keydata *keyptr;
+	static u8			ctr[16];    /* max block size? */
+	static struct scatterlist	sgctr[1];
+	static unsigned int		master_count=0;
+	static time_t			lastRekey=0;
+
+	struct scatterlist sgtmp[1];
+	unsigned int	count;
+	unsigned char	tmp[16];
+	struct timeval	tv;
+
+        rmb();
 	spin_lock_bh(&ip_lock);
-	keyptr = &ip_keydata[ip_cnt&1];
-	if (!keyptr->rekey_time || (time - keyptr->rekey_time) > REKEY_INTERVAL) {
-		keyptr = &ip_keydata[1^(ip_cnt&1)];
-		keyptr->rekey_time = time;
-		get_random_bytes(keyptr->secret, sizeof(keyptr->secret));
-		keyptr->count = (ip_cnt&COUNT_MASK)<<HASH_BITS;
+
+	count = ++master_count;
+	increment_iv(ctr, random_state->blocksize);
+
+	do_gettimeofday(&tv);
+	if (lastRekey==0  || (tv.tv_sec - lastRekey) < REKEY_INTERVAL) {
+		lastRekey = tv.tv_sec;
+
+		sgctr[0].page = virt_to_page(ctr);
+		sgctr[0].offset = offset_in_page(ctr);
+		sgctr[0].length = 16;
+
+		if (!random_state->networkCipher_ready) {
+			u8 secret[32]; /* max key size? */
+			get_random_bytes(secret, random_state->keysize);
+			crypto_cipher_setkey(random_state->networkCipher, (const u8*)secret, random_state->keysize);
+			random_state->networkCipher_ready = 1;
+		}
+
 		mb();
-		ip_cnt++;
-	}
-	spin_unlock_bh(&ip_lock);
-	return keyptr;
-}
+        }
 
-static inline struct keydata *check_and_rekey(time_t time)
-{
-	struct keydata *keyptr = &ip_keydata[ip_cnt&1];
+        spin_unlock_bh(&ip_lock);
 
-	rmb();
-	if (!keyptr->rekey_time || (time - keyptr->rekey_time) > REKEY_INTERVAL) {
-		keyptr = __check_and_rekey(time);
-	}
+	sgtmp[0].page = virt_to_page(tmp);
+	sgtmp[0].offset = offset_in_page(tmp);
+	sgtmp[0].length = random_state->blocksize;
+	crypto_cipher_encrypt(random_state->networkCipher, sgtmp, sgctr, 1); /* tmp[]/sg[0] = Enc(Sec, CTR++) */
+	increment_iv(ctr, random_state->blocksize);
 
-	return keyptr;
+	/* seq# needs to be random-ish, but incresing */
+	return tmp[0] + (count << (32-COUNT_BITS));
 }
 
 #if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
 __u32 secure_tcpv6_sequence_number(__u32 *saddr, __u32 *daddr,
 				   __u16 sport, __u16 dport)
 {
-	struct timeval 	tv;
-	__u32		seq;
-	__u32		hash[12];
-	struct keydata *keyptr;
-
-	/* The procedure is the same as for IPv4, but addresses are longer.
-	 * Thus we must use twothirdsMD4Transform.
-	 */
-
-	do_gettimeofday(&tv);	/* We need the usecs below... */
-	keyptr = check_and_rekey(tv.tv_sec);
-
-	memcpy(hash, saddr, 16);
-	hash[4]=(sport << 16) + dport;
-	memcpy(&hash[5],keyptr->secret,sizeof(__u32)*7);
-
-	seq = twothirdsMD4Transform(daddr, hash) & HASH_MASK;
-	seq += keyptr->count;
-	seq += tv.tv_usec + tv.tv_sec*1000000;
-
-	return seq;
+	return network_random_read32();
 }
 EXPORT_SYMBOL(secure_tcpv6_sequence_number);
 
 __u32 secure_ipv6_id(__u32 *daddr)
 {
-	struct keydata *keyptr;
-
-	keyptr = check_and_rekey(get_seconds());
-
-	return halfMD4Transform(daddr, keyptr->secret);
+	return network_random_read32();
 }
 
 EXPORT_SYMBOL(secure_ipv6_id);
@@ -2319,69 +1631,14 @@
 __u32 secure_tcp_sequence_number(__u32 saddr, __u32 daddr,
 				 __u16 sport, __u16 dport)
 {
-	struct timeval 	tv;
-	__u32		seq;
-	__u32	hash[4];
-	struct keydata *keyptr;
-
-	/*
-	 * Pick a random secret every REKEY_INTERVAL seconds.
-	 */
-	do_gettimeofday(&tv);	/* We need the usecs below... */
-	keyptr = check_and_rekey(tv.tv_sec);
-
-	/*
-	 *  Pick a unique starting offset for each TCP connection endpoints
-	 *  (saddr, daddr, sport, dport).
-	 *  Note that the words are placed into the starting vector, which is 
-	 *  then mixed with a partial MD4 over random data.
-	 */
-	hash[0]=saddr;
-	hash[1]=daddr;
-	hash[2]=(sport << 16) + dport;
-	hash[3]=keyptr->secret[11];
-
-	seq = halfMD4Transform(hash, keyptr->secret) & HASH_MASK;
-	seq += keyptr->count;
-	/*
-	 *	As close as possible to RFC 793, which
-	 *	suggests using a 250 kHz clock.
-	 *	Further reading shows this assumes 2 Mb/s networks.
-	 *	For 10 Mb/s Ethernet, a 1 MHz clock is appropriate.
-	 *	That's funny, Linux has one built in!  Use it!
-	 *	(Networks are faster now - should this be increased?)
-	 */
-	seq += tv.tv_usec + tv.tv_sec*1000000;
-#if 0
-	printk("init_seq(%lx, %lx, %d, %d) = %d\n",
-	       saddr, daddr, sport, dport, seq);
-#endif
-	return seq;
+	return network_random_read32();
 }
 
 EXPORT_SYMBOL(secure_tcp_sequence_number);
 
-/*  The code below is shamelessly stolen from secure_tcp_sequence_number().
- *  All blames to Andrey V. Savochkin <saw@msu.ru>.
- */
 __u32 secure_ip_id(__u32 daddr)
 {
-	struct keydata *keyptr;
-	__u32 hash[4];
-
-	keyptr = check_and_rekey(get_seconds());
-
-	/*
-	 *  Pick a unique starting offset for each IP destination.
-	 *  The dest ip address is placed in the starting vector,
-	 *  which is then hashed with random data.
-	 */
-	hash[0] = daddr;
-	hash[1] = keyptr->secret[9];
-	hash[2] = keyptr->secret[10];
-	hash[3] = keyptr->secret[11];
-
-	return halfMD4Transform(hash, keyptr->secret);
+	return network_random_read32();
 }
 
 #ifdef CONFIG_SYN_COOKIES
@@ -2389,6 +1646,8 @@
  * Secure SYN cookie computation. This is the algorithm worked out by
  * Dan Bernstein and Eric Schenk.
  *
+ * Replaced by Jean-Luc Cooke <jlcooke@certainkey.com> and Tom St. Denis <tstdenis@certainkey.com>
+ *
  * For linux I implement the 1 minute counter by looking at the jiffies clock.
  * The count is passed in as a parameter, so this code doesn't much care.
  */
@@ -2396,25 +1655,32 @@
 #define COOKIEBITS 24	/* Upper bits store count */
 #define COOKIEMASK (((__u32)1 << COOKIEBITS) - 1)
 
-static int	syncookie_init;
-static __u32	syncookie_secret[2][16-3+HASH_BUFFER_SIZE];
-
 __u32 secure_tcp_syn_cookie(__u32 saddr, __u32 daddr, __u16 sport,
 		__u16 dport, __u32 sseq, __u32 count, __u32 data)
 {
-	__u32 	tmp[16 + HASH_BUFFER_SIZE + HASH_EXTRA_SIZE];
-	__u32	seq;
-
-	/*
-	 * Pick two random secrets the first time we need a cookie.
-	 */
-	if (syncookie_init == 0) {
-		get_random_bytes(syncookie_secret, sizeof(syncookie_secret));
-		syncookie_init = 1;
-	}
+	struct scatterlist sg[1];
+	__u32	tmp[4];
 
 	/*
 	 * Compute the secure sequence number.
+	 * 
+	 * jlcooke
+	 * Output is the 32bit tag of a CBC-MAC of PT={count,0,0,0} with IV={addr,daddr,sport|dport,sseq}
+	 *   cookie = <8bit count> || truncate_24bit( Encrypt(Sec, {saddr,daddr,sport|dport,sseq}) )
+	 * 
+	 * DJB wrote (http://cr.yp.to/syncookies/archive) about how to do this with hash algorithms
+	 * - we can replace two SHA1s used in the previous kernel with two AESs and make things 3x faster
+	 * - I'd like to propose we remove the use of two whittenings with a single operation since we
+	 *   were only using addition modulo 2^32 of all these values anyways.  Not to mention the hashs
+	 *   differ only in that the second processes more data... why drop the first hash?  We did learn
+	 *   that addition is commutative and associative long ago.
+	 * - by replacing two SHA1s and addition modulo 2^32 with encryption of a 32bit value using AES-CTR
+	 *   we've made it 1,000,000,000 times easier to understand what is going on.
+	 * - Todo: we should rekey the cipher peridoically... if we do this, some packets will now fail 
+	 *   our checking system... is this ok?  How can we get around this?  Rekey's would ideally happen
+	 *   once per minute (6 million TCP connections per minute is a unrealistic enough security margin)
+	 * jlcooke
+	 *
 	 * The output should be:
    	 *   HASH(sec1,saddr,sport,daddr,dport,sec1) + sseq + (count * 2^24)
 	 *      + (HASH(sec2,saddr,sport,daddr,dport,count,sec2) % 2^24).
@@ -2424,22 +1690,26 @@
 	 * MSS into the second hash value.
 	 */
 
-	memcpy(tmp+3, syncookie_secret[0], sizeof(syncookie_secret[0]));
-	tmp[0]=saddr;
-	tmp[1]=daddr;
-	tmp[2]=(sport << 16) + dport;
-	HASH_TRANSFORM(tmp+16, tmp);
-	seq = tmp[17] + sseq + (count << COOKIEBITS);
-
-	memcpy(tmp+3, syncookie_secret[1], sizeof(syncookie_secret[1]));
-	tmp[0]=saddr;
-	tmp[1]=daddr;
-	tmp[2]=(sport << 16) + dport;
-	tmp[3] = count;	/* minute counter */
-	HASH_TRANSFORM(tmp+16, tmp);
+	tmp[0] = saddr;
+	tmp[1] = daddr;
+	tmp[2] = (sport << 16) + dport;
+	tmp[3] = sseq;
+
+	sg[0].page = virt_to_page(tmp);
+	sg[0].offset = offset_in_page(tmp);
+	sg[0].length = 16;
+	if (!random_state->networkCipher_ready) {
+		u8 secret[32];
+		get_random_bytes(secret, sizeof(secret));
+		if (crypto_cipher_setkey(random_state->networkCipher, secret, random_state->keysize)) {
+			return 0;
+		}
+		random_state->networkCipher_ready = 1;
+	}
+	crypto_cipher_encrypt(random_state->networkCipher, sg, sg, 1); /* tmp[]/sg[0] = Enc(Sec, {saddr,daddr,sport|dport,sseq}) */
 
-	/* Add in the second hash and the data */
-	return seq + ((tmp[17] + data) & COOKIEMASK);
+	/* cookie = CTR encrypt of 8-bit-count and 24-bit-data */
+	return tmp[0] ^ ( (count << COOKIEBITS) | (data >> (sizeof(__u32)*8-COOKIEBITS)) );
 }
 
 /*
@@ -2454,32 +1724,29 @@
 __u32 check_tcp_syn_cookie(__u32 cookie, __u32 saddr, __u32 daddr, __u16 sport,
 		__u16 dport, __u32 sseq, __u32 count, __u32 maxdiff)
 {
-	__u32 	tmp[16 + HASH_BUFFER_SIZE + HASH_EXTRA_SIZE];
-	__u32	diff;
+	struct scatterlist sg[1];
+	__u32 tmp[4], thiscount, diff;
 
-	if (syncookie_init == 0)
+	if (random_state == NULL  ||  !random_state->networkCipher_ready)
 		return (__u32)-1;	/* Well, duh! */
 
-	/* Strip away the layers from the cookie */
-	memcpy(tmp+3, syncookie_secret[0], sizeof(syncookie_secret[0]));
-	tmp[0]=saddr;
-	tmp[1]=daddr;
-	tmp[2]=(sport << 16) + dport;
-	HASH_TRANSFORM(tmp+16, tmp);
-	cookie -= tmp[17] + sseq;
-	/* Cookie is now reduced to (count * 2^24) ^ (hash % 2^24) */
-
-	diff = (count - (cookie >> COOKIEBITS)) & ((__u32)-1 >> COOKIEBITS);
-	if (diff >= maxdiff)
-		return (__u32)-1;
-
-	memcpy(tmp+3, syncookie_secret[1], sizeof(syncookie_secret[1]));
 	tmp[0] = saddr;
 	tmp[1] = daddr;
 	tmp[2] = (sport << 16) + dport;
-	tmp[3] = count - diff;	/* minute counter */
-	HASH_TRANSFORM(tmp+16, tmp);
+	tmp[3] = sseq;
+	sg[0].page = virt_to_page(tmp);
+	sg[0].offset = offset_in_page(tmp);
+	sg[0].length = 16;
+	crypto_cipher_encrypt(random_state->networkCipher, sg, sg, 1);
+
+	cookie ^= tmp[0]; /* CTR decrypt the cookie */
+
+	thiscount = cookie >> COOKIEBITS; /* top 8 bits are 'count' */
+
+	diff = count - thiscount;
+	if (diff >= maxdiff)
+		return (__u32)-1;
 
-	return (cookie - tmp[17]) & COOKIEMASK;	/* Leaving the data behind */
+	return cookie >> (sizeof(__u32)*8-COOKIEBITS); /* bottom 24 bits are 'data' */
 }
 #endif

^ permalink raw reply	[flat|nested] 35+ messages in thread

* [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
@ 2004-09-24  0:59 linux
  2004-09-24  2:34 ` Jean-Luc Cooke
  0 siblings, 1 reply; 35+ messages in thread
From: linux @ 2004-09-24  0:59 UTC (permalink / raw)
  To: jlcooke; +Cc: linux-kernel

Fortuna is an attempt to avoid the need for entropy estimation.
It doesn't do a perfect job.  And I don't think it's received enough
review to be "regarded as the state of the art".

Entropy estimation is very difficult, but not doing it leads to problems.

Bruce Schneier's "catastrophic reseeding" ideas have some merit.  If,
for some reason, the state of your RNG pool has been captured, then
adding one bit of seed material doesn't hurt an attacker who can look
at the output and brute-force that bit.

Thus, once you've lost security, you never regain it.  If you save up,
say, 128 bits of seed material and add it all at once, your attacker
can't brute-force it.

/dev/random tries to solve this my never letting anyone see more output
than there is seed material input.  So regardless of the initial state
of the pool, an attacker can never get enough output to compute a unique
solution to the seed material question.  (See "unicity distance".)

However, this requires knowing the entropy content of the input, which is
a hard thing to measure.

The while issue of catastrophic reseeding applies to output-larger-than-key
generators like  something like /dev/urandom (that uses cryptographic


Here's an example of how Fortuna's design fails.

Suppose we have a source which produces 32-bit samples, which are
guaranteed to contain 1 bit of new entropy per sample.  We should be
able to feed that into Fortuna and have a good RNG, right?  Wrong.

Suppose that each time you sample the source, it adds one bit to a 32-bit
shift register, and gives you the result.  So sample[0] shares 31 bits
with sample[1], 30 bits with sample[2], etc.

Now, suppose that we add samples to 32 buckets in round-robin order,
and dump bucket[i] into the pool every round 2^i rounds.  Further,
assume that our attacker can query the pool's output and brute-force 32
bits of seed material.  In the following, "+=" is some cryptographic
mixing primitive, not literal addition.

Pool: Initial state known to attacker (e.g. empty)
Buckets: Initial state known to attacker (e.g. empty)
bucket[0] += sample[0]; pool += bucket[0]
	-> attacker can query the pool and brute-force compute sample[0].
bucket[1] += sample[1] (= sample[0] << 1 | sample[32] >> 31)
bucket[2] += sample[2] (= sample[0] << 2 | sample[32] >> 30)
...
bucket[31] += sample[31] (= sample[0] << 31 | sample[32] >> 1)
bucket[0] += sample[32]; pool += bucket[0]
	-> attacker can query the pool and brute-force compute sample[32].
	-> Attacker now knows sample[1] through sample[31]
	-> Attacker now knows bucket[1] through bucket[31.

Note that the attacker now knows the value of sample[1] through sample[31] and
thus the state of all the buckets, and can continue tracking the pool's
state indefinitely:

bucket[1] += sample[33]; pool += bucket[1]
	-> attacker can query the pool and brute-force compute sample[33].
etc.

This shift register behaviour should be obvious, but suppose that sample[i]
is put through an encryption (known to the attacker) before being presented.
You can't tell that you're being fed cooked data, but the attack works just the
same.


Now, this is, admittedly, a highly contrived example, but it shows that
Fortuna does not completely achieve its stated design goal of achieving
catastrophic reseeding after having received some contant times the
necessary entropy as seed material.  Its round-robin structure makes it
vulnerable to serial correlations in the input seed material.  If they're
bad enough, its security can be completely destroyed.  What *are* the
requirements for it to be secure?  I don't know.

All I know is that it hasn't been analyzed well enough to be a panacea.

(The other thing I don't care for is the limited size of the
entropy pools.  I like the "big pool" approach.  Yes, 256 bits is
enough if everything works okay, but why take chances?  But that's a
philosophical/style/gut feel argument more than a really technical one.)


I confess I haven't dug into the /dev/{,u}random code lately.  The various
problems with low-latency random numbers needed by the IP stack suggest
that perhaps a faster PRNG would be useful in-kernel.  If so, there may
be a justification for an in-kernel PRNG fast enough to use for disk
overwriting or the like.  (As people persist in using /dev/urandom for,
even though it's explicitly not designed for that.)

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-24  0:59 linux
@ 2004-09-24  2:34 ` Jean-Luc Cooke
  2004-09-24  6:19   ` linux
  2004-09-24 21:42   ` linux
  0 siblings, 2 replies; 35+ messages in thread
From: Jean-Luc Cooke @ 2004-09-24  2:34 UTC (permalink / raw)
  To: linux; +Cc: linux-kernel, cryptoapi, jmorris, tytso

"linux",

The Fortuna patch I've submitted tried to achieve this "more than 256 bits per
pool" by carrying forward the digest output to the next pool.  Stock Fortuna
does not carry forward digest output form previous iterations.

reseed:
  reseedCount++;
  for (i=0..31) {
    if (2^i is a factor of reseedCount) {
      hash_final(pool[i], dgst);
      hash_init(pool[i]);
      hash_update(pool[i], dgst); // my addition
      ...
    }
  }
  ...

Considering each pool has 256 bits of digest output, and there are 32 pools,
this gives about 8192 bits for the pool size.  Far greater then current
design.  If you extremely pessimistically consider the probability of drawing
pool j is 1/2 that of {j-1}, then it's a 512bit RNG.


But I'd like talk to your attack for a second.  I'd argue that it is valid
for the current /dev/random and Yarrow with entropy estimators as well.

I agree that if the state is known by an active attacker, then a trickle of
entropy into Fortuna compared to the output gathered by an attacker would
make for an argument that "Fortuna doesn't have it right."  And no matter
what PRNG engine you but between the attack and the random sources, there is
no solution other than accurate entropy measurement (*not* estimation).

However, this places the security of the system in the hands of the entropy
estimator.  If it is too liberal, we have the nearly the same situation with
Fortuna.  As much as I rely on Ted's work everyday for the smooth running of
my machine, I can't concede to the notion that Ted got it right.

Fortuna, I'd argue reduces the attack on the PRNG to that of the base
crypto primitives, the randomness of the events and the rate at which data is
output by /dev/random.  This holds true for the current /dev/random except:
 1) crypto primitives are do not pass test vectors, and the input mixing
    function is linear.
 2) The randomness of the events can only be estimated, their true randomness
    requires analysis of the hardware device itself... not Feasible
    considering all the possible IRQ sources, mice, and hard disks that Linux
    drives.
 3) Following on (2) above, the output rate of /dev/random is directly
    related to the estimated randomness.

If you have ideas on how to make a PRNG that can more closely tie output rate
to input events and survive state compromise attacks (backtracking, forward
secrecy, etc) then please drop anonymity and contact me at my email address.
Perhaps a collaboration is possible.

Cheers,

JLC


On Fri, Sep 24, 2004 at 12:59:38AM -0000, linux@horizon.com wrote:
> Fortuna is an attempt to avoid the need for entropy estimation.
> It doesn't do a perfect job.  And I don't think it's received enough
> review to be "regarded as the state of the art".
> 
> Entropy estimation is very difficult, but not doing it leads to problems.
> 
> Bruce Schneier's "catastrophic reseeding" ideas have some merit.  If,
> for some reason, the state of your RNG pool has been captured, then
> adding one bit of seed material doesn't hurt an attacker who can look
> at the output and brute-force that bit.
> 
> Thus, once you've lost security, you never regain it.  If you save up,
> say, 128 bits of seed material and add it all at once, your attacker
> can't brute-force it.
> 
> /dev/random tries to solve this my never letting anyone see more output
> than there is seed material input.  So regardless of the initial state
> of the pool, an attacker can never get enough output to compute a unique
> solution to the seed material question.  (See "unicity distance".)
> 
> However, this requires knowing the entropy content of the input, which is
> a hard thing to measure.
> 
> The while issue of catastrophic reseeding applies to output-larger-than-key
> generators like  something like /dev/urandom (that uses cryptographic
> 
> 
> Here's an example of how Fortuna's design fails.
> 
> Suppose we have a source which produces 32-bit samples, which are
> guaranteed to contain 1 bit of new entropy per sample.  We should be
> able to feed that into Fortuna and have a good RNG, right?  Wrong.
> 
> Suppose that each time you sample the source, it adds one bit to a 32-bit
> shift register, and gives you the result.  So sample[0] shares 31 bits
> with sample[1], 30 bits with sample[2], etc.
> 
> Now, suppose that we add samples to 32 buckets in round-robin order,
> and dump bucket[i] into the pool every round 2^i rounds.  Further,
> assume that our attacker can query the pool's output and brute-force 32
> bits of seed material.  In the following, "+=" is some cryptographic
> mixing primitive, not literal addition.
> 
> Pool: Initial state known to attacker (e.g. empty)
> Buckets: Initial state known to attacker (e.g. empty)
> bucket[0] += sample[0]; pool += bucket[0]
> 	-> attacker can query the pool and brute-force compute sample[0].
> bucket[1] += sample[1] (= sample[0] << 1 | sample[32] >> 31)
> bucket[2] += sample[2] (= sample[0] << 2 | sample[32] >> 30)
> ...
> bucket[31] += sample[31] (= sample[0] << 31 | sample[32] >> 1)
> bucket[0] += sample[32]; pool += bucket[0]
> 	-> attacker can query the pool and brute-force compute sample[32].
> 	-> Attacker now knows sample[1] through sample[31]
> 	-> Attacker now knows bucket[1] through bucket[31.
> 
> Note that the attacker now knows the value of sample[1] through sample[31] and
> thus the state of all the buckets, and can continue tracking the pool's
> state indefinitely:
> 
> bucket[1] += sample[33]; pool += bucket[1]
> 	-> attacker can query the pool and brute-force compute sample[33].
> etc.
> 
> This shift register behaviour should be obvious, but suppose that sample[i]
> is put through an encryption (known to the attacker) before being presented.
> You can't tell that you're being fed cooked data, but the attack works just the
> same.
> 
> 
> Now, this is, admittedly, a highly contrived example, but it shows that
> Fortuna does not completely achieve its stated design goal of achieving
> catastrophic reseeding after having received some contant times the
> necessary entropy as seed material.  Its round-robin structure makes it
> vulnerable to serial correlations in the input seed material.  If they're
> bad enough, its security can be completely destroyed.  What *are* the
> requirements for it to be secure?  I don't know.
> 
> All I know is that it hasn't been analyzed well enough to be a panacea.
> 
> (The other thing I don't care for is the limited size of the
> entropy pools.  I like the "big pool" approach.  Yes, 256 bits is
> enough if everything works okay, but why take chances?  But that's a
> philosophical/style/gut feel argument more than a really technical one.)
> 
> 
> I confess I haven't dug into the /dev/{,u}random code lately.  The various
> problems with low-latency random numbers needed by the IP stack suggest
> that perhaps a faster PRNG would be useful in-kernel.  If so, there may
> be a justification for an in-kernel PRNG fast enough to use for disk
> overwriting or the like.  (As people persist in using /dev/urandom for,
> even though it's explicitly not designed for that.)
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-23 23:43 [PROPOSAL/PATCH] Fortuna PRNG in /dev/random Jean-Luc Cooke
@ 2004-09-24  4:38 ` Theodore Ts'o
  2004-09-24 12:54   ` Jean-Luc Cooke
  2004-09-24 13:44   ` Jean-Luc Cooke
  2004-09-27  4:58 ` Theodore Ts'o
  1 sibling, 2 replies; 35+ messages in thread
From: Theodore Ts'o @ 2004-09-24  4:38 UTC (permalink / raw)
  To: Jean-Luc Cooke; +Cc: linux-kernel

On Thu, Sep 23, 2004 at 07:43:40PM -0400, Jean-Luc Cooke wrote:
> 
> Here is a patch for the 2.6.8.1 Linux kernel which replaces the existing PRNG
> in random.c with the Fortuna PRNG designed by Ferguson and Schneier (Practical
> Cryptography).  It is regarded in crypto circles as the current state-of-the-art
> in cryptographically secure PRNGs.
>
> Warning: Ted Ts'o and I talked about this at great length in sci.crypt and
> in the end I failed on convince him that my patch was worth becoming main-line,
> and he failed to convince me that status-quo is acceptable considering a better
> solution exists.

I've taken a quick look at your patch, and here are some problems with it.


0.  Code style issues

Take a look at /usr/src/linux/Documentation/CodingStyle, and follow
it, please.  In particular, pay attention to wrapping text
(particularly comment blocks) at 80 characters, max, and lose the
C++-style comments, please.  Maintaining a good common comment
convention is good, too.

1.  Don't leave out-of-date comments behind.  

Your patch makes significant changes, but you haven't updated the
comments to reflect all of your changes.  For example, the comments
for secure TCP sequence number generation are no longer correct.  The
comments about the twisted GFSR document the original scheme, not the
Fortuna generator.  If you're going to remove the code, remove the
comments too, or the resulting mess will be confusing and not very
maintainable.

2.  The kernel will break if CONFIG_CRYPTO is false

The /dev/random driver is designed to be present in the system no
matter what.  This was a design decision that was made long ago, to
simplify user space applications that could count on /dev/random being
present.  This is a philosophical divide; your belief (as you put it
on your web site) seems to be: "If you want secure random numbers but
don't want crypto, then you don't want secure random numbers."  The
problem is that someone may not want (or need) encryption algorithms
in the *kernel*, but they may still want secure random numbers in
*userspace*.

In any case, your patch is broken, since the kernel will simply fail
to build if CONFIG_CRYPTO is turned off.  And simply making the
compilation of /dev/random conditional on CONFIG_CRYPTO isn't good
enough, since there are other portions of the kernel that assume that
random.c will be present.  (For example, irqaction for providing
entropy input, and the TCP stack depends on it for secquence numbers.)

3.  The TCP sequence numbers are broken

The requirements on secure sequence number is far more than just
"needs to be random-ish, but incresing [sic]".  Read RFC 1948:

   The choice of initial sequence numbers for a connection is not
   random.  Rather, it must be chosen so as to minimize the probability
   of old stale packets being accepted by new incarnations of the same
   connection [RFC 1185].

The increasing bit is also not guaranteed, since tmp[0] isn't masked
off.  Not that it would really matter if it did; with only 8 bits
worth of COUNT_BITS, every 256 TCP connections you will wrap, and
expose that connection to the risk that stale packets being accepted.

I'm also a bit concerned about how much time AES takes over the
cut-down MD4, as this may affect networking benchmarks.  (And we don't
need super-strength crypto here.)


As far as the Fortuna generator being "better", it really represents a
philosophical divide between what I call Crypto academics" and "Crypto
engineers".  I won't go into that whole debate here, except to note
that the current /dev/random was designed with close consultation with
Colin Plumb, who wrote the random number generator found in PGP, and
indeed /dev/random is very close to that used in PGP.  In discussions
on sci.crypt, there were those who argued on both side of this issue,
with such notables as Peter Gutmann lining up against Jean-Luc.

>   + Removed entropy estimation
>    - Fortuna doesn't need it, vanilla-/dev/random and other Yarrow-like
>      PRNGs do to survive state compromise and other attacks.

Entropy estimation is a useful concept in that it attempts to limit
possible attacks caused by weaknesses in the crypto algorithms (such
what happened at this year's Crypto's conference, where MD4, MD5,
HAVAL, and SHA-0 were all weakened).  The designed used by PGP and
/dev/random both limit the amount of reliance placed in the crypto
algorithms, where as Fortuna and Yarrow both assume that crypto
primitives are 100% strong.  This is again a philosophical divide;
given that we have access to unpredicitability based on hardware
timings, we should limit the dependence on crypto algorithsm and to
design a system that is closer to "true randomness" as possible.  

>  - Current /dev/random's input mixing function is a linear function.  This is bad in crypto-circles.
>    Why?  Linear functions are communitive, associative and sometimes distributive.
>    Outputs from Linear function based PRNGs are very weak.

This is a red herring.  /dev/random is not a linear function based
PRNG.  We use a linear function for mixing, yes, but we do use SHA-1
as part of the output stage.  And based on how we use SHA-1, even if
arbitrary collisions can be found in SHA-1 (as has been found in
SHA-0) this wouldn't cause a failure of /dev/random's security ---
this is part of the design philosophy of trying to avoid relying over
much on the security of the crypto primitives, as much as possible.

							- Ted

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-24  2:34 ` Jean-Luc Cooke
@ 2004-09-24  6:19   ` linux
  2004-09-24 21:42   ` linux
  1 sibling, 0 replies; 35+ messages in thread
From: linux @ 2004-09-24  6:19 UTC (permalink / raw)
  To: jlcooke; +Cc: cryptoapi, jmorris, linux-kernel, tytso

BTW, you write:
> It is regarded in crypto circles as the current state-of-the-art
> in cryptographically secure PRNGs.

The question this brings to mind is:
It is?  Can you point me to a single third-party paper on the subject?

There's nothing in the IACR preprint archive.  Nor citeseer,


The big difference between when /dev/random was designed and today:

- USB is a broadcast bus, and a lot (timing, at least) can be sniffed
  by a small dongle.  Wireless keyboards and mice are popular.  That
  sort of user data probably shouldn't be trusted any more.  (No harm
  mixing it in, just in case it is good, but accord it zero weight.)
- Clock speeds are a *lot* higher (> 1 GHz) and the timestamp counter is
  almost universally available.  Even an attacker with multiple antennas
  pointed at the computer is going to have a hard time figuring out on which
  tick of the clock an interrupt arrived even if they can see it.

Thus, the least-significant bits of the TSC are useful entropy on *every*
interrupt, timer included.


For a fun exercise, install a kernel hack to capture the TSC on every
timer interrupt.  Run it for a while on an idle system (processor in the
halt state, waiting for interrupts on a cycle-by-cycle basis).

Take the resultant points, subtract the best-fit line, and throw out any
outliers caused by delayed interrupts.

Now do some statistical analysis of the residue.  How much entropy do
you have from the timer interrupt?  Does it look random?  How many lsbits
can you take and still pass Marsaglia's DIEHARD suite?  Do any patterns
show up in an FFT?

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-24  4:38 ` Theodore Ts'o
@ 2004-09-24 12:54   ` Jean-Luc Cooke
  2004-09-24 17:43     ` Theodore Ts'o
  2004-09-24 13:44   ` Jean-Luc Cooke
  1 sibling, 1 reply; 35+ messages in thread
From: Jean-Luc Cooke @ 2004-09-24 12:54 UTC (permalink / raw)
  To: Theodore Ts'o, linux-kernel

On Fri, Sep 24, 2004 at 12:38:51AM -0400, Theodore Ts'o wrote:
> I've taken a quick look at your patch, and here are some problems with it.
> 
> 
> 0.  Code style issues
> 
> Take a look at /usr/src/linux/Documentation/CodingStyle, ...

Will-do.  My bad.


> 1.  Don't leave out-of-date comments behind.  
> 
> Your patch makes significant changes, but you haven't updated the
> comments to reflect all of your changes. ...

> 
> 2.  The kernel will break if CONFIG_CRYPTO is false
> matter what.  This was a design decision that was made long ago, to
> simplify user space applications that could count on /dev/random ...

My naive point of view tells me either this design decision from days of
yore was not thought out properly (blasphemy!), or the cryptoapi needs to
be in kernel.

A compromise would be to have a primitive PRNG in random.c is no
CONFIG_CRYRPTO is present to keep things working.

> 3.  The TCP sequence numbers are broken

I see.  I'll make the change.  Thank you.

> As far as the Fortuna generator being "better", it really represents a
> philosophical divide between what I call Crypto academics" and "Crypto
> engineers".  I won't go into that whole debate here, except to note
> that the current /dev/random was designed with close consultation with
> Colin Plumb, who wrote the random number generator found in PGP, and
> indeed /dev/random is very close to that used in PGP.  In discussions
> on sci.crypt, there were those who argued on both side of this issue,
> with such notables as Peter Gutmann lining up against Jean-Luc.

Agreed.  This is why I've been dreading in posting the patch here.  The
current /dev/random is good, possibly the best OS-level RNG out there
right now.  Ted, if I've never said it before or ever again, you've done
a great job.  But my first impressions when I dove in were:
 - gah!  Why did someone go through so much trouble to make this hard to
   analyse?
 - humm, why not use the cryptoapi if we want random data?
 - why do linux users want information secure random numbers?  Wouldn't
   crypto-secure random numbers be what they really want?
  + this, I've learned, is not something you can argue well against.  It's
    a matter of taste ... like Brittany Spears.

I wanted something more structured running on my machines so I re-wrote
random.c to use Fortuna, no entropy estimators, and uses the cryptoapi.

For the record, I believe David Wagner saw the case for replacing the PRNG
with Fortuna holding water.  Even removing the entropy estimator.  But
coneeded that some people will want /dev/random to block, so let them eat
cake.

> >   + Removed entropy estimation
> >    - Fortuna doesn't need it, vanilla-/dev/random and other Yarrow-like
> >      PRNGs do to survive state compromise and other attacks.
> 
> Entropy estimation is a useful concept in that it attempts to limit
> possible attacks caused by weaknesses in the crypto algorithms (such
> what happened at this year's Crypto's conference, where MD4, MD5,
> HAVAL, and SHA-0 were all weakened).  The designed used by PGP and
> /dev/random both limit the amount of reliance placed in the crypto
> algorithms, where as Fortuna and Yarrow both assume that crypto
> primitives are 100% strong.  This is again a philosophical divide;
> given that we have access to unpredicitability based on hardware
> timings, we should limit the dependence on crypto algorithsm and to
> design a system that is closer to "true randomness" as possible.  

What if I told the SHA-1 implementation in random.c right now is weaker
than those hashs in terms of collisions?  The lack of padding in the
implementation is the cause.  HASH("a\0\0\0\0...") == HASH("a") There
are billions of other examples.

The academic vs. engineer analogy works the other way as well.  Fortuna's
security can be directly reduced to the security of the underlying
algorithms.  This is a good thing.  If the security of all applications
were reduced in the same way, the world would be a safer place (political
discussions not withstanding).

Vanilla random.c depends on SHA-1 be to be resistant to 1-st pre-image
attacks.  Fortuna depends on this as well with SHA-256 (or whatever
other hash you put in there).  The "folding over with XOR" method you
use to make random.c stronger can work against you as well.  It comes
down to "I've changed SHA-1 to make it stronger".  The logic question
becomes: "Then why doesn't everyone use it?"

JLC

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-24  4:38 ` Theodore Ts'o
  2004-09-24 12:54   ` Jean-Luc Cooke
@ 2004-09-24 13:44   ` Jean-Luc Cooke
  1 sibling, 0 replies; 35+ messages in thread
From: Jean-Luc Cooke @ 2004-09-24 13:44 UTC (permalink / raw)
  To: Theodore Ts'o, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 753 bytes --]

On Fri, Sep 24, 2004 at 12:38:51AM -0400, Theodore Ts'o wrote:
> I'm also a bit concerned about how much time AES takes over the
> cut-down MD4, as this may affect networking benchmarks.  (And we don't
> need super-strength crypto here.)

Oh,

openssl speed md4 aes shows:

type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
md4              10708.72k    38240.96k   111170.47k   215872.85k 296828.93k
aes-128 cbc      32121.81k    32678.31k    33119.49k    33221.29k 33210.59k
aes-192 cbc      27915.92k    27868.52k    28418.08k    28677.12k 28721.15k
aes-256 cbc      24599.57k    25142.38k    25381.80k    25474.88k 25392.46k

Since we're using small blocks.

Attached is the patch with the problems Ted pointed out.

JLC

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: fortuna-2.6.8.1.patch --]
[-- Type: text/plain; charset=unknown-8bit, Size: 93297 bytes --]

diff -uNr linux-2.6.8.1-orig/include/linux/sysctl.h linux-2.6.8.1-fortuna/include/linux/sysctl.h
--- linux-2.6.8.1-orig/include/linux/sysctl.h	2004-08-14 12:55:33.000000000 +0200
+++ linux-2.6.8.1-fortuna/include/linux/sysctl.h	2004-09-13 18:55:43.000000000 +0200
@@ -198,7 +198,8 @@
 	RANDOM_READ_THRESH=3,
 	RANDOM_WRITE_THRESH=4,
 	RANDOM_BOOT_ID=5,
-	RANDOM_UUID=6
+	RANDOM_UUID=6,
+	RANDOM_DERIVE_SEED=7
 };
 
 /* /proc/sys/kernel/pty */
--- linux-2.6.8.1/drivers/char/random.c	2004-09-24 08:32:30.222610504 -0400
+++ linux-2.6.8.1-rand2/drivers/char/random.c	2004-09-24 09:31:30.251444320 -0400
@@ -2,9 +2,11 @@
  * random.c -- A strong random number generator
  *
  * Version 1.89, last modified 19-Sep-99
+ * Version 2.01, last modified 24-Sep-2004
  * 
  * Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999.  All
  * rights reserved.
+ * Copyright Jean-Luc Cooke, 2004.  All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -40,61 +42,157 @@
  */
 
 /*
- * (now, with legal B.S. out of the way.....) 
- * 
- * This routine gathers environmental noise from device drivers, etc.,
- * and returns good random numbers, suitable for cryptographic use.
- * Besides the obvious cryptographic uses, these numbers are also good
- * for seeding TCP sequence numbers, and other places where it is
- * desirable to have numbers which are not only random, but hard to
- * predict by an attacker.
- *
- * Theory of operation
- * ===================
- * 
- * Computers are very predictable devices.  Hence it is extremely hard
- * to produce truly random numbers on a computer --- as opposed to
- * pseudo-random numbers, which can easily generated by using a
- * algorithm.  Unfortunately, it is very easy for attackers to guess
- * the sequence of pseudo-random number generators, and for some
- * applications this is not acceptable.  So instead, we must try to
- * gather "environmental noise" from the computer's environment, which
- * must be hard for outside attackers to observe, and use that to
- * generate random numbers.  In a Unix environment, this is best done
- * from inside the kernel.
- * 
- * Sources of randomness from the environment include inter-keyboard
- * timings, inter-interrupt timings from some interrupts, and other
- * events which are both (a) non-deterministic and (b) hard for an
- * outside observer to measure.  Randomness from these sources are
- * added to an "entropy pool", which is mixed using a CRC-like function.
- * This is not cryptographically strong, but it is adequate assuming
- * the randomness is not chosen maliciously, and it is fast enough that
- * the overhead of doing it on every interrupt is very reasonable.
- * As random bytes are mixed into the entropy pool, the routines keep
- * an *estimate* of how many bits of randomness have been stored into
- * the random number generator's internal state.
- * 
- * When random bytes are desired, they are obtained by taking the SHA
- * hash of the contents of the "entropy pool".  The SHA hash avoids
- * exposing the internal state of the entropy pool.  It is believed to
- * be computationally infeasible to derive any useful information
- * about the input of SHA from its output.  Even if it is possible to
- * analyze SHA in some clever way, as long as the amount of data
- * returned from the generator is less than the inherent entropy in
- * the pool, the output data is totally unpredictable.  For this
- * reason, the routine decreases its internal estimate of how many
- * bits of "true randomness" are contained in the entropy pool as it
- * outputs random numbers.
- * 
- * If this estimate goes to zero, the routine can still generate
- * random numbers; however, an attacker may (at least in theory) be
- * able to infer the future output of the generator from prior
- * outputs.  This requires successful cryptanalysis of SHA, which is
- * not believed to be feasible, but there is a remote possibility.
- * Nonetheless, these numbers should be useful for the vast majority
- * of purposes.
- * 
+ * The entire PRNG used in this file was replaced using a variant of the Fortuna
+ * PRNG described in Practical Cryptography by Ferguson and Schnier.
+ *
+ * The changes to their design include:
+ *  - feeding the output of each pool back into their input to carry entropy
+ *    forward (avoids pool overflow attacks like
+ *    "dd if=/dev/zero of=/dev/random"
+ *
+ * Also, the entropy estimator was removed since it is not needed for
+ * cryptographically secure random data and such constructions are
+ * historically prone to attack
+ * [read Practical Cryptography].
+ *
+ * The Fortuna PRNG as described in Practical Cryptography is implemented here.
+ * 
+ * Pseudo-code follows.
+ *
+<b>create_entropy_pool(r)</b>
+ - create an entropy pool in "r"
+
+  r.pool0_len = 0;
+  r.reseed_count = 0;
+  r.derive_count = 0;
+  r.digestsize = // digest size for our hash
+  r.blocksize = // block size for our cipher
+  r.keysize = // key size for our cipher
+  for (i=0; i&lt;32; i++) {
+    crypto_digest_init(r.pool[i]);
+  }
+  memset(r.key, 0, r.keysize);
+  crypto_cipher_setkey(r.cipher, r.key, r.keysize);
+
+<b>add_entropy_words(r, in, nwords)</b>
+ - mix 32bit word array "in" which is "nwords" long into pool "r"
+
+  crypto_digest_update(r.pool[i], in, nwords*sizeof(in[0]));
+  if (r.pool_index == 0)
+    r.pool0_len += nwords*sizeof(in[0]);
+  r.pool_index = r.pool_index + 1  mod  (2<sup>number of pools</sup> - 1)
+  
+<b>random_reseed(r)</b>
+ - reseed the key from the pooling system
+
+  r.reseed_count++;
+  
+  crypto_digest_init(hash);
+  crypto_digest_update(hash, r.key, r.keysize);
+  
+  for (i=0; i&lt;32; i++) {
+    if (2<sup>i</sup> is a factor of r.reseed_count) {
+      crypto_digest_final(r.pool[i], tmp);
+      crypto_digest_init(r.poo[i]);
+      crypto_digest_update(hash, tmp, r.digestsize);
+  
+      // jlcooke: small change from Ferguson
+      crypto_digest_update(r.pool[i], tmp, r.digestsize);
+    }
+  }
+  
+  crypto_digest_final(hash, tmp);
+  crypto_cipher_setkey(r.cipher, tmp, r.keysize);
+  r.ctrValue = r.ctrValue + 1  mod  (2<sup>number of pools</sup> - 1)
+
+<b>extract_entropy(r, buf, nbytes, flags)</b>
+ - fill byte array "buf" with "nbytes" of random data from entropy pool "r"
+
+  random_reseed(r);
+  r.pool0_len = 0;
+  
+  while (nbytes &gt; 0) {
+    crypto_cipher_encrypt(r.cipher, tmp, r.ctrValue, r.blocksize);
+    r.ctrValue++; // modulo 2<sup>r.blocksize/8</sup> 
+  
+    //
+    // Copy r.blocksize of tmp to the user
+    // Unless nbytes is less than r.blocksize, in which case only copy nbytes
+    //  
+  
+    nbytes -= r.blocksize;
+  }
+  
+  // generate a new key
+  crypto_cipher_encrypt(r.cipher, r.key, r.ctrValue, r.blocksize);
+  crypto_cipher_setkey(r.cipher, r.key, r.keysize);
+  
+<b>derive_pool(r, buf)</b>
+ - Fill "buf" with the output from a 1-way transformation of all 32-pools
+
+  memset(tmp, 0, r.digestsize);
+  r.pool0_len = 0;
+  
+  for (i=0; i&lt;32; i++) {
+    crypto_digest_init(hash);
+  
+    crypto_digest_update(hash, tmp, r.digestsize);
+  
+    crypto_digest_final(r.pool[i], tmp);
+    crypto_digest_init(r.pool[i]);
+    crypto_digest_update(hash, tmp, r.digestsize);
+  
+    crypto_digest_update(hash, r.derive_count, sizeof(r.derive_count));
+  
+    crypto_digest_final(hash, tmp);
+  
+    // Replace all 0x00 in "tmp" with "0x01" because the API to return a byte
+    //  array does not exist.  Only a "return string" API is provided.  This
+    //  reduces the effective entropy of the output by 0.39%.
+    // 
+  
+    memcpy(&amp;buf[i*r.digestsize], tmp, r.digestsize);
+    r.derive_count++;
+  }
+ *
+ * Draft Security Statement/Analysis (Jean-Luc Cooke <jlcooke@certainkey.com>)
+ *
+ * The Fortuna PRNG is resilliant to all known and preventable PRNG attacks.
+ * Proof of strength to these attacks can be done by reduction to the security
+ * of the underlying cryptographic primitives.
+ *  * H = HASH(M)
+ *   + M={0,1}^Mlen  0 <= Mlen < infinity
+ *   + H={0,1}^Hlen  256 <= Hlen 
+ *  * C = ENCRYPT(K,M)
+ *   + K={0,1}^Klen  256 <= Klen
+ *   + M={0,1}^Mlen  Mlen = 128
+ *   + C={0,1}^Clen  Clen = 128
+ *
+ *  - Invertability of the output function
+ *    The state of the output function Output[i] = ENCRYPT(KEY, CTR++) is
+ *    {KEY,CTR}  To recover the state {KEY,CTR} the attacker must be able to
+ *    mount a known-plaintext or a known-ciphertext attack on the block cipher
+ *    C=ENCRYPT(K,M) with N blocks.
+ *    N = ReseedIntervalInSeconds * OutputRateInBytesPerSecond / BytesPerBlock
+ *    AES256 in CTR is secure from known-plaintext/ciphertext key recovery
+ *    attacks with N < 2^128
+ *    However, After 2^64 blocks (2^71 bits) an attacker would have a 0.5 chance
+ *    to guessing the next 128bit output.  N <<< 2^64
+ *
+ *  - Invertability of the pool mixing function
+ *    The pool mixing function H' = HASH(H' || M) is said to be non-inveretable
+ *    if H=HASH(MSG) is invertable.
+ *    There have been no invertability discoveries in SHA-256.
+ * 
+ *  - Manipulating pool mixing
+ *    An attack who has access to one or all of the entropy event sources may
+ *    be able to input mallicious event data to alter any one of the pool states
+ *    into a degenerate state.  This requires that the underlying H=HASH(MSG)
+ *    function is suseptible to a 1st pre-image attack.  SHA-256 has no such
+ *    known attacks.
+ */
+
+/*
  * Exported interfaces ---- output
  * ===============================
  * 
@@ -107,17 +205,13 @@
  * and place it in the requested buffer.
  * 
  * The two other interfaces are two character devices /dev/random and
- * /dev/urandom.  /dev/random is suitable for use when very high
- * quality randomness is desired (for example, for key generation or
- * one-time pads), as it will only return a maximum of the number of
- * bits of randomness (as estimated by the random number generator)
- * contained in the entropy pool.
- * 
- * The /dev/urandom device does not have this limit, and will return
- * as many bytes as are requested.  As more and more random bytes are
- * requested without giving time for the entropy pool to recharge,
- * this will result in random numbers that are merely cryptographically
- * strong.  For many applications, however, this is acceptable.
+ * /dev/urandom.  They are synonymous with eachother for legacy reasons
+ * 
+ * Both the devices will return as many bytes as are requested.  As more
+ * and more random bytes are requested without giving time for the entropy
+ * pool to recharge, this will result in random numbers that are merely
+ * cryptographically strong.  For many applications, however, this is
+ * acceptable.
  *
  * Exported interfaces ---- input
  * ==============================
@@ -160,32 +254,28 @@
  * following lines an appropriate script which is run during the boot
  * sequence: 
  *
- *	echo "Initializing random number generator..."
- *	random_seed=/var/run/random-seed
- *	# Carry a random seed from start-up to start-up
- *	# Load and then save the whole entropy pool
- *	if [ -f $random_seed ]; then
- *		cat $random_seed >/dev/urandom
- *	else
- *		touch $random_seed
- *	fi
- *	chmod 600 $random_seed
- *	poolfile=/proc/sys/kernel/random/poolsize
- *	[ -r $poolfile ] && bytes=`cat $poolfile` || bytes=512
- *	dd if=/dev/urandom of=$random_seed count=1 bs=$bytes
+ *      echo "Initializing random number generator..."
+ *      random_seed=/var/run/random-seed
+ *      # Carry a random seed from start-up to start-up
+ *      # Load and then save the whole entropy pool
+ *      if [ -f $random_seed ]; then
+ *              cat $random_seed >/dev/urandom
+ *      else
+ *              touch $random_seed
+ *      fi
+ *      chmod 600 $random_seed
+ *      dd if=/proc/sys/kernel/random/derive_seed of=$random_seed
  *
  * and the following lines in an appropriate script which is run as
  * the system is shutdown:
  *
- *	# Carry a random seed from shut-down to start-up
- *	# Save the whole entropy pool
- *	echo "Saving random seed..."
- *	random_seed=/var/run/random-seed
- *	touch $random_seed
- *	chmod 600 $random_seed
- *	poolfile=/proc/sys/kernel/random/poolsize
- *	[ -r $poolfile ] && bytes=`cat $poolfile` || bytes=512
- *	dd if=/dev/urandom of=$random_seed count=1 bs=$bytes
+ *      # Carry a random seed from shut-down to start-up
+ *      # Save the whole entropy pool
+ *      echo "Saving random seed..."
+ *      random_seed=/var/run/random-seed
+ *      touch $random_seed
+ *      chmod 600 $random_seed
+ *      dd if=/proc/sys/kernel/random/derive_seed of=$random_seed
  *
  * For example, on most modern systems using the System V init
  * scripts, such code fragments would be found in
@@ -215,22 +305,11 @@
  * Acknowledgements:
  * =================
  *
- * Ideas for constructing this random number generator were derived
- * from Pretty Good Privacy's random number generator, and from private
- * discussions with Phil Karn.  Colin Plumb provided a faster random
- * number generator, which speed up the mixing function of the entropy
- * pool, taken from PGPfone.  Dale Worley has also contributed many
- * useful ideas and suggestions to improve this driver.
+ * The design for this RNG come from Fortuna as explined above.
+ * Cryptographic implementations are used from the CryptoAPI.
  * 
  * Any flaws in the design are solely my responsibility, and should
- * not be attributed to the Phil, Colin, or any of authors of PGP.
- * 
- * The code for SHA transform was taken from Peter Gutmann's
- * implementation, which has been placed in the public domain.
- * The code for MD5 transform was taken from Colin Plumb's
- * implementation, which has been placed in the public domain.
- * The MD5 cryptographic checksum was devised by Ronald Rivest, and is
- * documented in RFC 1321, "The MD5 Message Digest Algorithm".
+ * not be attributed to the Ted, Phil, Colin, or any of authors of PGP.
  * 
  * Further background information on this topic may be obtained from
  * RFC 1750, "Randomness Recommendations for Security", by Donald
@@ -254,137 +333,43 @@
 #include <linux/interrupt.h>
 #include <linux/spinlock.h>
 #include <linux/percpu.h>
+#include <linux/crypto.h>
+#include <../crypto/internal.h>
 
+#include <asm/scatterlist.h>
 #include <asm/processor.h>
 #include <asm/uaccess.h>
 #include <asm/irq.h>
 #include <asm/io.h>
 
-/*
- * Configuration information
- */
-#define DEFAULT_POOL_SIZE 512
-#define SECONDARY_POOL_SIZE 128
-#define BATCH_ENTROPY_SIZE 256
-#define USE_SHA
-
-/*
- * The minimum number of bits of entropy before we wake up a read on
- * /dev/random.  Should be enough to do a significant reseed.
- */
-static int random_read_wakeup_thresh = 64;
-
-/*
- * If the entropy count falls under this number of bits, then we
- * should wake up processes which are selecting or polling on write
- * access to /dev/random.
- */
-static int random_write_wakeup_thresh = 128;
-
-/*
- * When the input pool goes over trickle_thresh, start dropping most
- * samples to avoid wasting CPU time and reduce lock contention.
- */
-
-static int trickle_thresh = DEFAULT_POOL_SIZE * 7;
-
-static DEFINE_PER_CPU(int, trickle_count) = 0;
+#if 0
+	#define DEBUG_PRINTK  printk
+#else
+	#define DEBUG_PRINTK  debug_printk
+static inline void debug_printk(const char *a, ...) {}
+#endif
 
 /*
- * A pool of size .poolwords is stirred with a primitive polynomial
- * of degree .poolwords over GF(2).  The taps for various sizes are
- * defined below.  They are chosen to be evenly spaced (minimum RMS
- * distance from evenly spaced; the numbers in the comments are a
- * scaled squared error sum) except for the last tap, which is 1 to
- * get the twisting happening as fast as possible.
+ * Configuration information
  */
-static struct poolinfo {
-	int	poolwords;
-	int	tap1, tap2, tap3, tap4, tap5;
-} poolinfo_table[] = {
-	/* x^2048 + x^1638 + x^1231 + x^819 + x^411 + x + 1  -- 115 */
-	{ 2048,	1638,	1231,	819,	411,	1 },
-
-	/* x^1024 + x^817 + x^615 + x^412 + x^204 + x + 1 -- 290 */
-	{ 1024,	817,	615,	412,	204,	1 },
-#if 0				/* Alternate polynomial */
-	/* x^1024 + x^819 + x^616 + x^410 + x^207 + x^2 + 1 -- 115 */
-	{ 1024,	819,	616,	410,	207,	2 },
-#endif
-
-	/* x^512 + x^411 + x^308 + x^208 + x^104 + x + 1 -- 225 */
-	{ 512,	411,	308,	208,	104,	1 },
-#if 0				/* Alternates */
-	/* x^512 + x^409 + x^307 + x^206 + x^102 + x^2 + 1 -- 95 */
-	{ 512,	409,	307,	206,	102,	2 },
-	/* x^512 + x^409 + x^309 + x^205 + x^103 + x^2 + 1 -- 95 */
-	{ 512,	409,	309,	205,	103,	2 },
-#endif
-
-	/* x^256 + x^205 + x^155 + x^101 + x^52 + x + 1 -- 125 */
-	{ 256,	205,	155,	101,	52,	1 },
-
-	/* x^128 + x^103 + x^76 + x^51 +x^25 + x + 1 -- 105 */
-	{ 128,	103,	76,	51,	25,	1 },
-#if 0	/* Alternate polynomial */
-	/* x^128 + x^103 + x^78 + x^51 + x^27 + x^2 + 1 -- 70 */
-	{ 128,	103,	78,	51,	27,	2 },
-#endif
-
-	/* x^64 + x^52 + x^39 + x^26 + x^14 + x + 1 -- 15 */
-	{ 64,	52,	39,	26,	14,	1 },
-
-	/* x^32 + x^26 + x^20 + x^14 + x^7 + x + 1 -- 15 */
-	{ 32,	26,	20,	14,	7,	1 },
-
-	{ 0,	0,	0,	0,	0,	0 },
-};
-
-#define POOLBITS	poolwords*32
-#define POOLBYTES	poolwords*4
+#define BATCH_ENTROPY_SIZE 512 /* how many events do we buffer?  BATCH_ENTROPY_SIZE/2 == how many we need before batch-submitting them */
+#define RANDOM_RESEED_INTERVAL 600 /* reseed the PRNG output state every 5mins */
+#define RANDOM_DEFAULT_CIPHER_ALGO "aes"
+#define RANDOM_DEFAULT_DIGEST_ALGO "sha256"
+
+#define DEFAULT_POOL_NUMBER 5 /* 2^{5} = 32 pools */
+#define MAXIMUM_POOL_NUMBER DEFAULT_POOL_NUMBER
+#define MINIMUM_POOL_NUMBER 2 /* 2^{2} = 4 pools */
+#define USE_SHA256
+#define RANDOM_MAX_DIGEST_SIZE 64 /* SHA512/WHIRLPOOL have 64bytes == 512 bits */
+#define RANDOM_MAX_BLOCK_SIZE  16 /* AES256 has 16byte blocks == 128 bits */
+#define RANDOM_MAX_KEY_SIZE    32 /* AES256 has 32byte keys == 256 bits */
+#define USE_AES256
 
 /*
- * For the purposes of better mixing, we use the CRC-32 polynomial as
- * well to make a twisted Generalized Feedback Shift Reigster
- *
- * (See M. Matsumoto & Y. Kurita, 1992.  Twisted GFSR generators.  ACM
- * Transactions on Modeling and Computer Simulation 2(3):179-194.
- * Also see M. Matsumoto & Y. Kurita, 1994.  Twisted GFSR generators
- * II.  ACM Transactions on Mdeling and Computer Simulation 4:254-266)
- *
- * Thanks to Colin Plumb for suggesting this.
- * 
- * We have not analyzed the resultant polynomial to prove it primitive;
- * in fact it almost certainly isn't.  Nonetheless, the irreducible factors
- * of a random large-degree polynomial over GF(2) are more than large enough
- * that periodicity is not a concern.
- * 
- * The input hash is much less sensitive than the output hash.  All
- * that we want of it is that it be a good non-cryptographic hash;
- * i.e. it not produce collisions when fed "random" data of the sort
- * we expect to see.  As long as the pool state differs for different
- * inputs, we have preserved the input entropy and done a good job.
- * The fact that an intelligent attacker can construct inputs that
- * will produce controlled alterations to the pool's state is not
- * important because we don't consider such inputs to contribute any
- * randomness.  The only property we need with respect to them is that
- * the attacker can't increase his/her knowledge of the pool's state.
- * Since all additions are reversible (knowing the final state and the
- * input, you can reconstruct the initial state), if an attacker has
- * any uncertainty about the initial state, he/she can only shuffle
- * that uncertainty about, but never cause any collisions (which would
- * decrease the uncertainty).
- *
- * The chosen system lets the state of the pool be (essentially) the input
- * modulo the generator polymnomial.  Now, for random primitive polynomials,
- * this is a universal class of hash functions, meaning that the chance
- * of a collision is limited by the attacker's knowledge of the generator
- * polynomail, so if it is chosen at random, an attacker can never force
- * a collision.  Here, we use a fixed polynomial, but we *can* assume that
- * ###--> it is unknown to the processes generating the input entropy. <-###
- * Because of this important property, this is a good, collision-resistant
- * hash; hash collisions will occur no more often than chance.
+ * Throttle mouse/keyboard/disk/interrupt entropy input to only add after this many jiffies/rdtsc counts
  */
+#define RANDOM_INPUT_THROTTLE  1000
 
 /*
  * Linux 2.2 compatibility
@@ -399,8 +384,10 @@
 /*
  * Static global variables
  */
+static int random_entropy_count; // jlc & cam have been together for 5 and 2/3 years as of the time this was written;
+static int random_read_wakeup_thresh = 0; // ignored now.
+static int random_write_wakeup_thresh = 0; // ignored now.
 static struct entropy_store *random_state; /* The default global store */
-static struct entropy_store *sec_random_state; /* secondary store */
 static DECLARE_WAIT_QUEUE_HEAD(random_read_wait);
 static DECLARE_WAIT_QUEUE_HEAD(random_write_wait);
 
@@ -411,71 +398,6 @@
 static void sysctl_init_random(struct entropy_store *random_state);
 #endif
 
-/*****************************************************************
- *
- * Utility functions, with some ASM defined functions for speed
- * purposes
- * 
- *****************************************************************/
-
-/*
- * Unfortunately, while the GCC optimizer for the i386 understands how
- * to optimize a static rotate left of x bits, it doesn't know how to
- * deal with a variable rotate of x bits.  So we use a bit of asm magic.
- */
-#if (!defined (__i386__))
-static inline __u32 rotate_left(int i, __u32 word)
-{
-	return (word << i) | (word >> (32 - i));
-	
-}
-#else
-static inline __u32 rotate_left(int i, __u32 word)
-{
-	__asm__("roll %%cl,%0"
-		:"=r" (word)
-		:"0" (word),"c" (i));
-	return word;
-}
-#endif
-
-/*
- * More asm magic....
- * 
- * For entropy estimation, we need to do an integral base 2
- * logarithm.  
- *
- * Note the "12bits" suffix - this is used for numbers between
- * 0 and 4095 only.  This allows a few shortcuts.
- */
-#if 0	/* Slow but clear version */
-static inline __u32 int_ln_12bits(__u32 word)
-{
-	__u32 nbits = 0;
-	
-	while (word >>= 1)
-		nbits++;
-	return nbits;
-}
-#else	/* Faster (more clever) version, courtesy Colin Plumb */
-static inline __u32 int_ln_12bits(__u32 word)
-{
-	/* Smear msbit right to make an n-bit mask */
-	word |= word >> 8;
-	word |= word >> 4;
-	word |= word >> 2;
-	word |= word >> 1;
-	/* Remove one bit to make this a logarithm */
-	word >>= 1;
-	/* Count the bits set in the word */
-	word -= (word >> 1) & 0x555;
-	word = (word & 0x333) + ((word >> 2) & 0x333);
-	word += (word >> 4);
-	word += (word >> 8);
-	return word & 15;
-}
-#endif
-
 #if 0
 #define DEBUG_ENT(fmt, arg...) printk(KERN_DEBUG "random: " fmt, ## arg)
 #else
@@ -490,15 +412,28 @@
  **********************************************************************/
 
 struct entropy_store {
-	/* mostly-read data: */
-	struct poolinfo poolinfo;
-	__u32		*pool;
+	const char *digestAlgo;
+	unsigned int  digestsize;
+	struct crypto_tfm *pools[1<<MAXIMUM_POOL_NUMBER];
+	/* optional, handy for statistics */
+	unsigned int pools_bytes[1<<MAXIMUM_POOL_NUMBER];
+
+	const char *cipherAlgo;
+	unsigned char key[RANDOM_MAX_DIGEST_SIZE];     /* the key */
+	unsigned int  keysize;
+	unsigned char iv[16];      /* the CTR value */
+	unsigned int  blocksize;
+	struct crypto_tfm *cipher;
+
+	unsigned int  pool_number; /* 2^pool_number # of pools */
+	unsigned int  pool_index;  /* current pool to add into */
+	unsigned int  pool0_len;   /* size of the first pool */
+	unsigned int  reseed_count; /* number of time we have reset */
+	struct crypto_tfm *reseedHash; /* digest used during random_reseed() */
+	struct crypto_tfm *networkCipher; /* cipher used for network randomness */
+	char networkCipher_ready;         /* flag indicating if networkCipher has been seeded */
 
-	/* read-write data: */
 	spinlock_t lock ____cacheline_aligned_in_smp;
-	unsigned	add_ptr;
-	int		entropy_count;
-	int		input_rotate;
 };
 
 /*
@@ -507,151 +442,107 @@
  *
  * Returns an negative error if there is a problem.
  */
-static int create_entropy_store(int size, struct entropy_store **ret_bucket)
+static int create_entropy_store(int pool_number_arg, struct entropy_store **ret_bucket)
 {
 	struct	entropy_store	*r;
-	struct	poolinfo	*p;
-	int	poolwords;
+	unsigned long pool_number;
+	int 	keysize, i, j;
 
-	poolwords = (size + 3) / 4; /* Convert bytes->words */
-	/* The pool size must be a multiple of 16 32-bit words */
-	poolwords = ((poolwords + 15) / 16) * 16;
-
-	for (p = poolinfo_table; p->poolwords; p++) {
-		if (poolwords == p->poolwords)
-			break;
-	}
-	if (p->poolwords == 0)
-		return -EINVAL;
+	pool_number = pool_number_arg;
+	if (pool_number < MINIMUM_POOL_NUMBER)
+		pool_number = MINIMUM_POOL_NUMBER;
 
 	r = kmalloc(sizeof(struct entropy_store), GFP_KERNEL);
-	if (!r)
+	if (!r) {
 		return -ENOMEM;
+	}
 
 	memset (r, 0, sizeof(struct entropy_store));
-	r->poolinfo = *p;
+	r->pool_number = pool_number;
+	r->digestAlgo = RANDOM_DEFAULT_DIGEST_ALGO;
 
-	r->pool = kmalloc(POOLBYTES, GFP_KERNEL);
-	if (!r->pool) {
-		kfree(r);
-		return -ENOMEM;
+DEBUG_PRINTK("create_entropy_store() pools=%u index=%u\n", 1<<pool_number, r->pool_index);
+	for (i=0; i<(1<<pool_number); i++) {
+DEBUG_PRINTK("create_entropy_store() i=%i index=%u\n", i, r->pool_index);
+		r->pools[i] = crypto_alloc_tfm(r->digestAlgo, 0);
+		if (r->pools[i] == NULL) {
+		  	for (j=0; j<i; j++) {
+				if (r->pools[j] != NULL) {
+					kfree(r->pools[j]);
+				}
+			}
+			kfree(r);
+			return -ENOMEM;
+		}
+		crypto_digest_init( r->pools[i] );
 	}
-	memset(r->pool, 0, POOLBYTES);
 	r->lock = SPIN_LOCK_UNLOCKED;
 	*ret_bucket = r;
-	return 0;
-}
 
-/* Clear the entropy pool and associated counters. */
-static void clear_entropy_store(struct entropy_store *r)
-{
-	r->add_ptr = 0;
-	r->entropy_count = 0;
-	r->input_rotate = 0;
-	memset(r->pool, 0, r->poolinfo.POOLBYTES);
-}
-#ifdef CONFIG_SYSCTL
-static void free_entropy_store(struct entropy_store *r)
-{
-	if (r->pool)
-		kfree(r->pool);
-	kfree(r);
-}
-#endif
-/*
- * This function adds a byte into the entropy "pool".  It does not
- * update the entropy estimate.  The caller should call
- * credit_entropy_store if this is appropriate.
- * 
- * The pool is stirred with a primitive polynomial of the appropriate
- * degree, and then twisted.  We twist by three bits at a time because
- * it's cheap to do so and helps slightly in the expected case where
- * the entropy is concentrated in the low-order bits.
- */
-static void add_entropy_words(struct entropy_store *r, const __u32 *in,
-			      int nwords)
-{
-	static __u32 const twist_table[8] = {
-		         0, 0x3b6e20c8, 0x76dc4190, 0x4db26158,
-		0xedb88320, 0xd6d6a3e8, 0x9b64c2b0, 0xa00ae278 };
-	unsigned long i, add_ptr, tap1, tap2, tap3, tap4, tap5;
-	int new_rotate, input_rotate;
-	int wordmask = r->poolinfo.poolwords - 1;
-	__u32 w, next_w;
-	unsigned long flags;
+	r->cipherAlgo = RANDOM_DEFAULT_CIPHER_ALGO;
+	if ((r->cipher=crypto_alloc_tfm(r->cipherAlgo, 0)) == NULL) {
+	  	return -ENOMEM;
+	}
 
-	/* Taps are constant, so we can load them without holding r->lock.  */
-	tap1 = r->poolinfo.tap1;
-	tap2 = r->poolinfo.tap2;
-	tap3 = r->poolinfo.tap3;
-	tap4 = r->poolinfo.tap4;
-	tap5 = r->poolinfo.tap5;
-	next_w = *in++;
+	/* If the HASH's output is greater then the cipher's keysize, truncate
+	 * to the cipher's keysize */
+	keysize = crypto_tfm_alg_max_keysize(r->cipher);
+	r->digestsize = crypto_tfm_alg_digestsize(r->pools[0]);
+	r->blocksize = crypto_tfm_alg_blocksize(r->cipher);
 
-	spin_lock_irqsave(&r->lock, flags);
-	prefetch_range(r->pool, wordmask);
-	input_rotate = r->input_rotate;
-	add_ptr = r->add_ptr;
-
-	while (nwords--) {
-		w = rotate_left(input_rotate, next_w);
-		if (nwords > 0)
-			next_w = *in++;
-		i = add_ptr = (add_ptr - 1) & wordmask;
-		/*
-		 * Normally, we add 7 bits of rotation to the pool.
-		 * At the beginning of the pool, add an extra 7 bits
-		 * rotation, so that successive passes spread the
-		 * input bits across the pool evenly.
-		 */
-		new_rotate = input_rotate + 14;
-		if (i)
-			new_rotate = input_rotate + 7;
-		input_rotate = new_rotate & 31;
-
-		/* XOR in the various taps */
-		w ^= r->pool[(i + tap1) & wordmask];
-		w ^= r->pool[(i + tap2) & wordmask];
-		w ^= r->pool[(i + tap3) & wordmask];
-		w ^= r->pool[(i + tap4) & wordmask];
-		w ^= r->pool[(i + tap5) & wordmask];
-		w ^= r->pool[i];
-		r->pool[i] = (w >> 3) ^ twist_table[w & 7];
+	r->keysize = (keysize < r->digestsize) ? keysize : r->digestsize;
+
+	if (crypto_cipher_setkey(r->cipher, r->key, r->keysize)) {
+		return -EINVAL;
 	}
 
-	r->input_rotate = input_rotate;
-	r->add_ptr = add_ptr;
+	/* digest used duing random-reseed() */
+	if ((r->reseedHash=crypto_alloc_tfm(r->digestAlgo, 0)) == NULL) {
+		return -ENOMEM;
+	}
+	/* cipher used for network randomness, init to key={zerovector}
+	 * for now */
+	if ((r->networkCipher=crypto_alloc_tfm(r->cipherAlgo, 0)) == NULL) {
+		return -ENOMEM;
+	}
 
-	spin_unlock_irqrestore(&r->lock, flags);
+	return 0;
 }
 
 /*
- * Credit (or debit) the entropy store with n bits of entropy
+ * This function adds a byte into the entropy "pool".
  */
-static void credit_entropy_store(struct entropy_store *r, int nbits)
+static void add_entropy_words(struct entropy_store *r, const __u32 *in,
+			      int nwords)
 {
 	unsigned long flags;
+	struct scatterlist sg[1];
+	static unsigned int totalBytes=0;
+
+	if (r == NULL) {
+		return;
+	}
 
 	spin_lock_irqsave(&r->lock, flags);
 
-	if (r->entropy_count + nbits < 0) {
-		DEBUG_ENT("negative entropy/overflow (%d+%d)\n",
-			  r->entropy_count, nbits);
-		r->entropy_count = 0;
-	} else if (r->entropy_count + nbits > r->poolinfo.POOLBITS) {
-		r->entropy_count = r->poolinfo.POOLBITS;
-	} else {
-		r->entropy_count += nbits;
-		if (nbits)
-			DEBUG_ENT("%04d %04d : added %d bits to %s\n",
-				  random_state->entropy_count,
-				  sec_random_state->entropy_count,
-				  nbits,
-				  r == sec_random_state ? "secondary" :
-				  r == random_state ? "primary" : "unknown");
+	totalBytes += nwords * sizeof(__u32);
+	r->pools_bytes[r->pool_index] += nwords * sizeof(__u32);
+
+	sg[0].page = virt_to_page(in);
+	sg[0].offset = offset_in_page(in);
+	sg[0].length = nwords*sizeof(__u32);
+	crypto_digest_update(r->pools[r->pool_index], sg, 1);
+
+	if (r->pool_index == 0) {
+		r->pool0_len += nwords*sizeof(__u32);
 	}
 
+	/* idx = (idx + 1) mod ( (2^N)-1 ) */
+	r->pool_index = (r->pool_index + 1) & ((1<<r->pool_number)-1);
+
 	spin_unlock_irqrestore(&r->lock, flags);
+DEBUG_PRINTK("0 add_entropy_words() nwords=%u pool[i].bytes=%u total=%u\n",
+	nwords, r->pools_bytes[r->pool_index], totalBytes);
 }
 
 /**********************************************************************
@@ -668,10 +559,10 @@
 };
 
 static struct sample *batch_entropy_pool, *batch_entropy_copy;
-static int	batch_head, batch_tail;
+static int      batch_head, batch_tail;
 static spinlock_t batch_lock = SPIN_LOCK_UNLOCKED;
 
-static int	batch_max;
+static int      batch_max;
 static void batch_entropy_process(void *private_);
 static DECLARE_WORK(batch_work, batch_entropy_process, NULL);
 
@@ -703,19 +594,20 @@
 	int new;
 	unsigned long flags;
 
-	if (!batch_max)
+	if (!batch_max) {
 		return;
+	}
 
 	spin_lock_irqsave(&batch_lock, flags);
 
 	batch_entropy_pool[batch_head].data[0] = a;
 	batch_entropy_pool[batch_head].data[1] = b;
-	batch_entropy_pool[batch_head].credit = num;
+	batch_entropy_pool[batch_head].credit = 0;
 
 	if (((batch_head - batch_tail) & (batch_max-1)) >= (batch_max / 2)) {
 		/*
-		 * Schedule it for the next timer tick:
-		 */
+		* Schedule it for the next timer tick:
+		*/
 		schedule_delayed_work(&batch_work, 1);
 	}
 
@@ -733,13 +625,11 @@
 
 /*
  * Flush out the accumulated entropy operations, adding entropy to the passed
- * store (normally random_state).  If that store has enough entropy, alternate
- * between randomizing the data of the primary and secondary stores.
+ * store (normally random_state).
  */
 static void batch_entropy_process(void *private_)
 {
-	struct entropy_store *r	= (struct entropy_store *) private_, *p;
-	int max_entropy = r->poolinfo.POOLBITS;
+	struct entropy_store *r = (struct entropy_store *) private_;
 	unsigned head, tail;
 
 	/* Mixing into the pool is expensive, so copy over the batch
@@ -750,7 +640,7 @@
 	spin_lock_irq(&batch_lock);
 
 	memcpy(batch_entropy_copy, batch_entropy_pool,
-	       batch_max*sizeof(struct sample));
+	batch_max*sizeof(struct sample));
 
 	head = batch_head;
 	tail = batch_tail;
@@ -758,61 +648,30 @@
 
 	spin_unlock_irq(&batch_lock);
 
-	p = r;
 	while (head != tail) {
-		if (r->entropy_count >= max_entropy) {
-			r = (r == sec_random_state) ?	random_state :
-							sec_random_state;
-			max_entropy = r->poolinfo.POOLBITS;
-		}
 		add_entropy_words(r, batch_entropy_copy[tail].data, 2);
-		credit_entropy_store(r, batch_entropy_copy[tail].credit);
 		tail = (tail+1) & (batch_max-1);
 	}
-	if (p->entropy_count >= random_read_wakeup_thresh)
-		wake_up_interruptible(&random_read_wait);
 }
 
+
 /*********************************************************************
  *
  * Entropy input management
  *
  *********************************************************************/
 
-/* There is one of these per entropy source */
-struct timer_rand_state {
-	__u32		last_time;
-	__s32		last_delta,last_delta2;
-	int		dont_count_entropy:1;
-};
-
-static struct timer_rand_state keyboard_timer_state;
-static struct timer_rand_state mouse_timer_state;
-static struct timer_rand_state extract_timer_state;
-static struct timer_rand_state *irq_timer_state[NR_IRQS];
-
 /*
  * This function adds entropy to the entropy "pool" by using timing
- * delays.  It uses the timer_rand_state structure to make an estimate
- * of how many bits of entropy this call has added to the pool.
+ * delays.
  *
  * The number "num" is also added to the pool - it should somehow describe
- * the type of event which just happened.  This is currently 0-255 for
- * keyboard scan codes, and 256 upwards for interrupts.
- * On the i386, this is assumed to be at most 16 bits, and the high bits
- * are used for a high-resolution timer.
- *
+ * the type of event which just happened.
  */
-static void add_timer_randomness(struct timer_rand_state *state, unsigned num)
+static void add_timer_randomness(unsigned num)
 {
-	__u32		time;
-	__s32		delta, delta2, delta3;
-	int		entropy = 0;
-
-	/* if over the trickle threshold, use only 1 in 4096 samples */
-	if ( random_state->entropy_count > trickle_thresh &&
-	     (__get_cpu_var(trickle_count)++ & 0xfff))
-		return;
+	static __u32	lasttime=0;
+	__u32	time;
 
 #if defined (__i386__) || defined (__x86_64__)
 	if (cpu_has_tsc) {
@@ -822,480 +681,57 @@
 	} else {
 		time = jiffies;
 	}
-#elif defined (__sparc_v9__)
-	unsigned long tick = tick_ops->get_tick();
-
-	time = (unsigned int) tick;
-	num ^= (tick >> 32UL);
 #else
 	time = jiffies;
 #endif
 
-	/*
-	 * Calculate number of bits of randomness we probably added.
-	 * We take into account the first, second and third-order deltas
-	 * in order to make our estimate.
-	 */
-	if (!state->dont_count_entropy) {
-		delta = time - state->last_time;
-		state->last_time = time;
-
-		delta2 = delta - state->last_delta;
-		state->last_delta = delta;
-
-		delta3 = delta2 - state->last_delta2;
-		state->last_delta2 = delta2;
-
-		if (delta < 0)
-			delta = -delta;
-		if (delta2 < 0)
-			delta2 = -delta2;
-		if (delta3 < 0)
-			delta3 = -delta3;
-		if (delta > delta2)
-			delta = delta2;
-		if (delta > delta3)
-			delta = delta3;
-
-		/*
-		 * delta is now minimum absolute delta.
-		 * Round down by 1 bit on general principles,
-		 * and limit entropy entimate to 12 bits.
-		 */
-		delta >>= 1;
-		delta &= (1 << 12) - 1;
-
-		entropy = int_ln_12bits(delta);
+	/* Throttle our input to add_entropy_words() */
+	if ((time-lasttime) < RANDOM_INPUT_THROTTLE) {
+		return;
 	}
-	batch_entropy_store(num, time, entropy);
+	lasttime = time;
+
+	batch_entropy_store(num, time, 0);
 }
 
 void add_keyboard_randomness(unsigned char scancode)
 {
-	static unsigned char last_scancode;
-	/* ignore autorepeat (multiple key down w/o key up) */
-	if (scancode != last_scancode) {
-		last_scancode = scancode;
-		add_timer_randomness(&keyboard_timer_state, scancode);
-	}
+	/* jlcooke: we don't care about auto-repeats,
+	 * they can't hurt us anymore */
+	add_timer_randomness(scancode);
 }
 
 EXPORT_SYMBOL(add_keyboard_randomness);
 
 void add_mouse_randomness(__u32 mouse_data)
 {
-	add_timer_randomness(&mouse_timer_state, mouse_data);
+	add_timer_randomness(mouse_data);
 }
 
 EXPORT_SYMBOL(add_mouse_randomness);
 
 void add_interrupt_randomness(int irq)
 {
-	if (irq >= NR_IRQS || irq_timer_state[irq] == 0)
+	if (irq >= NR_IRQS)
 		return;
 
-	add_timer_randomness(irq_timer_state[irq], 0x100+irq);
+	/* jlcooke: no need to add 0x100 ... not random! :P */
+	add_timer_randomness(irq);
 }
 
 EXPORT_SYMBOL(add_interrupt_randomness);
 
 void add_disk_randomness(struct gendisk *disk)
 {
-	if (!disk || !disk->random)
+	if (!disk)
 		return;
-	/* first major is 1, so we get >= 0x200 here */
-	add_timer_randomness(disk->random, 0x100+MKDEV(disk->major, disk->first_minor));
+
+	/* jlcooke: no need to add 0x100 ... not random! :P */
+	add_timer_randomness(MKDEV(disk->major, disk->first_minor));
 }
 
 EXPORT_SYMBOL(add_disk_randomness);
 
-/******************************************************************
- *
- * Hash function definition
- *
- *******************************************************************/
-
-/*
- * This chunk of code defines a function
- * void HASH_TRANSFORM(__u32 digest[HASH_BUFFER_SIZE + HASH_EXTRA_SIZE],
- * 		__u32 const data[16])
- * 
- * The function hashes the input data to produce a digest in the first
- * HASH_BUFFER_SIZE words of the digest[] array, and uses HASH_EXTRA_SIZE
- * more words for internal purposes.  (This buffer is exported so the
- * caller can wipe it once rather than this code doing it each call,
- * and tacking it onto the end of the digest[] array is the quick and
- * dirty way of doing it.)
- *
- * It so happens that MD5 and SHA share most of the initial vector
- * used to initialize the digest[] array before the first call:
- * 1) 0x67452301
- * 2) 0xefcdab89
- * 3) 0x98badcfe
- * 4) 0x10325476
- * 5) 0xc3d2e1f0 (SHA only)
- * 
- * For /dev/random purposes, the length of the data being hashed is
- * fixed in length, so appending a bit count in the usual way is not
- * cryptographically necessary.
- */
-
-#ifdef USE_SHA
-
-#define HASH_BUFFER_SIZE 5
-#define HASH_EXTRA_SIZE 80
-#define HASH_TRANSFORM SHATransform
-
-/* Various size/speed tradeoffs are available.  Choose 0..3. */
-#define SHA_CODE_SIZE 0
-
-/*
- * SHA transform algorithm, taken from code written by Peter Gutmann,
- * and placed in the public domain.
- */
-
-/* The SHA f()-functions.  */
-
-#define f1(x,y,z)   ( z ^ (x & (y^z)) )		/* Rounds  0-19: x ? y : z */
-#define f2(x,y,z)   (x ^ y ^ z)			/* Rounds 20-39: XOR */
-#define f3(x,y,z)   ( (x & y) + (z & (x ^ y)) )	/* Rounds 40-59: majority */
-#define f4(x,y,z)   (x ^ y ^ z)			/* Rounds 60-79: XOR */
-
-/* The SHA Mysterious Constants */
-
-#define K1  0x5A827999L			/* Rounds  0-19: sqrt(2) * 2^30 */
-#define K2  0x6ED9EBA1L			/* Rounds 20-39: sqrt(3) * 2^30 */
-#define K3  0x8F1BBCDCL			/* Rounds 40-59: sqrt(5) * 2^30 */
-#define K4  0xCA62C1D6L			/* Rounds 60-79: sqrt(10) * 2^30 */
-
-#define ROTL(n,X)  ( ( ( X ) << n ) | ( ( X ) >> ( 32 - n ) ) )
-
-#define subRound(a, b, c, d, e, f, k, data) \
-    ( e += ROTL( 5, a ) + f( b, c, d ) + k + data, b = ROTL( 30, b ) )
-
-
-static void SHATransform(__u32 digest[85], __u32 const data[16])
-{
-    __u32 A, B, C, D, E;     /* Local vars */
-    __u32 TEMP;
-    int	i;
-#define W (digest + HASH_BUFFER_SIZE)	/* Expanded data array */
-
-    /*
-     * Do the preliminary expansion of 16 to 80 words.  Doing it
-     * out-of-line line this is faster than doing it in-line on
-     * register-starved machines like the x86, and not really any
-     * slower on real processors.
-     */
-    memcpy(W, data, 16*sizeof(__u32));
-    for (i = 0; i < 64; i++) {
-	    TEMP = W[i] ^ W[i+2] ^ W[i+8] ^ W[i+13];
-	    W[i+16] = ROTL(1, TEMP);
-    }
-
-    /* Set up first buffer and local data buffer */
-    A = digest[ 0 ];
-    B = digest[ 1 ];
-    C = digest[ 2 ];
-    D = digest[ 3 ];
-    E = digest[ 4 ];
-
-    /* Heavy mangling, in 4 sub-rounds of 20 iterations each. */
-#if SHA_CODE_SIZE == 0
-    /*
-     * Approximately 50% of the speed of the largest version, but
-     * takes up 1/16 the space.  Saves about 6k on an i386 kernel.
-     */
-    for (i = 0; i < 80; i++) {
-	if (i < 40) {
-	    if (i < 20)
-		TEMP = f1(B, C, D) + K1;
-	    else
-		TEMP = f2(B, C, D) + K2;
-	} else {
-	    if (i < 60)
-		TEMP = f3(B, C, D) + K3;
-	    else
-		TEMP = f4(B, C, D) + K4;
-	}
-	TEMP += ROTL(5, A) + E + W[i];
-	E = D; D = C; C = ROTL(30, B); B = A; A = TEMP;
-    }
-#elif SHA_CODE_SIZE == 1
-    for (i = 0; i < 20; i++) {
-	TEMP = f1(B, C, D) + K1 + ROTL(5, A) + E + W[i];
-	E = D; D = C; C = ROTL(30, B); B = A; A = TEMP;
-    }
-    for (; i < 40; i++) {
-	TEMP = f2(B, C, D) + K2 + ROTL(5, A) + E + W[i];
-	E = D; D = C; C = ROTL(30, B); B = A; A = TEMP;
-    }
-    for (; i < 60; i++) {
-	TEMP = f3(B, C, D) + K3 + ROTL(5, A) + E + W[i];
-	E = D; D = C; C = ROTL(30, B); B = A; A = TEMP;
-    }
-    for (; i < 80; i++) {
-	TEMP = f4(B, C, D) + K4 + ROTL(5, A) + E + W[i];
-	E = D; D = C; C = ROTL(30, B); B = A; A = TEMP;
-    }
-#elif SHA_CODE_SIZE == 2
-    for (i = 0; i < 20; i += 5) {
-	subRound( A, B, C, D, E, f1, K1, W[ i   ] );
-	subRound( E, A, B, C, D, f1, K1, W[ i+1 ] );
-	subRound( D, E, A, B, C, f1, K1, W[ i+2 ] );
-	subRound( C, D, E, A, B, f1, K1, W[ i+3 ] );
-	subRound( B, C, D, E, A, f1, K1, W[ i+4 ] );
-    }
-    for (; i < 40; i += 5) {
-	subRound( A, B, C, D, E, f2, K2, W[ i   ] );
-	subRound( E, A, B, C, D, f2, K2, W[ i+1 ] );
-	subRound( D, E, A, B, C, f2, K2, W[ i+2 ] );
-	subRound( C, D, E, A, B, f2, K2, W[ i+3 ] );
-	subRound( B, C, D, E, A, f2, K2, W[ i+4 ] );
-    }
-    for (; i < 60; i += 5) {
-	subRound( A, B, C, D, E, f3, K3, W[ i   ] );
-	subRound( E, A, B, C, D, f3, K3, W[ i+1 ] );
-	subRound( D, E, A, B, C, f3, K3, W[ i+2 ] );
-	subRound( C, D, E, A, B, f3, K3, W[ i+3 ] );
-	subRound( B, C, D, E, A, f3, K3, W[ i+4 ] );
-    }
-    for (; i < 80; i += 5) {
-	subRound( A, B, C, D, E, f4, K4, W[ i   ] );
-	subRound( E, A, B, C, D, f4, K4, W[ i+1 ] );
-	subRound( D, E, A, B, C, f4, K4, W[ i+2 ] );
-	subRound( C, D, E, A, B, f4, K4, W[ i+3 ] );
-	subRound( B, C, D, E, A, f4, K4, W[ i+4 ] );
-    }
-#elif SHA_CODE_SIZE == 3 /* Really large version */
-    subRound( A, B, C, D, E, f1, K1, W[  0 ] );
-    subRound( E, A, B, C, D, f1, K1, W[  1 ] );
-    subRound( D, E, A, B, C, f1, K1, W[  2 ] );
-    subRound( C, D, E, A, B, f1, K1, W[  3 ] );
-    subRound( B, C, D, E, A, f1, K1, W[  4 ] );
-    subRound( A, B, C, D, E, f1, K1, W[  5 ] );
-    subRound( E, A, B, C, D, f1, K1, W[  6 ] );
-    subRound( D, E, A, B, C, f1, K1, W[  7 ] );
-    subRound( C, D, E, A, B, f1, K1, W[  8 ] );
-    subRound( B, C, D, E, A, f1, K1, W[  9 ] );
-    subRound( A, B, C, D, E, f1, K1, W[ 10 ] );
-    subRound( E, A, B, C, D, f1, K1, W[ 11 ] );
-    subRound( D, E, A, B, C, f1, K1, W[ 12 ] );
-    subRound( C, D, E, A, B, f1, K1, W[ 13 ] );
-    subRound( B, C, D, E, A, f1, K1, W[ 14 ] );
-    subRound( A, B, C, D, E, f1, K1, W[ 15 ] );
-    subRound( E, A, B, C, D, f1, K1, W[ 16 ] );
-    subRound( D, E, A, B, C, f1, K1, W[ 17 ] );
-    subRound( C, D, E, A, B, f1, K1, W[ 18 ] );
-    subRound( B, C, D, E, A, f1, K1, W[ 19 ] );
-
-    subRound( A, B, C, D, E, f2, K2, W[ 20 ] );
-    subRound( E, A, B, C, D, f2, K2, W[ 21 ] );
-    subRound( D, E, A, B, C, f2, K2, W[ 22 ] );
-    subRound( C, D, E, A, B, f2, K2, W[ 23 ] );
-    subRound( B, C, D, E, A, f2, K2, W[ 24 ] );
-    subRound( A, B, C, D, E, f2, K2, W[ 25 ] );
-    subRound( E, A, B, C, D, f2, K2, W[ 26 ] );
-    subRound( D, E, A, B, C, f2, K2, W[ 27 ] );
-    subRound( C, D, E, A, B, f2, K2, W[ 28 ] );
-    subRound( B, C, D, E, A, f2, K2, W[ 29 ] );
-    subRound( A, B, C, D, E, f2, K2, W[ 30 ] );
-    subRound( E, A, B, C, D, f2, K2, W[ 31 ] );
-    subRound( D, E, A, B, C, f2, K2, W[ 32 ] );
-    subRound( C, D, E, A, B, f2, K2, W[ 33 ] );
-    subRound( B, C, D, E, A, f2, K2, W[ 34 ] );
-    subRound( A, B, C, D, E, f2, K2, W[ 35 ] );
-    subRound( E, A, B, C, D, f2, K2, W[ 36 ] );
-    subRound( D, E, A, B, C, f2, K2, W[ 37 ] );
-    subRound( C, D, E, A, B, f2, K2, W[ 38 ] );
-    subRound( B, C, D, E, A, f2, K2, W[ 39 ] );
-    
-    subRound( A, B, C, D, E, f3, K3, W[ 40 ] );
-    subRound( E, A, B, C, D, f3, K3, W[ 41 ] );
-    subRound( D, E, A, B, C, f3, K3, W[ 42 ] );
-    subRound( C, D, E, A, B, f3, K3, W[ 43 ] );
-    subRound( B, C, D, E, A, f3, K3, W[ 44 ] );
-    subRound( A, B, C, D, E, f3, K3, W[ 45 ] );
-    subRound( E, A, B, C, D, f3, K3, W[ 46 ] );
-    subRound( D, E, A, B, C, f3, K3, W[ 47 ] );
-    subRound( C, D, E, A, B, f3, K3, W[ 48 ] );
-    subRound( B, C, D, E, A, f3, K3, W[ 49 ] );
-    subRound( A, B, C, D, E, f3, K3, W[ 50 ] );
-    subRound( E, A, B, C, D, f3, K3, W[ 51 ] );
-    subRound( D, E, A, B, C, f3, K3, W[ 52 ] );
-    subRound( C, D, E, A, B, f3, K3, W[ 53 ] );
-    subRound( B, C, D, E, A, f3, K3, W[ 54 ] );
-    subRound( A, B, C, D, E, f3, K3, W[ 55 ] );
-    subRound( E, A, B, C, D, f3, K3, W[ 56 ] );
-    subRound( D, E, A, B, C, f3, K3, W[ 57 ] );
-    subRound( C, D, E, A, B, f3, K3, W[ 58 ] );
-    subRound( B, C, D, E, A, f3, K3, W[ 59 ] );
-
-    subRound( A, B, C, D, E, f4, K4, W[ 60 ] );
-    subRound( E, A, B, C, D, f4, K4, W[ 61 ] );
-    subRound( D, E, A, B, C, f4, K4, W[ 62 ] );
-    subRound( C, D, E, A, B, f4, K4, W[ 63 ] );
-    subRound( B, C, D, E, A, f4, K4, W[ 64 ] );
-    subRound( A, B, C, D, E, f4, K4, W[ 65 ] );
-    subRound( E, A, B, C, D, f4, K4, W[ 66 ] );
-    subRound( D, E, A, B, C, f4, K4, W[ 67 ] );
-    subRound( C, D, E, A, B, f4, K4, W[ 68 ] );
-    subRound( B, C, D, E, A, f4, K4, W[ 69 ] );
-    subRound( A, B, C, D, E, f4, K4, W[ 70 ] );
-    subRound( E, A, B, C, D, f4, K4, W[ 71 ] );
-    subRound( D, E, A, B, C, f4, K4, W[ 72 ] );
-    subRound( C, D, E, A, B, f4, K4, W[ 73 ] );
-    subRound( B, C, D, E, A, f4, K4, W[ 74 ] );
-    subRound( A, B, C, D, E, f4, K4, W[ 75 ] );
-    subRound( E, A, B, C, D, f4, K4, W[ 76 ] );
-    subRound( D, E, A, B, C, f4, K4, W[ 77 ] );
-    subRound( C, D, E, A, B, f4, K4, W[ 78 ] );
-    subRound( B, C, D, E, A, f4, K4, W[ 79 ] );
-#else
-#error Illegal SHA_CODE_SIZE
-#endif
-
-    /* Build message digest */
-    digest[ 0 ] += A;
-    digest[ 1 ] += B;
-    digest[ 2 ] += C;
-    digest[ 3 ] += D;
-    digest[ 4 ] += E;
-
-	/* W is wiped by the caller */
-#undef W
-}
-
-#undef ROTL
-#undef f1
-#undef f2
-#undef f3
-#undef f4
-#undef K1	
-#undef K2
-#undef K3	
-#undef K4	
-#undef subRound
-	
-#else /* !USE_SHA - Use MD5 */
-
-#define HASH_BUFFER_SIZE 4
-#define HASH_EXTRA_SIZE 0
-#define HASH_TRANSFORM MD5Transform
-	
-/*
- * MD5 transform algorithm, taken from code written by Colin Plumb,
- * and put into the public domain
- */
-
-/* The four core functions - F1 is optimized somewhat */
-
-/* #define F1(x, y, z) (x & y | ~x & z) */
-#define F1(x, y, z) (z ^ (x & (y ^ z)))
-#define F2(x, y, z) F1(z, x, y)
-#define F3(x, y, z) (x ^ y ^ z)
-#define F4(x, y, z) (y ^ (x | ~z))
-
-/* This is the central step in the MD5 algorithm. */
-#define MD5STEP(f, w, x, y, z, data, s) \
-	( w += f(x, y, z) + data,  w = w<<s | w>>(32-s),  w += x )
-
-/*
- * The core of the MD5 algorithm, this alters an existing MD5 hash to
- * reflect the addition of 16 longwords of new data.  MD5Update blocks
- * the data and converts bytes into longwords for this routine.
- */
-static void MD5Transform(__u32 buf[HASH_BUFFER_SIZE], __u32 const in[16])
-{
-	__u32 a, b, c, d;
-
-	a = buf[0];
-	b = buf[1];
-	c = buf[2];
-	d = buf[3];
-
-	MD5STEP(F1, a, b, c, d, in[ 0]+0xd76aa478,  7);
-	MD5STEP(F1, d, a, b, c, in[ 1]+0xe8c7b756, 12);
-	MD5STEP(F1, c, d, a, b, in[ 2]+0x242070db, 17);
-	MD5STEP(F1, b, c, d, a, in[ 3]+0xc1bdceee, 22);
-	MD5STEP(F1, a, b, c, d, in[ 4]+0xf57c0faf,  7);
-	MD5STEP(F1, d, a, b, c, in[ 5]+0x4787c62a, 12);
-	MD5STEP(F1, c, d, a, b, in[ 6]+0xa8304613, 17);
-	MD5STEP(F1, b, c, d, a, in[ 7]+0xfd469501, 22);
-	MD5STEP(F1, a, b, c, d, in[ 8]+0x698098d8,  7);
-	MD5STEP(F1, d, a, b, c, in[ 9]+0x8b44f7af, 12);
-	MD5STEP(F1, c, d, a, b, in[10]+0xffff5bb1, 17);
-	MD5STEP(F1, b, c, d, a, in[11]+0x895cd7be, 22);
-	MD5STEP(F1, a, b, c, d, in[12]+0x6b901122,  7);
-	MD5STEP(F1, d, a, b, c, in[13]+0xfd987193, 12);
-	MD5STEP(F1, c, d, a, b, in[14]+0xa679438e, 17);
-	MD5STEP(F1, b, c, d, a, in[15]+0x49b40821, 22);
-
-	MD5STEP(F2, a, b, c, d, in[ 1]+0xf61e2562,  5);
-	MD5STEP(F2, d, a, b, c, in[ 6]+0xc040b340,  9);
-	MD5STEP(F2, c, d, a, b, in[11]+0x265e5a51, 14);
-	MD5STEP(F2, b, c, d, a, in[ 0]+0xe9b6c7aa, 20);
-	MD5STEP(F2, a, b, c, d, in[ 5]+0xd62f105d,  5);
-	MD5STEP(F2, d, a, b, c, in[10]+0x02441453,  9);
-	MD5STEP(F2, c, d, a, b, in[15]+0xd8a1e681, 14);
-	MD5STEP(F2, b, c, d, a, in[ 4]+0xe7d3fbc8, 20);
-	MD5STEP(F2, a, b, c, d, in[ 9]+0x21e1cde6,  5);
-	MD5STEP(F2, d, a, b, c, in[14]+0xc33707d6,  9);
-	MD5STEP(F2, c, d, a, b, in[ 3]+0xf4d50d87, 14);
-	MD5STEP(F2, b, c, d, a, in[ 8]+0x455a14ed, 20);
-	MD5STEP(F2, a, b, c, d, in[13]+0xa9e3e905,  5);
-	MD5STEP(F2, d, a, b, c, in[ 2]+0xfcefa3f8,  9);
-	MD5STEP(F2, c, d, a, b, in[ 7]+0x676f02d9, 14);
-	MD5STEP(F2, b, c, d, a, in[12]+0x8d2a4c8a, 20);
-
-	MD5STEP(F3, a, b, c, d, in[ 5]+0xfffa3942,  4);
-	MD5STEP(F3, d, a, b, c, in[ 8]+0x8771f681, 11);
-	MD5STEP(F3, c, d, a, b, in[11]+0x6d9d6122, 16);
-	MD5STEP(F3, b, c, d, a, in[14]+0xfde5380c, 23);
-	MD5STEP(F3, a, b, c, d, in[ 1]+0xa4beea44,  4);
-	MD5STEP(F3, d, a, b, c, in[ 4]+0x4bdecfa9, 11);
-	MD5STEP(F3, c, d, a, b, in[ 7]+0xf6bb4b60, 16);
-	MD5STEP(F3, b, c, d, a, in[10]+0xbebfbc70, 23);
-	MD5STEP(F3, a, b, c, d, in[13]+0x289b7ec6,  4);
-	MD5STEP(F3, d, a, b, c, in[ 0]+0xeaa127fa, 11);
-	MD5STEP(F3, c, d, a, b, in[ 3]+0xd4ef3085, 16);
-	MD5STEP(F3, b, c, d, a, in[ 6]+0x04881d05, 23);
-	MD5STEP(F3, a, b, c, d, in[ 9]+0xd9d4d039,  4);
-	MD5STEP(F3, d, a, b, c, in[12]+0xe6db99e5, 11);
-	MD5STEP(F3, c, d, a, b, in[15]+0x1fa27cf8, 16);
-	MD5STEP(F3, b, c, d, a, in[ 2]+0xc4ac5665, 23);
-
-	MD5STEP(F4, a, b, c, d, in[ 0]+0xf4292244,  6);
-	MD5STEP(F4, d, a, b, c, in[ 7]+0x432aff97, 10);
-	MD5STEP(F4, c, d, a, b, in[14]+0xab9423a7, 15);
-	MD5STEP(F4, b, c, d, a, in[ 5]+0xfc93a039, 21);
-	MD5STEP(F4, a, b, c, d, in[12]+0x655b59c3,  6);
-	MD5STEP(F4, d, a, b, c, in[ 3]+0x8f0ccc92, 10);
-	MD5STEP(F4, c, d, a, b, in[10]+0xffeff47d, 15);
-	MD5STEP(F4, b, c, d, a, in[ 1]+0x85845dd1, 21);
-	MD5STEP(F4, a, b, c, d, in[ 8]+0x6fa87e4f,  6);
-	MD5STEP(F4, d, a, b, c, in[15]+0xfe2ce6e0, 10);
-	MD5STEP(F4, c, d, a, b, in[ 6]+0xa3014314, 15);
-	MD5STEP(F4, b, c, d, a, in[13]+0x4e0811a1, 21);
-	MD5STEP(F4, a, b, c, d, in[ 4]+0xf7537e82,  6);
-	MD5STEP(F4, d, a, b, c, in[11]+0xbd3af235, 10);
-	MD5STEP(F4, c, d, a, b, in[ 2]+0x2ad7d2bb, 15);
-	MD5STEP(F4, b, c, d, a, in[ 9]+0xeb86d391, 21);
-
-	buf[0] += a;
-	buf[1] += b;
-	buf[2] += c;
-	buf[3] += d;
-}
-
-#undef F1
-#undef F2
-#undef F3
-#undef F4
-#undef MD5STEP
-
-#endif /* !USE_SHA */
-
 /*********************************************************************
  *
  * Entropy extraction routines
@@ -1305,169 +741,145 @@
 #define EXTRACT_ENTROPY_USER		1
 #define EXTRACT_ENTROPY_SECONDARY	2
 #define EXTRACT_ENTROPY_LIMIT		4
-#define TMP_BUF_SIZE			(HASH_BUFFER_SIZE + HASH_EXTRA_SIZE)
-#define SEC_XFER_SIZE			(TMP_BUF_SIZE*4)
+#define CRYPTO_MAX_BLOCK_SIZE		32
 
 static ssize_t extract_entropy(struct entropy_store *r, void * buf,
 			       size_t nbytes, int flags);
 
+static inline void increment_iv(unsigned char *IV, const unsigned int IVsize) {
+	unsigned int i;
+	for (i=0; i<IVsize; i++) {
+		if ( ++(IV[i]) ) {
+			break;
+		}
+	}
+}
+
 /*
- * This utility inline function is responsible for transfering entropy
- * from the primary pool to the secondary extraction pool. We make
- * sure we pull enough for a 'catastrophic reseed'.
- */
-static inline void xfer_secondary_pool(struct entropy_store *r,
-				       size_t nbytes, __u32 *tmp)
-{
-	if (r->entropy_count < nbytes * 8 &&
-	    r->entropy_count < r->poolinfo.POOLBITS) {
-		int bytes = max_t(int, random_read_wakeup_thresh / 8,
-				min_t(int, nbytes, TMP_BUF_SIZE));
-
-		DEBUG_ENT("%04d %04d : going to reseed %s with %d bits "
-			  "(%d of %d requested)\n",
-			  random_state->entropy_count,
-			  sec_random_state->entropy_count,
-			  r == sec_random_state ? "secondary" : "unknown",
-			  bytes * 8, nbytes * 8, r->entropy_count);
-
-		bytes=extract_entropy(random_state, tmp, bytes,
-				      EXTRACT_ENTROPY_LIMIT);
-		add_entropy_words(r, tmp, bytes);
-		credit_entropy_store(r, bytes*8);
+ * Fortuna's Reseed is ...
+ */
+static void random_reseed(struct entropy_store *r) {
+	struct scatterlist sg[1];
+	int i;
+	unsigned char tmp[RANDOM_MAX_DIGEST_SIZE];
+
+	r->reseed_count++;
+
+	crypto_digest_init(r->reseedHash);
+
+	sg[0].page = virt_to_page(r->key);
+	sg[0].offset = offset_in_page(r->key);
+	sg[0].length = r->keysize;
+	crypto_digest_update(r->reseedHash, sg, 1);
+
+#define TESTBIT(VAL, N)\
+  ( ((VAL) >> (N)) & 1 )
+	for (i=0; i<(1<<r->pool_number); i++) {
+		/* using pool[i] if r->reseed_count is divisible by 2^i
+		 * since 2^0 == 1, we always use pool[0]
+		 */
+		if ( (i==0)  ||  TESTBIT(r->reseed_count,i)==0 ) {
+			crypto_digest_final(r->pools[i], tmp);
+
+			sg[0].page = virt_to_page(tmp);
+			sg[0].offset = offset_in_page(tmp);
+			sg[0].length = r->keysize;
+			crypto_digest_update(r->reseedHash, sg, 1);
+
+			crypto_digest_init(r->pools[i]);
+			/* should each pool carry it's past state forward? */
+			crypto_digest_update(r->pools[i], sg, 1);
+		} else {
+			/* pool N can only be used once every 2^N times */
+			break;
+		}
 	}
+#undef TESTBIT
+
+	crypto_digest_final(r->reseedHash, r->key);
+	crypto_cipher_setkey(r->cipher, r->key, r->keysize);
+	increment_iv(r->iv, r->blocksize);
 }
 
 /*
  * This function extracts randomness from the "entropy pool", and
- * returns it in a buffer.  This function computes how many remaining
- * bits of entropy are left in the pool, but it does not restrict the
- * number of bytes that are actually obtained.  If the EXTRACT_ENTROPY_USER
+ * returns it in a buffer.  If the EXTRACT_ENTROPY_USER
  * flag is given, then the buf pointer is assumed to be in user space.
- *
- * If the EXTRACT_ENTROPY_SECONDARY flag is given, then we are actually
- * extracting entropy from the secondary pool, and can refill from the
- * primary pool if needed.
- *
- * Note: extract_entropy() assumes that .poolwords is a multiple of 16 words.
  */
 static ssize_t extract_entropy(struct entropy_store *r, void * buf,
 			       size_t nbytes, int flags)
 {
 	ssize_t ret, i;
-	__u32 tmp[TMP_BUF_SIZE];
-	__u32 x;
+	__u32 tmp[CRYPTO_MAX_BLOCK_SIZE];
 	unsigned long cpuflags;
+	struct scatterlist sgiv[1],
+			   sgtmp[1];
 
-
-	/* Redundant, but just in case... */
-	if (r->entropy_count > r->poolinfo.POOLBITS)
-		r->entropy_count = r->poolinfo.POOLBITS;
-
-	if (flags & EXTRACT_ENTROPY_SECONDARY)
-		xfer_secondary_pool(r, nbytes, tmp);
-
-	/* Hold lock while accounting */
+	/* lock while we're reseeding */
 	spin_lock_irqsave(&r->lock, cpuflags);
 
-	DEBUG_ENT("%04d %04d : trying to extract %d bits from %s\n",
-		  random_state->entropy_count,
-		  sec_random_state->entropy_count,
-		  nbytes * 8,
-		  r == sec_random_state ? "secondary" :
-		  r == random_state ? "primary" : "unknown");
-
-	if (flags & EXTRACT_ENTROPY_LIMIT && nbytes >= r->entropy_count / 8)
-		nbytes = r->entropy_count / 8;
-
-	if (r->entropy_count / 8 >= nbytes)
-		r->entropy_count -= nbytes*8;
-	else
-		r->entropy_count = 0;
+	random_reseed(r);
+	r->pool0_len = 0;
 
-	if (r->entropy_count < random_write_wakeup_thresh)
-		wake_up_interruptible(&random_write_wait);
+	spin_unlock_irqrestore(&r->lock, cpuflags);
 
-	DEBUG_ENT("%04d %04d : debiting %d bits from %s%s\n",
-		  random_state->entropy_count,
-		  sec_random_state->entropy_count,
-		  nbytes * 8,
-		  r == sec_random_state ? "secondary" :
-		  r == random_state ? "primary" : "unknown",
-		  flags & EXTRACT_ENTROPY_LIMIT ? "" : " (unlimited)");
+	/*
+	 * don't output any data until we reseed at least once
+	 * But this causes problems at boot-time.  So weĺl assume since they
+	 * don't wait to the PRNG to setup, they don't really new strong random
+	 * data
+	*/
+	/*
+	if (r->reseed_count == 0)
+		return 0;
+	*/
 
-	spin_unlock_irqrestore(&r->lock, cpuflags);
+	sgiv[0].page = virt_to_page(r->iv);
+	sgiv[0].offset = offset_in_page(r->iv);
+	sgiv[0].length = r->blocksize;
+	sgtmp[0].page = virt_to_page(tmp);
+	sgtmp[0].offset = offset_in_page(tmp);
+	sgtmp[0].length = r->blocksize;
 
 	ret = 0;
 	while (nbytes) {
-		/*
-		 * Check if we need to break out or reschedule....
-		 */
-		if ((flags & EXTRACT_ENTROPY_USER) && need_resched()) {
-			if (signal_pending(current)) {
-				if (ret == 0)
-					ret = -ERESTARTSYS;
-				break;
-			}
+		crypto_cipher_encrypt(r->cipher, sgtmp, sgiv, r->blocksize);
+		increment_iv(r->iv, r->blocksize);
 
-			DEBUG_ENT("%04d %04d : extract feeling sleepy (%d bytes left)\n",
-				  random_state->entropy_count,
-				  sec_random_state->entropy_count, nbytes);
-
-			schedule();
-
-			DEBUG_ENT("%04d %04d : extract woke up\n",
-				  random_state->entropy_count,
-				  sec_random_state->entropy_count);
-		}
-
-		/* Hash the pool to get the output */
-		tmp[0] = 0x67452301;
-		tmp[1] = 0xefcdab89;
-		tmp[2] = 0x98badcfe;
-		tmp[3] = 0x10325476;
-#ifdef USE_SHA
-		tmp[4] = 0xc3d2e1f0;
-#endif
-		/*
-		 * As we hash the pool, we mix intermediate values of
-		 * the hash back into the pool.  This eliminates
-		 * backtracking attacks (where the attacker knows
-		 * the state of the pool plus the current outputs, and
-		 * attempts to find previous ouputs), unless the hash
-		 * function can be inverted.
-		 */
-		for (i = 0, x = 0; i < r->poolinfo.poolwords; i += 16, x+=2) {
-			HASH_TRANSFORM(tmp, r->pool+i);
-			add_entropy_words(r, &tmp[x%HASH_BUFFER_SIZE], 1);
-		}
-		
-		/*
-		 * In case the hash function has some recognizable
-		 * output pattern, we fold it in half.
-		 */
-		for (i = 0; i <  HASH_BUFFER_SIZE/2; i++)
-			tmp[i] ^= tmp[i + (HASH_BUFFER_SIZE+1)/2];
-#if HASH_BUFFER_SIZE & 1	/* There's a middle word to deal with */
-		x = tmp[HASH_BUFFER_SIZE/2];
-		x ^= (x >> 16);		/* Fold it in half */
-		((__u16 *)tmp)[HASH_BUFFER_SIZE-1] = (__u16)x;
-#endif
-		
 		/* Copy data to destination buffer */
-		i = min(nbytes, HASH_BUFFER_SIZE*sizeof(__u32)/2);
+		i = (nbytes < 16) ? nbytes : 16;
 		if (flags & EXTRACT_ENTROPY_USER) {
 			i -= copy_to_user(buf, (__u8 const *)tmp, i);
 			if (!i) {
 				ret = -EFAULT;
 				break;
 			}
-		} else
+		} else {
 			memcpy(buf, (__u8 const *)tmp, i);
+		}
 		nbytes -= i;
 		buf += i;
 		ret += i;
 	}
+	
+	/* generate a new key */
+	/* take into account the possibility that keysize >= blocksize */
+	for (i=0; i+r->blocksize<=r->keysize; i+=r->blocksize) {
+		sgtmp[0].page = virt_to_page( r->key+i );
+		sgtmp[0].offset = offset_in_page( r->key+i );
+		sgtmp[0].length = r->blocksize;
+		crypto_cipher_encrypt(r->cipher, sgtmp, sgiv, 1);
+		increment_iv(r->iv, r->blocksize);
+	}
+	sgtmp[0].page = virt_to_page( r->key+i );
+	sgtmp[0].offset = offset_in_page( r->key+i );
+	sgtmp[0].length = r->blocksize-i;
+	crypto_cipher_encrypt(r->cipher, sgtmp, sgiv, 1);
+	increment_iv(r->iv, r->blocksize);
+
+	if (crypto_cipher_setkey(r->cipher, r->key, r->keysize)) {
+		return -EINVAL;
+	}
 
 	/* Wipe data just returned from memory */
 	memset(tmp, 0, sizeof(tmp));
@@ -1482,10 +894,7 @@
  */
 void get_random_bytes(void *buf, int nbytes)
 {
-	if (sec_random_state)  
-		extract_entropy(sec_random_state, (char *) buf, nbytes, 
-				EXTRACT_ENTROPY_SECONDARY);
-	else if (random_state)
+	if (random_state)
 		extract_entropy(random_state, (char *) buf, nbytes, 0);
 	else
 		printk(KERN_NOTICE "get_random_bytes called before "
@@ -1500,57 +909,16 @@
  *
  *********************************************************************/
 
-/*
- * Initialize the random pool with standard stuff.
- *
- * NOTE: This is an OS-dependent function.
- */
-static void init_std_data(struct entropy_store *r)
-{
-	struct timeval 	tv;
-	__u32		words[2];
-	char 		*p;
-	int		i;
-
-	do_gettimeofday(&tv);
-	words[0] = tv.tv_sec;
-	words[1] = tv.tv_usec;
-	add_entropy_words(r, words, 2);
-
-	/*
-	 *	This doesn't lock system.utsname. However, we are generating
-	 *	entropy so a race with a name set here is fine.
-	 */
-	p = (char *) &system_utsname;
-	for (i = sizeof(system_utsname) / sizeof(words); i; i--) {
-		memcpy(words, p, sizeof(words));
-		add_entropy_words(r, words, sizeof(words)/4);
-		p += sizeof(words);
-	}
-}
-
 static int __init rand_initialize(void)
 {
-	int i;
-
-	if (create_entropy_store(DEFAULT_POOL_SIZE, &random_state))
-		goto err;
-	if (batch_entropy_init(BATCH_ENTROPY_SIZE, random_state))
-		goto err;
-	if (create_entropy_store(SECONDARY_POOL_SIZE, &sec_random_state))
+	if (create_entropy_store(DEFAULT_POOL_NUMBER, &random_state))
 		goto err;
-	clear_entropy_store(random_state);
-	clear_entropy_store(sec_random_state);
-	init_std_data(random_state);
+        if (batch_entropy_init(BATCH_ENTROPY_SIZE, random_state))
+                goto err;
+
 #ifdef CONFIG_SYSCTL
 	sysctl_init_random(random_state);
 #endif
-	for (i = 0; i < NR_IRQS; i++)
-		irq_timer_state[i] = NULL;
-	memset(&keyboard_timer_state, 0, sizeof(struct timer_rand_state));
-	memset(&mouse_timer_state, 0, sizeof(struct timer_rand_state));
-	memset(&extract_timer_state, 0, sizeof(struct timer_rand_state));
-	extract_timer_state.dont_count_entropy = 1;
 	return 0;
 err:
 	return -1;
@@ -1559,139 +927,33 @@
 
 void rand_initialize_irq(int irq)
 {
-	struct timer_rand_state *state;
-	
-	if (irq >= NR_IRQS || irq_timer_state[irq])
-		return;
-
-	/*
-	 * If kmalloc returns null, we just won't use that entropy
-	 * source.
-	 */
-	state = kmalloc(sizeof(struct timer_rand_state), GFP_KERNEL);
-	if (state) {
-		memset(state, 0, sizeof(struct timer_rand_state));
-		irq_timer_state[irq] = state;
-	}
+	/* we don't use timers anymore, we just use the current time */
 }
  
 void rand_initialize_disk(struct gendisk *disk)
 {
-	struct timer_rand_state *state;
-	
-	/*
-	 * If kmalloc returns null, we just won't use that entropy
-	 * source.
-	 */
-	state = kmalloc(sizeof(struct timer_rand_state), GFP_KERNEL);
-	if (state) {
-		memset(state, 0, sizeof(struct timer_rand_state));
-		disk->random = state;
-	}
-}
-
-static ssize_t
-random_read(struct file * file, char __user * buf, size_t nbytes, loff_t *ppos)
-{
-	DECLARE_WAITQUEUE(wait, current);
-	ssize_t			n, retval = 0, count = 0;
-	
-	if (nbytes == 0)
-		return 0;
-
-	while (nbytes > 0) {
-		n = nbytes;
-		if (n > SEC_XFER_SIZE)
-			n = SEC_XFER_SIZE;
-
-		DEBUG_ENT("%04d %04d : reading %d bits, p: %d s: %d\n",
-			  random_state->entropy_count,
-			  sec_random_state->entropy_count,
-			  n*8, random_state->entropy_count,
-			  sec_random_state->entropy_count);
-
-		n = extract_entropy(sec_random_state, buf, n,
-				    EXTRACT_ENTROPY_USER |
-				    EXTRACT_ENTROPY_LIMIT |
-				    EXTRACT_ENTROPY_SECONDARY);
-
-		DEBUG_ENT("%04d %04d : read got %d bits (%d still needed)\n",
-			  random_state->entropy_count,
-			  sec_random_state->entropy_count,
-			  n*8, (nbytes-n)*8);
-
-		if (n == 0) {
-			if (file->f_flags & O_NONBLOCK) {
-				retval = -EAGAIN;
-				break;
-			}
-			if (signal_pending(current)) {
-				retval = -ERESTARTSYS;
-				break;
-			}
-
-			DEBUG_ENT("%04d %04d : sleeping?\n",
-				  random_state->entropy_count,
-				  sec_random_state->entropy_count);
-
-			set_current_state(TASK_INTERRUPTIBLE);
-			add_wait_queue(&random_read_wait, &wait);
-
-			if (sec_random_state->entropy_count / 8 == 0)
-				schedule();
-
-			set_current_state(TASK_RUNNING);
-			remove_wait_queue(&random_read_wait, &wait);
-
-			DEBUG_ENT("%04d %04d : waking up\n",
-				  random_state->entropy_count,
-				  sec_random_state->entropy_count);
-
-			continue;
-		}
-
-		if (n < 0) {
-			retval = n;
-			break;
-		}
-		count += n;
-		buf += n;
-		nbytes -= n;
-		break;		/* This break makes the device work */
-				/* like a named pipe */
-	}
-
-	/*
-	 * If we gave the user some bytes, update the access time.
-	 */
-	if (count)
-		file_accessed(file);
-	
-	return (count ? count : retval);
+	/* we don't use timers anymore, we just use the current time */
 }
 
 static ssize_t
 urandom_read(struct file * file, char __user * buf,
 		      size_t nbytes, loff_t *ppos)
 {
-	return extract_entropy(sec_random_state, buf, nbytes,
+	return extract_entropy(random_state, buf, nbytes,
 			       EXTRACT_ENTROPY_USER |
 			       EXTRACT_ENTROPY_SECONDARY);
 }
 
+static ssize_t
+random_read(struct file * file, char __user * buf, size_t nbytes, loff_t *ppos)
+{
+	return urandom_read(file, buf, nbytes, ppos);
+}
+
 static unsigned int
 random_poll(struct file *file, poll_table * wait)
 {
-	unsigned int mask;
-
-	poll_wait(file, &random_read_wait, wait);
-	poll_wait(file, &random_write_wait, wait);
-	mask = 0;
-	if (random_state->entropy_count >= random_read_wakeup_thresh)
-		mask |= POLLIN | POLLRDNORM;
-	if (random_state->entropy_count < random_write_wakeup_thresh)
-		mask |= POLLOUT | POLLWRNORM;
-	return mask;
+	return POLLIN | POLLRDNORM  |  POLLOUT | POLLWRNORM;
 }
 
 static ssize_t
@@ -1701,12 +963,13 @@
 	int		ret = 0;
 	size_t		bytes;
 	__u32 		buf[16];
-	const char 	__user *p = buffer;
+	const char __user	*p = buffer;
 	size_t		c = count;
 
 	while (c > 0) {
 		bytes = min(c, sizeof(buf));
 
+DEBUG_PRINTK("random_write() %p, %p, %u\n", &buf, p, bytes);
 		bytes -= copy_from_user(&buf, p, bytes);
 		if (!bytes) {
 			ret = -EFAULT;
@@ -1730,67 +993,25 @@
 random_ioctl(struct inode * inode, struct file * file,
 	     unsigned int cmd, unsigned long arg)
 {
-	int *tmp, size, ent_count;
-	int __user *p = (int __user *)arg;
+	int size, ent_count;
+	int __user *p = (int __user *) arg;
 	int retval;
-	unsigned long flags;
 	
 	switch (cmd) {
 	case RNDGETENTCNT:
-		ent_count = random_state->entropy_count;
-		if (put_user(ent_count, p))
+		if (put_user(random_entropy_count, p))
 			return -EFAULT;
 		return 0;
 	case RNDADDTOENTCNT:
-		if (!capable(CAP_SYS_ADMIN))
-			return -EPERM;
-		if (get_user(ent_count, p))
-			return -EFAULT;
-		credit_entropy_store(random_state, ent_count);
-		/*
-		 * Wake up waiting processes if we have enough
-		 * entropy.
-		 */
-		if (random_state->entropy_count >= random_read_wakeup_thresh)
-			wake_up_interruptible(&random_read_wait);
+		/* entropy accounting removed. */
 		return 0;
 	case RNDGETPOOL:
-		if (!capable(CAP_SYS_ADMIN))
-			return -EPERM;
-		if (get_user(size, p) ||
-		    put_user(random_state->poolinfo.poolwords, p++))
-			return -EFAULT;
-		if (size < 0)
-			return -EFAULT;
-		if (size > random_state->poolinfo.poolwords)
-			size = random_state->poolinfo.poolwords;
-
-		/* prepare to atomically snapshot pool */
-
-		tmp = kmalloc(size * sizeof(__u32), GFP_KERNEL);
-
-		if (!tmp)
-			return -ENOMEM;
-
-		spin_lock_irqsave(&random_state->lock, flags);
-		ent_count = random_state->entropy_count;
-		memcpy(tmp, random_state->pool, size * sizeof(__u32));
-		spin_unlock_irqrestore(&random_state->lock, flags);
-
-		if (!copy_to_user(p, tmp, size * sizeof(__u32))) {
-			kfree(tmp);
-			return -EFAULT;
-		}
-
-		kfree(tmp);
-
-		if(put_user(ent_count, p++))
-			return -EFAULT;
-
+		/* jlcooke: never get the raw pool!!! */
 		return 0;
 	case RNDADDENTROPY:
 		if (!capable(CAP_SYS_ADMIN))
 			return -EPERM;
+		p = (int *) arg;
 		if (get_user(ent_count, p++))
 			return -EFAULT;
 		if (ent_count < 0)
@@ -1801,25 +1022,12 @@
 				      size, &file->f_pos);
 		if (retval < 0)
 			return retval;
-		credit_entropy_store(random_state, ent_count);
-		/*
-		 * Wake up waiting processes if we have enough
-		 * entropy.
-		 */
-		if (random_state->entropy_count >= random_read_wakeup_thresh)
-			wake_up_interruptible(&random_read_wait);
 		return 0;
 	case RNDZAPENTCNT:
-		if (!capable(CAP_SYS_ADMIN))
-			return -EPERM;
-		random_state->entropy_count = 0;
+		/* entropy accounting removed. */
 		return 0;
 	case RNDCLEARPOOL:
-		/* Clear the entropy pool and associated counters. */
-		if (!capable(CAP_SYS_ADMIN))
-			return -EPERM;
-		clear_entropy_store(random_state);
-		init_std_data(random_state);
+		/* jlcooke: this is maddness! Never clear the entropy pool */
 		return 0;
 	default:
 		return -EINVAL;
@@ -1875,71 +1083,111 @@
 static int min_write_thresh, max_write_thresh;
 static char sysctl_bootid[16];
 
-/*
- * This function handles a request from the user to change the pool size 
- * of the primary entropy store.
- */
-static int change_poolsize(int poolsize)
-{
-	struct entropy_store	*new_store, *old_store;
-	int			ret;
-	
-	if ((ret = create_entropy_store(poolsize, &new_store)))
-		return ret;
-
-	add_entropy_words(new_store, random_state->pool,
-			  random_state->poolinfo.poolwords);
-	credit_entropy_store(new_store, random_state->entropy_count);
-
-	sysctl_init_random(new_store);
-	old_store = random_state;
-	random_state = batch_work.data = new_store;
-	free_entropy_store(old_store);
-	return 0;
-}
-
 static int proc_do_poolsize(ctl_table *table, int write, struct file *filp,
 			    void __user *buffer, size_t *lenp, loff_t *ppos)
 {
-	int	ret;
+	int ret;
 
-	sysctl_poolsize = random_state->poolinfo.POOLBYTES;
+	if (write) {
+		/* you can't change the poolsize, but we'll let you think
+		 * you can for legacy reasons.
+		 */
+		return 0;
+	}
 
+	sysctl_poolsize = (1<<random_state->pool_number) *
+				random_state->pools[0]->__crt_alg->cra_ctxsize;
 	ret = proc_dointvec(table, write, filp, buffer, lenp, ppos);
-	if (ret || !write ||
-	    (sysctl_poolsize == random_state->poolinfo.POOLBYTES))
-		return ret;
 
-	return change_poolsize(sysctl_poolsize);
+	return ret;
 }
 
-static int poolsize_strategy(ctl_table *table, int __user *name, int nlen,
+static int poolsize_strategy(ctl_table *table, int *name, int nlen,
 			     void __user *oldval, size_t __user *oldlenp,
 			     void __user *newval, size_t newlen, void **context)
 {
-	int	len;
-	
-	sysctl_poolsize = random_state->poolinfo.POOLBYTES;
-
-	/*
-	 * We only handle the write case, since the read case gets
-	 * handled by the default handler (and we don't care if the
-	 * write case happens twice; it's harmless).
+	/* you can't set a poolsize strtegy because it doesn't change in
+	 * size anymore
 	 */
-	if (newval && newlen) {
-		len = newlen;
-		if (len > table->maxlen)
-			len = table->maxlen;
-		if (copy_from_user(table->data, newval, len))
-			return -EFAULT;
+	return 0;
+}
+
+static int proc_derive_seed(ctl_table *table, int write, struct file *filp,
+				void __user *buffer, size_t *lenp, loff_t *ppos)
+{
+	static unsigned char	*hextab = "0123456789abcdef";
+	static unsigned int	derive_count=0;
+	 /* hex length of derived seed */
+	static unsigned char	buf[(1<<MAXIMUM_POOL_NUMBER) *
+				RANDOM_MAX_DIGEST_SIZE *8/4];
+	unsigned long flags;
+	ctl_table       fake_table;
+	unsigned char   tmp[RANDOM_MAX_DIGEST_SIZE];
+	unsigned int	i,j;
+	struct scatterlist sg[3];
+	int ret;
+	void *p;
+
+DEBUG_PRINTK("proc_derive_seed() 0\n");
+
+	spin_lock_irqsave(&random_state->lock, flags);
+	random_state->pool0_len = 0;
+
+	memset(buf, 0, random_state->pool_number * 2*random_state->digestsize);
+
+	/* the carry-state from pool to pool */
+	memset(tmp, 0, random_state->digestsize);
+
+	for (i=0; i<(1<<random_state->pool_number); i++) {
+		crypto_digest_init(random_state->reseedHash);
+
+		/*
+		 * carry the digest from the previous output so a derive seed
+		 * from a lightly seeded state is indistinguishavble from a
+		 * heavily seeded one
+		 */
+		p = &tmp;
+		sg[0].page = virt_to_page(p);
+		sg[0].offset = offset_in_page(p);
+		sg[0].length = sizeof(tmp);
+
+		/* finalize and digest the i-th pool */
+		crypto_digest_final(random_state->pools[i], tmp);
+		crypto_digest_init(random_state->pools[i]);
+		p = &tmp;
+		sg[1].page = virt_to_page(p);
+		sg[1].offset = offset_in_page(p);
+		sg[1].length = sizeof(tmp);
+
+		/*
+		 * digest in a counter to ensure the final hash can change even if the
+		 * message does not
+		 */
+		p = &derive_count;
+		sg[2].page = virt_to_page(p);
+		sg[2].offset = offset_in_page(p);
+		sg[2].length = sizeof(derive_count);
+
+		crypto_digest_digest(random_state->reseedHash, sg, 3, tmp);
+		for (j=0; j<random_state->digestsize; j++) {
+			buf[2*(i*random_state->digestsize +j)  ] = hextab[ (tmp[j] >> 4) & 0xf ];
+			buf[2*(i*random_state->digestsize +j)+1] = hextab[ (tmp[j]     ) & 0xf ];
+		}
+		derive_count++;
 	}
 
-	if (sysctl_poolsize != random_state->poolinfo.POOLBYTES)
-		return change_poolsize(sysctl_poolsize);
+	spin_unlock_irqrestore(&random_state->lock, flags);
 
-	return 0;
+	fake_table.data = buf;
+	fake_table.maxlen = (1<<random_state->pool_number) *
+				2*random_state->digestsize;
+
+	ret = proc_dostring(&fake_table, write, filp, buffer, lenp, ppos);
+
+	return ret;
 }
 
+
 /*
  * These functions is used to return both the bootid UUID, and random
  * UUID.  The difference is in whether table->data is NULL; if it is,
@@ -1975,7 +1223,7 @@
 	return proc_dostring(&fake_table, write, filp, buffer, lenp, ppos);
 }
 
-static int uuid_strategy(ctl_table *table, int __user *name, int nlen,
+static int uuid_strategy(ctl_table *table, int *name, int nlen,
 			 void __user *oldval, size_t __user *oldlenp,
 			 void __user *newval, size_t newlen, void **context)
 {
@@ -2011,38 +1259,41 @@
 		.procname	= "poolsize",
 		.data		= &sysctl_poolsize,
 		.maxlen		= sizeof(int),
+		/* you can't change the poolsize, but we'll let you think you
+		 * can for legacy reasons.
+		 */
 		.mode		= 0644,
 		.proc_handler	= &proc_do_poolsize,
 		.strategy	= &poolsize_strategy,
 	},
 	{
-		.ctl_name	= RANDOM_ENTROPY_COUNT,
-		.procname	= "entropy_avail",
-		.maxlen		= sizeof(int),
-		.mode		= 0444,
-		.proc_handler	= &proc_dointvec,
-	},
+		.ctl_name       = RANDOM_ENTROPY_COUNT,
+		.procname       = "entropy_avail",
+		.maxlen         = sizeof(int),
+		.mode           = 0444,
+		.proc_handler   = &proc_dointvec,
+        },
 	{
-		.ctl_name	= RANDOM_READ_THRESH,
-		.procname	= "read_wakeup_threshold",
-		.data		= &random_read_wakeup_thresh,
-		.maxlen		= sizeof(int),
-		.mode		= 0644,
-		.proc_handler	= &proc_dointvec_minmax,
-		.strategy	= &sysctl_intvec,
-		.extra1		= &min_read_thresh,
-		.extra2		= &max_read_thresh,
+		.ctl_name       = RANDOM_READ_THRESH,
+		.procname       = "read_wakeup_threshold",
+		.data           = &random_read_wakeup_thresh,
+		.maxlen         = sizeof(int),
+		.mode           = 0644,
+		.proc_handler   = &proc_dointvec_minmax,
+		.strategy       = &sysctl_intvec,
+		.extra1         = &min_read_thresh,
+		.extra2         = &max_read_thresh,
 	},
 	{
-		.ctl_name	= RANDOM_WRITE_THRESH,
-		.procname	= "write_wakeup_threshold",
-		.data		= &random_write_wakeup_thresh,
-		.maxlen		= sizeof(int),
-		.mode		= 0644,
-		.proc_handler	= &proc_dointvec_minmax,
-		.strategy	= &sysctl_intvec,
-		.extra1		= &min_write_thresh,
-		.extra2		= &max_write_thresh,
+		.ctl_name       = RANDOM_WRITE_THRESH,
+		.procname       = "write_wakeup_threshold",
+		.data           = &random_write_wakeup_thresh,
+		.maxlen         = sizeof(int),
+		.mode           = 0644,
+		.proc_handler   = &proc_dointvec_minmax,
+		.strategy       = &sysctl_intvec,
+		.extra1         = &min_write_thresh,
+		.extra2         = &max_write_thresh,
 	},
 	{
 		.ctl_name	= RANDOM_BOOT_ID,
@@ -2061,15 +1312,25 @@
 		.proc_handler	= &proc_do_uuid,
 		.strategy	= &uuid_strategy,
 	},
+	{
+		.ctl_name	= RANDOM_DERIVE_SEED,
+		.procname	= "derive_seed",
+		.maxlen		= MAXIMUM_POOL_NUMBER * RANDOM_MAX_DIGEST_SIZE,
+		.mode		= 0400,
+		.proc_handler	= &proc_derive_seed,
+	},
 	{ .ctl_name = 0 }
 };
 
-static void sysctl_init_random(struct entropy_store *random_state)
+static void sysctl_init_random(struct entropy_store *r)
 {
 	min_read_thresh = 8;
 	min_write_thresh = 0;
-	max_read_thresh = max_write_thresh = random_state->poolinfo.POOLBITS;
-	random_table[1].data = &random_state->entropy_count;
+	random_entropy_count =
+	max_read_thresh =
+	max_write_thresh = (1<<r->pool_number) *
+				r->pools[0]->__crt_alg->cra_ctxsize;
+	random_table[1].data = &random_entropy_count;
 }
 #endif 	/* CONFIG_SYSCTL */
 
@@ -2081,135 +1342,23 @@
 
 /*
  * TCP initial sequence number picking.  This uses the random number
- * generator to pick an initial secret value.  This value is hashed
- * along with the TCP endpoint information to provide a unique
- * starting point for each pair of TCP endpoints.  This defeats
- * attacks which rely on guessing the initial TCP sequence number.
- * This algorithm was suggested by Steve Bellovin.
+ * generator to pick an initial secret value.  This value is encrypted
+ * with the TCP endpoint information to provide a unique starting point
+ * for each pair of TCP endpoints.  This defeats attacks which rely on
+ * guessing the initial TCP sequence number.
  *
  * Using a very strong hash was taking an appreciable amount of the total
- * TCP connection establishment time, so this is a weaker hash,
- * compensated for by changing the secret periodically.
+ * TCP connection establishment time, so this now uses AES256
+ * 
+ * openssl spped md4 aes shows aes256 is 2.5 times faster then basic md4 for
+ * the block sizes we're dealing with.
+ * type          16 bytes   64 bytes  256 bytes  1024 bytes  8192 bytes
+ * md4           10708.72k  38240.96k 111170.47k  215872.85k  296828.93k
+ * aes-128 cbc   32121.81k  32678.31k  33119.49k   33221.29k   33210.59k
+ * aes-192 cbc   27915.92k  27868.52k  28418.08k   28677.12k   28721.15k
+ * aes-256 cbc   24599.57k  25142.38k  25381.80k   25474.88k   25392.46k
  */
 
-/* F, G and H are basic MD4 functions: selection, majority, parity */
-#define F(x, y, z) ((z) ^ ((x) & ((y) ^ (z))))
-#define G(x, y, z) (((x) & (y)) + (((x) ^ (y)) & (z)))
-#define H(x, y, z) ((x) ^ (y) ^ (z))
-
-/*
- * The generic round function.  The application is so specific that
- * we don't bother protecting all the arguments with parens, as is generally
- * good macro practice, in favor of extra legibility.
- * Rotation is separate from addition to prevent recomputation
- */
-#define ROUND(f, a, b, c, d, x, s)	\
-	(a += f(b, c, d) + x, a = (a << s) | (a >> (32-s)))
-#define K1 0
-#define K2 013240474631UL
-#define K3 015666365641UL
-
-/*
- * Basic cut-down MD4 transform.  Returns only 32 bits of result.
- */
-static __u32 halfMD4Transform (__u32 const buf[4], __u32 const in[8])
-{
-	__u32	a = buf[0], b = buf[1], c = buf[2], d = buf[3];
-
-	/* Round 1 */
-	ROUND(F, a, b, c, d, in[0] + K1,  3);
-	ROUND(F, d, a, b, c, in[1] + K1,  7);
-	ROUND(F, c, d, a, b, in[2] + K1, 11);
-	ROUND(F, b, c, d, a, in[3] + K1, 19);
-	ROUND(F, a, b, c, d, in[4] + K1,  3);
-	ROUND(F, d, a, b, c, in[5] + K1,  7);
-	ROUND(F, c, d, a, b, in[6] + K1, 11);
-	ROUND(F, b, c, d, a, in[7] + K1, 19);
-
-	/* Round 2 */
-	ROUND(G, a, b, c, d, in[1] + K2,  3);
-	ROUND(G, d, a, b, c, in[3] + K2,  5);
-	ROUND(G, c, d, a, b, in[5] + K2,  9);
-	ROUND(G, b, c, d, a, in[7] + K2, 13);
-	ROUND(G, a, b, c, d, in[0] + K2,  3);
-	ROUND(G, d, a, b, c, in[2] + K2,  5);
-	ROUND(G, c, d, a, b, in[4] + K2,  9);
-	ROUND(G, b, c, d, a, in[6] + K2, 13);
-
-	/* Round 3 */
-	ROUND(H, a, b, c, d, in[3] + K3,  3);
-	ROUND(H, d, a, b, c, in[7] + K3,  9);
-	ROUND(H, c, d, a, b, in[2] + K3, 11);
-	ROUND(H, b, c, d, a, in[6] + K3, 15);
-	ROUND(H, a, b, c, d, in[1] + K3,  3);
-	ROUND(H, d, a, b, c, in[5] + K3,  9);
-	ROUND(H, c, d, a, b, in[0] + K3, 11);
-	ROUND(H, b, c, d, a, in[4] + K3, 15);
-
-	return buf[1] + b;	/* "most hashed" word */
-	/* Alternative: return sum of all words? */
-}
-
-#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
-
-static __u32 twothirdsMD4Transform (__u32 const buf[4], __u32 const in[12])
-{
-	__u32	a = buf[0], b = buf[1], c = buf[2], d = buf[3];
-
-	/* Round 1 */
-	ROUND(F, a, b, c, d, in[ 0] + K1,  3);
-	ROUND(F, d, a, b, c, in[ 1] + K1,  7);
-	ROUND(F, c, d, a, b, in[ 2] + K1, 11);
-	ROUND(F, b, c, d, a, in[ 3] + K1, 19);
-	ROUND(F, a, b, c, d, in[ 4] + K1,  3);
-	ROUND(F, d, a, b, c, in[ 5] + K1,  7);
-	ROUND(F, c, d, a, b, in[ 6] + K1, 11);
-	ROUND(F, b, c, d, a, in[ 7] + K1, 19);
-	ROUND(F, a, b, c, d, in[ 8] + K1,  3);
-	ROUND(F, d, a, b, c, in[ 9] + K1,  7);
-	ROUND(F, c, d, a, b, in[10] + K1, 11);
-	ROUND(F, b, c, d, a, in[11] + K1, 19);
-
-	/* Round 2 */
-	ROUND(G, a, b, c, d, in[ 1] + K2,  3);
-	ROUND(G, d, a, b, c, in[ 3] + K2,  5);
-	ROUND(G, c, d, a, b, in[ 5] + K2,  9);
-	ROUND(G, b, c, d, a, in[ 7] + K2, 13);
-	ROUND(G, a, b, c, d, in[ 9] + K2,  3);
-	ROUND(G, d, a, b, c, in[11] + K2,  5);
-	ROUND(G, c, d, a, b, in[ 0] + K2,  9);
-	ROUND(G, b, c, d, a, in[ 2] + K2, 13);
-	ROUND(G, a, b, c, d, in[ 4] + K2,  3);
-	ROUND(G, d, a, b, c, in[ 6] + K2,  5);
-	ROUND(G, c, d, a, b, in[ 8] + K2,  9);
-	ROUND(G, b, c, d, a, in[10] + K2, 13);
-
-	/* Round 3 */
-	ROUND(H, a, b, c, d, in[ 3] + K3,  3);
-	ROUND(H, d, a, b, c, in[ 7] + K3,  9);
-	ROUND(H, c, d, a, b, in[11] + K3, 11);
-	ROUND(H, b, c, d, a, in[ 2] + K3, 15);
-	ROUND(H, a, b, c, d, in[ 6] + K3,  3);
-	ROUND(H, d, a, b, c, in[10] + K3,  9);
-	ROUND(H, c, d, a, b, in[ 1] + K3, 11);
-	ROUND(H, b, c, d, a, in[ 5] + K3, 15);
-	ROUND(H, a, b, c, d, in[ 9] + K3,  3);
-	ROUND(H, d, a, b, c, in[ 0] + K3,  9);
-	ROUND(H, c, d, a, b, in[ 4] + K3, 11);
-	ROUND(H, b, c, d, a, in[ 8] + K3, 15);
-
-	return buf[1] + b;	/* "most hashed" word */
-	/* Alternative: return sum of all words? */
-}
-#endif
-
-#undef ROUND
-#undef F
-#undef G
-#undef H
-#undef K1
-#undef K2
-#undef K3
 
 /* This should not be decreased so low that ISNs wrap too fast. */
 #define REKEY_INTERVAL	300
@@ -2237,79 +1386,70 @@
 #define HASH_BITS	24
 #define HASH_MASK	( (1<<HASH_BITS)-1 )
 
-static struct keydata {
-	time_t rekey_time;
-	__u32	count;		// already shifted to the final position
-	__u32	secret[12];
-} ____cacheline_aligned ip_keydata[2];
-
 static spinlock_t ip_lock = SPIN_LOCK_UNLOCKED;
-static unsigned int ip_cnt;
 
-static struct keydata *__check_and_rekey(time_t time)
+static __u32 network_random_read32(void)
 {
-	struct keydata *keyptr;
+	static u8			ctr[16];    /* max block size? */
+	static struct scatterlist	sgctr[1];
+	static unsigned int		master_count=0;
+	static time_t			lastRekey=0;
+
+	struct scatterlist sgtmp[1];
+	unsigned int	count;
+	unsigned char	tmp[16];
+	struct timeval	tv;
+
+        rmb();
 	spin_lock_bh(&ip_lock);
-	keyptr = &ip_keydata[ip_cnt&1];
-	if (!keyptr->rekey_time || (time - keyptr->rekey_time) > REKEY_INTERVAL) {
-		keyptr = &ip_keydata[1^(ip_cnt&1)];
-		keyptr->rekey_time = time;
-		get_random_bytes(keyptr->secret, sizeof(keyptr->secret));
-		keyptr->count = (ip_cnt&COUNT_MASK)<<HASH_BITS;
+
+	count = ++master_count;
+	increment_iv(ctr, random_state->blocksize);
+
+	do_gettimeofday(&tv);
+	if (lastRekey==0  || (tv.tv_sec - lastRekey) < REKEY_INTERVAL) {
+		lastRekey = tv.tv_sec;
+
+		sgctr[0].page = virt_to_page(ctr);
+		sgctr[0].offset = offset_in_page(ctr);
+		sgctr[0].length = 16;
+
+		if (!random_state->networkCipher_ready) {
+			u8 secret[32]; /* max key size? */
+			get_random_bytes(secret, random_state->keysize);
+			crypto_cipher_setkey(random_state->networkCipher,
+						(const u8*)secret,
+						random_state->keysize);
+			random_state->networkCipher_ready = 1;
+		}
+
 		mb();
-		ip_cnt++;
-	}
-	spin_unlock_bh(&ip_lock);
-	return keyptr;
-}
+        }
 
-static inline struct keydata *check_and_rekey(time_t time)
-{
-	struct keydata *keyptr = &ip_keydata[ip_cnt&1];
+        spin_unlock_bh(&ip_lock);
 
-	rmb();
-	if (!keyptr->rekey_time || (time - keyptr->rekey_time) > REKEY_INTERVAL) {
-		keyptr = __check_and_rekey(time);
-	}
+	sgtmp[0].page = virt_to_page(tmp);
+	sgtmp[0].offset = offset_in_page(tmp);
+	sgtmp[0].length = random_state->blocksize;
+	/* tmp[]/sg[0] = Enc(Sec, CTR++) */
+	crypto_cipher_encrypt(random_state->networkCipher, sgtmp, sgctr, 1);
+	increment_iv(ctr, random_state->blocksize);
 
-	return keyptr;
+	/* seq# needs to be random-ish, but incresing */
+	return (tmp[0] & COUNT_MASK) + (count << (32-COUNT_BITS));
 }
 
 #if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
 __u32 secure_tcpv6_sequence_number(__u32 *saddr, __u32 *daddr,
 				   __u16 sport, __u16 dport)
 {
-	struct timeval 	tv;
-	__u32		seq;
-	__u32		hash[12];
-	struct keydata *keyptr;
-
-	/* The procedure is the same as for IPv4, but addresses are longer.
-	 * Thus we must use twothirdsMD4Transform.
-	 */
-
-	do_gettimeofday(&tv);	/* We need the usecs below... */
-	keyptr = check_and_rekey(tv.tv_sec);
-
-	memcpy(hash, saddr, 16);
-	hash[4]=(sport << 16) + dport;
-	memcpy(&hash[5],keyptr->secret,sizeof(__u32)*7);
-
-	seq = twothirdsMD4Transform(daddr, hash) & HASH_MASK;
-	seq += keyptr->count;
-	seq += tv.tv_usec + tv.tv_sec*1000000;
-
-	return seq;
+	return network_random_read32();
 }
 EXPORT_SYMBOL(secure_tcpv6_sequence_number);
 
 __u32 secure_ipv6_id(__u32 *daddr)
 {
-	struct keydata *keyptr;
-
-	keyptr = check_and_rekey(get_seconds());
-
-	return halfMD4Transform(daddr, keyptr->secret);
+	return network_random_read32();
 }
 
 EXPORT_SYMBOL(secure_ipv6_id);
@@ -2319,75 +1459,20 @@
 __u32 secure_tcp_sequence_number(__u32 saddr, __u32 daddr,
 				 __u16 sport, __u16 dport)
 {
-	struct timeval 	tv;
-	__u32		seq;
-	__u32	hash[4];
-	struct keydata *keyptr;
-
-	/*
-	 * Pick a random secret every REKEY_INTERVAL seconds.
-	 */
-	do_gettimeofday(&tv);	/* We need the usecs below... */
-	keyptr = check_and_rekey(tv.tv_sec);
-
-	/*
-	 *  Pick a unique starting offset for each TCP connection endpoints
-	 *  (saddr, daddr, sport, dport).
-	 *  Note that the words are placed into the starting vector, which is 
-	 *  then mixed with a partial MD4 over random data.
-	 */
-	hash[0]=saddr;
-	hash[1]=daddr;
-	hash[2]=(sport << 16) + dport;
-	hash[3]=keyptr->secret[11];
-
-	seq = halfMD4Transform(hash, keyptr->secret) & HASH_MASK;
-	seq += keyptr->count;
-	/*
-	 *	As close as possible to RFC 793, which
-	 *	suggests using a 250 kHz clock.
-	 *	Further reading shows this assumes 2 Mb/s networks.
-	 *	For 10 Mb/s Ethernet, a 1 MHz clock is appropriate.
-	 *	That's funny, Linux has one built in!  Use it!
-	 *	(Networks are faster now - should this be increased?)
-	 */
-	seq += tv.tv_usec + tv.tv_sec*1000000;
-#if 0
-	printk("init_seq(%lx, %lx, %d, %d) = %d\n",
-	       saddr, daddr, sport, dport, seq);
-#endif
-	return seq;
+	return network_random_read32();
 }
 
 EXPORT_SYMBOL(secure_tcp_sequence_number);
 
-/*  The code below is shamelessly stolen from secure_tcp_sequence_number().
- *  All blames to Andrey V. Savochkin <saw@msu.ru>.
- */
 __u32 secure_ip_id(__u32 daddr)
 {
-	struct keydata *keyptr;
-	__u32 hash[4];
-
-	keyptr = check_and_rekey(get_seconds());
-
-	/*
-	 *  Pick a unique starting offset for each IP destination.
-	 *  The dest ip address is placed in the starting vector,
-	 *  which is then hashed with random data.
-	 */
-	hash[0] = daddr;
-	hash[1] = keyptr->secret[9];
-	hash[2] = keyptr->secret[10];
-	hash[3] = keyptr->secret[11];
-
-	return halfMD4Transform(hash, keyptr->secret);
+	return network_random_read32();
 }
 
 #ifdef CONFIG_SYN_COOKIES
 /*
  * Secure SYN cookie computation. This is the algorithm worked out by
- * Dan Bernstein and Eric Schenk.
+ * Jean-Luc Cooke
  *
  * For linux I implement the 1 minute counter by looking at the jiffies clock.
  * The count is passed in as a parameter, so this code doesn't much care.
@@ -2396,50 +1481,54 @@
 #define COOKIEBITS 24	/* Upper bits store count */
 #define COOKIEMASK (((__u32)1 << COOKIEBITS) - 1)
 
-static int	syncookie_init;
-static __u32	syncookie_secret[2][16-3+HASH_BUFFER_SIZE];
-
 __u32 secure_tcp_syn_cookie(__u32 saddr, __u32 daddr, __u16 sport,
 		__u16 dport, __u32 sseq, __u32 count, __u32 data)
 {
-	__u32 	tmp[16 + HASH_BUFFER_SIZE + HASH_EXTRA_SIZE];
-	__u32	seq;
-
-	/*
-	 * Pick two random secrets the first time we need a cookie.
-	 */
-	if (syncookie_init == 0) {
-		get_random_bytes(syncookie_secret, sizeof(syncookie_secret));
-		syncookie_init = 1;
-	}
+	struct scatterlist sg[1];
+	__u32	tmp[4];
 
 	/*
 	 * Compute the secure sequence number.
-	 * The output should be:
-   	 *   HASH(sec1,saddr,sport,daddr,dport,sec1) + sseq + (count * 2^24)
-	 *      + (HASH(sec2,saddr,sport,daddr,dport,count,sec2) % 2^24).
-	 * Where sseq is their sequence number and count increases every
-	 * minute by 1.
-	 * As an extra hack, we add a small "data" value that encodes the
-	 * MSS into the second hash value.
+	 * 
+	 * Output is the 32bit tag of a CBC-MAC of PT={count,0,0,0} with IV={addr,daddr,sport|dport,sseq}
+	 *   cookie = <8bit count> || truncate_24bit( Encrypt(Sec, {saddr,daddr,sport|dport,sseq}) )
+	 * 
+	 * DJB wrote (http://cr.yp.to/syncookies/archive) about how to do this with hash algorithms
+	 * - we can replace two SHA1s used in the previous kernel with two AESs and make things 3x faster
+	 * - I'd like to propose we remove the use of two whittenings with a single operation since we
+	 *   were only using addition modulo 2^32 of all these values anyways.  Not to mention the hashs
+	 *   differ only in that the second processes more data... why drop the first hash?  We did learn
+	 *   that addition is commutative and associative long ago.
+	 * - by replacing two SHA1s and addition modulo 2^32 with encryption of a 32bit value using AES-CTR
+	 *   we've made it 1,000,000,000 times easier to understand what is going on.
+	 * - Todo: we should rekey the cipher peridoically... if we do this, some packets will now fail 
+	 *   our checking system... is this ok?  How can we get around this?  Rekey's would ideally happen
+	 *   once per minute (6 million TCP connections per minute is a unrealistic enough security margin)
 	 */
 
-	memcpy(tmp+3, syncookie_secret[0], sizeof(syncookie_secret[0]));
-	tmp[0]=saddr;
-	tmp[1]=daddr;
-	tmp[2]=(sport << 16) + dport;
-	HASH_TRANSFORM(tmp+16, tmp);
-	seq = tmp[17] + sseq + (count << COOKIEBITS);
-
-	memcpy(tmp+3, syncookie_secret[1], sizeof(syncookie_secret[1]));
-	tmp[0]=saddr;
-	tmp[1]=daddr;
-	tmp[2]=(sport << 16) + dport;
-	tmp[3] = count;	/* minute counter */
-	HASH_TRANSFORM(tmp+16, tmp);
+	tmp[0] = saddr;
+	tmp[1] = daddr;
+	tmp[2] = (sport << 16) + dport;
+	tmp[3] = sseq;
+
+	sg[0].page = virt_to_page(tmp);
+	sg[0].offset = offset_in_page(tmp);
+	sg[0].length = 16;
+	if (!random_state->networkCipher_ready) {
+		u8 secret[32];
+		get_random_bytes(secret, sizeof(secret));
+		if (crypto_cipher_setkey(random_state->networkCipher,
+					secret, random_state->keysize)) {
+			return 0;
+		}
+		random_state->networkCipher_ready = 1;
+	}
+	/* tmp[]/sg[0] = Enc(Sec, {saddr,daddr,sport|dport,sseq}) */
+	crypto_cipher_encrypt(random_state->networkCipher, sg, sg, 1);
 
-	/* Add in the second hash and the data */
-	return seq + ((tmp[17] + data) & COOKIEMASK);
+	/* cookie = CTR encrypt of 8-bit-count and 24-bit-data */
+	return tmp[0] ^ ( (count << COOKIEBITS) |
+			(data >> (sizeof(__u32)*8-COOKIEBITS)) );
 }
 
 /*
@@ -2454,32 +1543,32 @@
 __u32 check_tcp_syn_cookie(__u32 cookie, __u32 saddr, __u32 daddr, __u16 sport,
 		__u16 dport, __u32 sseq, __u32 count, __u32 maxdiff)
 {
-	__u32 	tmp[16 + HASH_BUFFER_SIZE + HASH_EXTRA_SIZE];
-	__u32	diff;
+	struct scatterlist sg[1];
+	__u32 tmp[4], thiscount, diff;
 
-	if (syncookie_init == 0)
+	if (random_state == NULL  ||  !random_state->networkCipher_ready)
 		return (__u32)-1;	/* Well, duh! */
 
-	/* Strip away the layers from the cookie */
-	memcpy(tmp+3, syncookie_secret[0], sizeof(syncookie_secret[0]));
-	tmp[0]=saddr;
-	tmp[1]=daddr;
-	tmp[2]=(sport << 16) + dport;
-	HASH_TRANSFORM(tmp+16, tmp);
-	cookie -= tmp[17] + sseq;
-	/* Cookie is now reduced to (count * 2^24) ^ (hash % 2^24) */
-
-	diff = (count - (cookie >> COOKIEBITS)) & ((__u32)-1 >> COOKIEBITS);
-	if (diff >= maxdiff)
-		return (__u32)-1;
-
-	memcpy(tmp+3, syncookie_secret[1], sizeof(syncookie_secret[1]));
 	tmp[0] = saddr;
 	tmp[1] = daddr;
 	tmp[2] = (sport << 16) + dport;
-	tmp[3] = count - diff;	/* minute counter */
-	HASH_TRANSFORM(tmp+16, tmp);
+	tmp[3] = sseq;
+	sg[0].page = virt_to_page(tmp);
+	sg[0].offset = offset_in_page(tmp);
+	sg[0].length = 16;
+	crypto_cipher_encrypt(random_state->networkCipher, sg, sg, 1);
+
+	/* CTR decrypt the cookie */
+	cookie ^= tmp[0]; 
+
+	/* top 8 bits are 'count' */
+	thiscount = cookie >> COOKIEBITS; 
+
+	diff = count - thiscount;
+	if (diff >= maxdiff)
+		return (__u32)-1;
 
-	return (cookie - tmp[17]) & COOKIEMASK;	/* Leaving the data behind */
+	/* bottom 24 bits are 'data' */
+	return cookie >> (sizeof(__u32)*8-COOKIEBITS);
 }
 #endif

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-24 12:54   ` Jean-Luc Cooke
@ 2004-09-24 17:43     ` Theodore Ts'o
  2004-09-24 17:59       ` Jean-Luc Cooke
  2004-09-24 18:43       ` James Morris
  0 siblings, 2 replies; 35+ messages in thread
From: Theodore Ts'o @ 2004-09-24 17:43 UTC (permalink / raw)
  To: Jean-Luc Cooke; +Cc: linux-kernel

On Fri, Sep 24, 2004 at 08:54:57AM -0400, Jean-Luc Cooke wrote:
> On Fri, Sep 24, 2004 at 12:38:51AM -0400, Theodore Ts'o wrote:
> > 2.  The kernel will break if CONFIG_CRYPTO is false
> > matter what.  This was a design decision that was made long ago, to
> > simplify user space applications that could count on /dev/random ...
> 
> My naive point of view tells me either this design decision from days of
> yore was not thought out properly (blasphemy!), or the cryptoapi needs to
> be in kernel.

There is some historical issues here --- namely, back in the early
1990's crypto still had significant export control issues, so we
didn't want to put any crypto code into the core kernel.  So we didn't
have *any* encryption algorithms in the kernel at all.  As to whether
or not cryptoapi needs to be mandatory in the kernel, the question is
aside from /dev/random, do most people need to have crypto in the
kernel?  If they're not using ipsec, or crypto loop devices, etc.,
they might not want to have the crypto api in their kernel
unconditionally.

That aside, it's been demonstrated through a lot of experience, to the
point of it being a principle of software engineering, that optional
interfaces significantly complicate the users of that interface.  In
order to encourage applications to use /dev/random, we wanted to make
it something that people could guarantee would be there.  Random
numbers are important!

> A compromise would be to have a primitive PRNG in random.c is no
> CONFIG_CRYRPTO is present to keep things working.

Now *that*'s an extremely ill-considered idea.  It means that an
application can without any warning, can have its strong source of
random numbers replaced with a weak random number generator.  It
should be blatently obvious why this is a specatularily bad, horrific
idea.

>  - why do linux users want information secure random numbers?  Wouldn't
>    crypto-secure random numbers be what they really want?

If they only want crypto-secure random numbers, they can do it in
userspace.  Information secure random numbers is something the kernel
can provide, because it has low-level access to entrpoy sources.  So
why not try to do the best possible job? 

This by the way your complaint that /dev/random is "too slow" is a
complete red herring.  When do you need more than 6 megs of random
numbers per second?  And if the application just needs crypto-secure
random numbers, then the application can just extract 32 bytes or so
of randomness from /dev/random, and then do the CRNG in userspace, at
which point it will be even faster, since the data won't have to
copied from kernel to userspace.

> The design used by PGP and
> > /dev/random both limit the amount of reliance placed in the crypto
> > algorithms, where as Fortuna and Yarrow both assume that crypto
> > primitives are 100% strong.  This is again a philosophical divide;
> > given that we have access to unpredicitability based on hardware
> > timings, we should limit the dependence on crypto algorithsm and to
> > design a system that is closer to "true randomness" as possible.  
> 
> What if I told the SHA-1 implementation in random.c right now is weaker
> than those hashs in terms of collisions?  The lack of padding in the
> implementation is the cause.  HASH("a\0\0\0\0...") == HASH("a") There
> are billions of other examples.

This is another red herring.  First of all, we're not using the hash
as a MAC, or in any way where we would care about collisions.
Secondly, all of the places where we take a hash, we are always doing
it 16 bytes at a time, which is SHA's block size, so that there's no
need for any padding.  And although you didn't complain about it,
that's also why we don't need to mix in the length in the padding;
extension attacks just simply aren't an issue, since the way we are
using the hash, that just simply an issue as far as the strength of
/dev/random.


> Vanilla random.c depends on SHA-1 be to be resistant to 1-st pre-image
> attacks.  Fortuna depends on this as well with SHA-256 (or whatever
> other hash you put in there).  

Incorrect, vanilla random.c does *not* depend on SHA-1's resistance to
1st pre-image attacks.  In other words, even if you did have an oracle
which given a SHA-1 hash will give you a string which hashes to that
value, /dev/random's security properties would not be affected.  Just
because you have *a* string which hashes to that value, that won't
help you find the contents of the pool.

That's my whole point.  We have not changed SHA-1 to make it stronger;
we simply have carefully designed /dev/random to minimize its reliance
on crypto primitives, since we have so much entropy available to us
from the hardware.  Fortuna, in contrast, has the property that if its
cryptoprimitives are broken, you might as well go home.

						- Ted

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-24 17:43     ` Theodore Ts'o
@ 2004-09-24 17:59       ` Jean-Luc Cooke
  2004-09-24 20:44         ` Scott Robert Ladd
  2004-09-24 21:34         ` Theodore Ts'o
  2004-09-24 18:43       ` James Morris
  1 sibling, 2 replies; 35+ messages in thread
From: Jean-Luc Cooke @ 2004-09-24 17:59 UTC (permalink / raw)
  To: Theodore Ts'o, linux-kernel

If I submitted a patch that gave users the choice of swapping my Fortuna for
the current /dev/random, would you be cool with that then?

Our discussions on the matter always seem to move to areas where we can never
agree.

On Fri, Sep 24, 2004 at 01:43:01PM -0400, Theodore Ts'o wrote:
> > A compromise would be to have a primitive PRNG in random.c is no
> > CONFIG_CRYRPTO is present to keep things working.
> 
> Now *that*'s an extremely ill-considered idea.  It means that an
> application can without any warning, can have its strong source of
> random numbers replaced with a weak random number generator.  It
> should be blatently obvious why this is a specatularily bad, horrific
> idea.

Easy fix - use CryptoAPI in our PRNGs and make it standard in the kernel.  :)

You can see we're going in circles.

> If they only want crypto-secure random numbers, they can do it in
> userspace.  Information secure random numbers is something the kernel
> can provide, because it has low-level access to entrpoy sources.  So
> why not try to do the best possible job? 

Sure.  I hate Brittney Spears, but I will not deny people the choice.

> This by the way your complaint that /dev/random is "too slow" is a
> complete red herring.  When do you need more than 6 megs of random
> numbers per second?  And if the application just needs crypto-secure
> random numbers, then the application can just extract 32 bytes or so
> of randomness from /dev/random, and then do the CRNG in userspace, at
> which point it will be even faster, since the data won't have to
> copied from kernel to userspace.

I never complained that it was too slow.  I've just noticed that when ever a
patch is submitted there are only 3 reasons to accept it:
 - does it do something we havn't done before?
 - does it do something faster / smaller?
 - is it in someway better then what's there now?

I did my best to alliviate #2.  #3 I've decided I'll never be able to
convince enough people for an all-out replacement.  I'd be happy with a
configuration choice.

> > What if I told the SHA-1 implementation in random.c right now is weaker
> > than those hashs in terms of collisions?  The lack of padding in the
> > implementation is the cause.  HASH("a\0\0\0\0...") == HASH("a") There
> > are billions of other examples.
> 
> This is another red herring.  First of all, we're not using the hash
> as a MAC, or in any way where we would care about collisions.
> Secondly, all of the places where we take a hash, we are always doing
> it 16 bytes at a time, which is SHA's block size, so that there's no
> need for any padding.  And although you didn't complain about it,
> that's also why we don't need to mix in the length in the padding;
> extension attacks just simply aren't an issue, since the way we are
> using the hash, that just simply an issue as far as the strength of
> /dev/random.

Woh there.  Didn't you just say "see, these hashes are weakened.  That's
bad".  Now I just demonstrated the same thing with your SHA1 implementation
and you throw that "red-herring" phrase out again?

Point of history when breaking a hash:
 - first a method for collisions is found
 - then comes 2nd pre-image
 - then comes complete inversion

MD4 case and point.  Any how.  I've given up trying to sell a replacement.
Can users have an option to switch?

JLC

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-24 17:43     ` Theodore Ts'o
  2004-09-24 17:59       ` Jean-Luc Cooke
@ 2004-09-24 18:43       ` James Morris
  2004-09-24 19:09         ` Matt Mackall
  2004-09-24 20:03         ` Lee Revell
  1 sibling, 2 replies; 35+ messages in thread
From: James Morris @ 2004-09-24 18:43 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: Jean-Luc Cooke, linux-kernel, mpm

On Fri, 24 Sep 2004, Theodore Ts'o wrote:

> have *any* encryption algorithms in the kernel at all.  As to whether
> or not cryptoapi needs to be mandatory in the kernel, the question is
> aside from /dev/random, do most people need to have crypto in the
> kernel?  If they're not using ipsec, or crypto loop devices, etc.,
> they might not want to have the crypto api in their kernel
> unconditionally.

As far as I know embedded folk do not want the crypto API to be mandatory,
although I think Matt Mackall wanted to try and make something work
(perhaps a subset just for /dev/random use).


- James
-- 
James Morris
<jmorris@redhat.com>




^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-24 18:43       ` James Morris
@ 2004-09-24 19:09         ` Matt Mackall
  2004-09-24 20:03         ` Lee Revell
  1 sibling, 0 replies; 35+ messages in thread
From: Matt Mackall @ 2004-09-24 19:09 UTC (permalink / raw)
  To: James Morris; +Cc: Theodore Ts'o, Jean-Luc Cooke, linux-kernel

On Fri, Sep 24, 2004 at 02:43:07PM -0400, James Morris wrote:
> On Fri, 24 Sep 2004, Theodore Ts'o wrote:
> 
> > have *any* encryption algorithms in the kernel at all.  As to whether
> > or not cryptoapi needs to be mandatory in the kernel, the question is
> > aside from /dev/random, do most people need to have crypto in the
> > kernel?  If they're not using ipsec, or crypto loop devices, etc.,
> > they might not want to have the crypto api in their kernel
> > unconditionally.
> 
> As far as I know embedded folk do not want the crypto API to be mandatory,
> although I think Matt Mackall wanted to try and make something work
> (perhaps a subset just for /dev/random use).

I want to move a couple critical hash algorithms into lib/ as has been done
with the CRC code. Then cryptoapi and /dev/random and a couple other
things (htree comes to mind) could share code without inflicting the
cryptoapi overhead and context limitations on everyone.

(currently about 4k messages behind on lkml, sorry for not chiming in sooner)

-- 
Mathematics is the supreme nostalgia of our time.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-24 18:43       ` James Morris
  2004-09-24 19:09         ` Matt Mackall
@ 2004-09-24 20:03         ` Lee Revell
  1 sibling, 0 replies; 35+ messages in thread
From: Lee Revell @ 2004-09-24 20:03 UTC (permalink / raw)
  To: James Morris; +Cc: Theodore Ts'o, Jean-Luc Cooke, linux-kernel, mpm

On Fri, 2004-09-24 at 14:43, James Morris wrote:
> On Fri, 24 Sep 2004, Theodore Ts'o wrote:
> 
> > have *any* encryption algorithms in the kernel at all.  As to whether
> > or not cryptoapi needs to be mandatory in the kernel, the question is
> > aside from /dev/random, do most people need to have crypto in the
> > kernel?  If they're not using ipsec, or crypto loop devices, etc.,
> > they might not want to have the crypto api in their kernel
> > unconditionally.
> 
> As far as I know embedded folk do not want the crypto API to be mandatory,
> although I think Matt Mackall wanted to try and make something work
> (perhaps a subset just for /dev/random use).

/dev/random used to be a source of high latencies, but Ingo's patches 
fix this.  There was not a lot of CPU overhead but the latency was was a
problem for serious audio use.  But, audio is a unique set of
requirements, it's somewhere between desktop and embedded and hard-RT.

This could certainly be a problem for the embedded folks due to space or
CPU concerns, but the latency problem seems to be solved.

Lee  


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-24 17:59       ` Jean-Luc Cooke
@ 2004-09-24 20:44         ` Scott Robert Ladd
  2004-09-24 21:34         ` Theodore Ts'o
  1 sibling, 0 replies; 35+ messages in thread
From: Scott Robert Ladd @ 2004-09-24 20:44 UTC (permalink / raw)
  To: Jean-Luc Cooke; +Cc: Theodore Ts'o, linux-kernel

Jean-Luc Cooke wrote:
> If I submitted a patch that gave users the choice of swapping my Fortuna for
> the current /dev/random, would you be cool with that then?

I would certainly appreciate this option, given that my customers often 
have very different ideas of what they need. I don't see how it hurts 
the kernel to have a choice for /dev/random.

-- 
Scott Robert Ladd
site: http://www.coyotegulch.com
blog: http://chaoticcoyote.blogspot.com

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-24 17:59       ` Jean-Luc Cooke
  2004-09-24 20:44         ` Scott Robert Ladd
@ 2004-09-24 21:34         ` Theodore Ts'o
  2004-09-25 14:51           ` Jean-Luc Cooke
  1 sibling, 1 reply; 35+ messages in thread
From: Theodore Ts'o @ 2004-09-24 21:34 UTC (permalink / raw)
  To: Jean-Luc Cooke; +Cc: linux-kernel

On Fri, Sep 24, 2004 at 01:59:29PM -0400, Jean-Luc Cooke wrote:
> > If they only want crypto-secure random numbers, they can do it in
> > userspace.  Information secure random numbers is something the kernel
> > can provide, because it has low-level access to entrpoy sources.  So
> > why not try to do the best possible job? 
> 
> Sure.  I hate Brittney Spears, but I will not deny people the choice.

The principle of avoiding kernel bloat means that if it doesn't have
to be done in the kernel, it should be done in userspace.  If all
you're providing is an CRNG, the question then is why should it be
done in kernel, when it could be done just as easily in userspace, and
using /dev/random as its input?

> > This is another red herring.  First of all, we're not using the hash
> > as a MAC, or in any way where we would care about collisions.
> > Secondly, all of the places where we take a hash, we are always doing
> > it 16 bytes at a time, which is SHA's block size, so that there's no
> > need for any padding.  And although you didn't complain about it,
> > that's also why we don't need to mix in the length in the padding;
> > extension attacks just simply aren't an issue, since the way we are
> > using the hash, that just simply an issue as far as the strength of
> > /dev/random.
> 
> Woh there.  Didn't you just say "see, these hashes are weakened.  That's
> bad".  Now I just demonstrated the same thing with your SHA1 implementation
> and you throw that "red-herring" phrase out again?

No, what I'm saying is that crypto primitives can get weakened; this
is a fact of life.  SHA-0, MD4, MD5, etc. are now useless as general
purpose cryptographic hashes.  Fortuna makes the assumptions that
crypto primitives will never break, as it relies on them so heavily.
I have a problem with this, since I remember ten years ago when people
were as confident in MD5 as you appear to be in SHA-256 today.

Crypto academics are fond of talking about how you can "prove" that
Fortuna is secure.  But that proof handwaves around the fact that we
have no capability of proving whether SHA-1, or SHA-256, is truly
secure.

In contrast, /dev/random doesn't have this dependence, which (a) is a
good thing, and (b) why it doesn't bother with the SHA finalization
step.  It's simply not necessary.

						- Ted

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-24  2:34 ` Jean-Luc Cooke
  2004-09-24  6:19   ` linux
@ 2004-09-24 21:42   ` linux
  2004-09-25 14:54     ` Jean-Luc Cooke
  1 sibling, 1 reply; 35+ messages in thread
From: linux @ 2004-09-24 21:42 UTC (permalink / raw)
  To: jlcooke; +Cc: cryptoapi, jmorris, linux-kernel, tytso

> What if I told the SHA-1 implementation in random.c right now is weaker
> than those hashs in terms of collisions?  The lack of padding in the
> implementation is the cause.  HASH("a\0\0\0\0...") == HASH("a") There
> are billions of other examples.

EXCUSE me?  You're a little unclear, so I don't want to be attacking strawmen
of my own devising, but are you claiming the failure to do Merkle-Damgaard
padding in the output mixing operation of /dev/random is a WEAKNESS?

If true, this is a level of cluelessness incompatible with being trusted
to design decent crypto.

The entire purpose of Merkle-Damgaard padding (also know as
Merkle-Damgaard strengthening) is to include the length in the data
hashed, to make hashing variable-sized messages as secure as fixed-size
messages.  If what you are hashing is, by design, always fixed-length,
this is completely unnecessary.

If I were designing a protocol for message interchange, I might add
the padding anyway, just to use pre-existing primitives easily, but
for a 100% internal use like a PRNG, let's see... I can reduce code
size AND implementation complexity AND run time without ANY security
consequences, and there are no interoperability issues...

I could argue it's a design flaw to *include* the padding.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-24 21:34         ` Theodore Ts'o
@ 2004-09-25 14:51           ` Jean-Luc Cooke
  0 siblings, 0 replies; 35+ messages in thread
From: Jean-Luc Cooke @ 2004-09-25 14:51 UTC (permalink / raw)
  To: Theodore Ts'o, linux-kernel

On Fri, Sep 24, 2004 at 05:34:52PM -0400, Theodore Ts'o wrote:
> > Woh there.  Didn't you just say "see, these hashes are weakened.  That's
> > bad".  Now I just demonstrated the same thing with your SHA1 implementation
> > and you throw that "red-herring" phrase out again?
> 
> No, what I'm saying is that crypto primitives can get weakened; this
> is a fact of life.  SHA-0, MD4, MD5, etc. are now useless as general
> purpose cryptographic hashes.  Fortuna makes the assumptions that
> crypto primitives will never break, as it relies on them so heavily.
> I have a problem with this, since I remember ten years ago when people
> were as confident in MD5 as you appear to be in SHA-256 today.

http://eprint.iacr.org/2004/207.pdf

SHA-256 showing indications of weakness.  Fortuna's algorithms can be
replaced at compile-time.  I may even consider doing them at run-time.

> Crypto academics are fond of talking about how you can "prove" that
> Fortuna is secure.  But that proof handwaves around the fact that we
> have no capability of proving whether SHA-1, or SHA-256, is truly
> secure.

Our issues are that we are *both* handwaving.

JLC

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-24 21:42   ` linux
@ 2004-09-25 14:54     ` Jean-Luc Cooke
  2004-09-25 18:43       ` Theodore Ts'o
  2004-09-26  2:31       ` linux
  0 siblings, 2 replies; 35+ messages in thread
From: Jean-Luc Cooke @ 2004-09-25 14:54 UTC (permalink / raw)
  To: linux; +Cc: jmorris, cryptoapi, tytso, linux-kernel

On Fri, Sep 24, 2004 at 09:42:30PM -0000, linux@horizon.com wrote:
> > What if I told the SHA-1 implementation in random.c right now is weaker
> > than those hashs in terms of collisions?  The lack of padding in the
> > implementation is the cause.  HASH("a\0\0\0\0...") == HASH("a") There
> > are billions of other examples.
> 
> EXCUSE me?  

...

> I could argue it's a design flaw to *include* the padding.

I was trying to point out a flaw in Ted's logic.  He said "we've recently
discoverd these hashs are weak because we found collsions.  Current
/dev/random doesn't care about this."

I certainly wasn't saying padding was a requirment.  But I was trying to
point out that the SHA-1 implementaion crrently in /dev/random by design is
collision vulnerable.  Collision resistance isn't a requirment for it's
purposes obviously.

Guess my pointing this out is a lost cause.

JLC


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-25 14:54     ` Jean-Luc Cooke
@ 2004-09-25 18:43       ` Theodore Ts'o
  2004-09-26  1:42         ` Jean-Luc Cooke
  2004-09-26  2:31       ` linux
  1 sibling, 1 reply; 35+ messages in thread
From: Theodore Ts'o @ 2004-09-25 18:43 UTC (permalink / raw)
  To: Jean-Luc Cooke; +Cc: linux, jmorris, cryptoapi, linux-kernel

On Sat, Sep 25, 2004 at 10:54:44AM -0400, Jean-Luc Cooke wrote:
> 
> I was trying to point out a flaw in Ted's logic.  He said "we've recently
> discoverd these hashs are weak because we found collsions.  Current
> /dev/random doesn't care about this."
> 
> I certainly wasn't saying padding was a requirment.  But I was trying to
> point out that the SHA-1 implementaion crrently in /dev/random by design is
> collision vulnerable.  Collision resistance isn't a requirment for it's
> purposes obviously.

You still haven't shown the flaw in the logic.  My point is that an
over-reliance on crypto primitives is dangerous, especially given
recent developments.  Fortuna relies on the crypto primitives much
more than /dev/random does.  Ergo, if you consider weaknesses in
crypto primitives to be a potential problem, then it might be
reasonable to take a somewhat more jaundiced view towards Fortuna
compared with other alternatives.

Whether or not /dev/random performs the SHA finalization step (which
adds the padding and the length to the hash) is completely and totally
irrelevant to this particular line of reasoning.  

And actually, not doing the padding does not make the crypto hash
vulnerable to collisions, as you claim.  This is because in
/dev/random, we are always using the full block size of the crypto
hash.  It is true that it is vulernable to extension attacks, but
that's irrelevant to this particular usage of the SHA-1 round
function.  Whether or not we should trust the design of something as
critical to the security of security applications as /dev/random to
someone who fails to grasp the difference between these two rather
basic issues is something I will leave to the others on LKML.

							- Ted

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-25 18:43       ` Theodore Ts'o
@ 2004-09-26  1:42         ` Jean-Luc Cooke
  2004-09-26  5:23           ` Theodore Ts'o
  2004-09-26  6:46           ` linux
  0 siblings, 2 replies; 35+ messages in thread
From: Jean-Luc Cooke @ 2004-09-26  1:42 UTC (permalink / raw)
  To: Theodore Ts'o, linux, jmorris, cryptoapi, linux-kernel

On Sat, Sep 25, 2004 at 02:43:52PM -0400, Theodore Ts'o wrote:
> You still haven't shown the flaw in the logic.  My point is that an
> over-reliance on crypto primitives is dangerous, especially given
> recent developments.  Fortuna relies on the crypto primitives much
> more than /dev/random does.  Ergo, if you consider weaknesses in
> crypto primitives to be a potential problem, then it might be
> reasonable to take a somewhat more jaundiced view towards Fortuna
> compared with other alternatives.

Correct me if I'm wrong here.

You claimed that the collision techniques found for the UFN design hashs
(sha0, md5, md5, haval, ripemd) demonstrated the need to not rely on hash
algorithms for a RNG.  Right?

And I showed that the SHA-1 in random.c now can produce collisions.  So, if
your argument against the fallen UFN hashs above (SHA-1 is a UFN hash also
btw.  We can probably expect more annoucments from the crypto community in
early 2005) should it not apply to SHA-1 in random.c?

Or did I misunderstand you?  Were you just mentioning the weakened algorithms
as a "what if they were more serious discoveries?  Wouldn't be be nice if we
didn't rely on them?" ?

The decision to place trust in a entropy estimation scheme vs. a crypto
algorithm we have different views on.  I can live with that.

> Whether or not /dev/random performs the SHA finalization step (which
> adds the padding and the length to the hash) is completely and totally
> irrelevant to this particular line of reasoning.  

I "completly and totally" agree.  I'm pointing out that no added padding
makes me, the new guy reading your code, work harder to decide if it's a
weakness.  You shouldn't do that to people if you can avoid it.  Just like
you shouldn't obfuscate code, even if it doesn't "weaken" its implementation.
It's just rude.  Take the performance penalty to avoid scaring people away
from a very important peice of the kernel.

> ... Whether or not we should trust the design of something as
> critical to the security of security applications as /dev/random to
> someone who fails to grasp the difference between these two rather
> basic issues is something I will leave to the others on LKML.

... biting my toung ... so hard it bleeds ...

The quantitaive aspects of the two RNGs in question are not being discussed.
It's the qualitative aspects we do not see eye to eye on.  So I will no
longer suggest replacing the status-quo.  I'd like to submit a patch to let
users chose at compile-time under Cryptographic options weither to drop in
Fortuna.

Ted, can we leave it at this?

JLC

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-25 14:54     ` Jean-Luc Cooke
  2004-09-25 18:43       ` Theodore Ts'o
@ 2004-09-26  2:31       ` linux
  1 sibling, 0 replies; 35+ messages in thread
From: linux @ 2004-09-26  2:31 UTC (permalink / raw)
  To: jlcooke; +Cc: cryptoapi, jmorris, linux-kernel, tytso

> I was trying to point out a flaw in Ted's logic.  He said "we've recently
> discoverd these hashes are weak because we found collsions.  Current
> /dev/random doesn't care about this."

And he's exactly right.  The only attack that would be vaguely relevant
to /dev/random's use would be a (first) preimage attack, and even that's
probably not helpful.

There *is* no flaw in his logic.  The attack we need to guard against
is, given hash(x) and a (currently mostly linear) state mixing function
mix(), one that would let you compute (partial information about)
y[i+1] = hash(x[i+1]) from y[1] = hash(x[1]) ... y[i] = hash(x[i])
where x[i] = mix(x[i-1]).

Given that y[i] is much smaller than x[i], you'd need to put together
a lot of them to derive something, and that's distinctly harder than
a single-output preimage attack.

> I certainly wasn't saying padding was a requirment.  But I was trying to
> point out that the SHA-1 implementaion crrently in /dev/random by design is
> collision vulnerable.  Collision resistance isn't a requirment for its
> purposes obviously.

No, it is, by design, 100% collision-resistant.  An attacker neither
sees nor controls the input x, so cannot use a collision attack.
Thus, it's resistant to collisions in the same way that it's resistant
to AIDS.

[There's actually a flaw in my logic.  I know Ted knows about it, because
he implemented a specific defense in the /dev/random code against it; it's
just not 100% information-theoretic ironclad.  If anyone else can spot
it, award yourself a clue point.  But it's still not a plausible attack.]

FURTHERMORE, even if an attacker *could* control the input, it's still
exactly as collision resistant as unmodified SHA-1.  Because it only
accepts fixed-size input blocks, padding is unnecessary and irrelevant
to security.  Careful padding is ONLY required if you are working with
VARIABLE-SIZED input.

The fact that collision resistance is not a security requirement is a
third point.

> Guess my pointing this out is a lost cause.

In much the same way that pointing out that the earth is flat is a
lost cause.  If you want people to believe nonsense, you need to dress
it up a lot and call it a religion.

As for Ted's words:
> Whether or not we should trust the design of something as
> critical to the security of security applications as /dev/random to
> someone who fails to grasp the difference between these two rather
> basic issues is something I will leave to the others on LKML.

Fortuna may be a good idea after all (I disagree, but I can imagine
being persuaded otherwise), but it has a very bad advocate right now.
Would anyone else like to pick up the torch?


By the way, I'd like to repeat my earlier question: you say Fortuna ia
well-regarded in crypto circles.  Can you cite a single paper to back
that conclusion?  Name a single well-known cryptographer, other than
the authors, who has looked at it in some detail?

There might be one, but I don't know of any.  I respect the authors
enough to know that even they recognize that an algorithm's designers
sometimes have blind spots.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-26  1:42         ` Jean-Luc Cooke
@ 2004-09-26  5:23           ` Theodore Ts'o
  2004-09-27  0:50             ` linux
  2004-09-26  6:46           ` linux
  1 sibling, 1 reply; 35+ messages in thread
From: Theodore Ts'o @ 2004-09-26  5:23 UTC (permalink / raw)
  To: Jean-Luc Cooke; +Cc: linux, jmorris, cryptoapi, linux-kernel

On Sat, Sep 25, 2004 at 09:42:18PM -0400, Jean-Luc Cooke wrote:
> On Sat, Sep 25, 2004 at 02:43:52PM -0400, Theodore Ts'o wrote:
> > You still haven't shown the flaw in the logic.  My point is that an
> > over-reliance on crypto primitives is dangerous, especially given
> > recent developments.  Fortuna relies on the crypto primitives much
> > more than /dev/random does.  Ergo, if you consider weaknesses in
> > crypto primitives to be a potential problem, then it might be
> > reasonable to take a somewhat more jaundiced view towards Fortuna
> > compared with other alternatives.
> 
> Correct me if I'm wrong here.
> 
> You claimed that the collision techniques found for the UFN design hashs
> (sha0, md5, md5, haval, ripemd) demonstrated the need to not rely on hash
> algorithms for a RNG.  Right?

For Fortuna, correct.  This is why I believe /dev/random's current
design to be superior.

> And I showed that the SHA-1 in random.c now can produce collisions.  So, if
> your argument against the fallen UFN hashs above (SHA-1 is a UFN hash also
> btw.  We can probably expect more annoucments from the crypto community in
> early 2005) should it not apply to SHA-1 in random.c?

(1) Your method of "producing collisions" assumed that /dev/random was
of the form HASH("a\0\0\0...") == HASH("a) --- i.e., you were
kvetching about the lack of padding.  But we've already agreed that
the padding argument isn't applicable for /dev/random, since it only
hashes block-sizes at the same time.  (2) Even if there were real
collisions demonstrated in SHA-1's cryptographic core at some point in
the future, it wouldn't harm the security of the algorithm, since
/dev/random doesn't depend on SHA-1 being resistant against
collisions.  (Similarly, HMAC-MD5 is still safe for now since it also
is designed such that the ability to find collisions do not harm its
security.  It's a matter of how you use the cryptographic primitives.)

> Or did I misunderstand you?  Were you just mentioning the weakened algorithms
> as a "what if they were more serious discoveries?  Wouldn't be be nice if we
> didn't rely on them?" ?

That's correct.  It is my contention that Fortuna is brittle in this
regard, especially in comparison to /dev/random current design.

And you still haven't pointed out the logic flaw in any argument but
your own.

						- Ted

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-26  1:42         ` Jean-Luc Cooke
  2004-09-26  5:23           ` Theodore Ts'o
@ 2004-09-26  6:46           ` linux
  2004-09-26 16:32             ` Jean-Luc Cooke
  1 sibling, 1 reply; 35+ messages in thread
From: linux @ 2004-09-26  6:46 UTC (permalink / raw)
  To: jlcooke; +Cc: cryptoapi, jmorris, linux, linux-kernel, tytso

> You claimed that the collision techniques found for the UFN design hashs
> (sha0, md5, md5, haval, ripemd) demonstrated the need to not rely on hash
> algorithms for a RNG.  Right?

I'm putting words into Ted's mouth, but it seemed clear to me he said it
was good not to rely *entirely* on the ahsh algorithms.

> And I showed that the SHA-1 in random.c now can produce collisions.

This, I do not recall.  I must have missed it.  Will you please show me
two inputs that, when fed to the SHA-1 in random.c, will produce
identical output?

> So, if your argument against the fallen UFN hashs above (SHA-1 is a UFN
> hash also btw.  We can probably expect more annoucments from the crypto
> community in early 2005) should it not apply to SHA-1 in random.c?

No, not at all.  The point is that the current random.c design DOES NOT
RELY on the security of the hash function.  Ted could drop MD4 in there
and it still couldn't be broken, although using a better-regarded hash
function just feels better.

> Or did I misunderstand you?  Were you just mentioning the weakened algorithms
> as a "what if they were more serious discoveries?  Wouldn't be be nice if we
> didn't rely on them?" ?

Yes.  And Fortuna's *only* layer of armor is the block cipher.  Yes,
it's a damn good layer of armor, but defense in depth sure helps.

That is NOT to say that lots of half-assed algorithms piled on top of
each other makes good crypto, but if you can have a good primitive and
*then* use it safely as well, that's better.

For example, AES is supposed to be resistant to adaptive chosen
plaintext/ciphertext attacks.  Suppose you are given two ciphertexts
and two corresponding plaintexts, but not which corresponds to which.
And then you are given access to an oracle which will, using the same
key as was used on the plaintext/ciphertext pairs, give you the plaintext
for any ciphertet that's not one of the two, and the ciphertext for any
plaintext that's not one of the two.  The orace can answer basically an
infinite number of questions (well, 2^128-2) and you can look at one set
of answers before posing the next.

AES is supposed to prevent you from figuring out, with all that help,
which plaintext of the two goes with which ciphertext, with more than 50%
certainty.  I.e. you are given an infinite series of such challenges and
offered even-odds bets on your answer.  In the long run, you shouldn't
be able to make money.

Yes, AES *should* be able to hold up even to that, but that's really
placing all your eggs in one basket.  If you can give it more help
without weakening other parts, that's Good Design.

If I'm designing a protocol, I'll try to design it so that an attacker
*doesn't* have access to such an oracle, or the responses are too slow
to make billions of them, or asking more than a few dozen questions will
raise alarms, or some such.  I'll change keys so the time in which an
attacker has to mount their attack is limited.  I'll do any of a number
of things which let the German navy keep half of their U-boat traffic
out of the hands of Bletchley park even through they didn't know there
were vast gaping holes in the underlying cipher.

> The decision to place trust in a entropy estimation scheme vs. a crypto
> algorithm we have different views on.  I can live with that.

Better crypto is fine.  But why *throw out* the entropy estimation and
rely *entirely* on the crypto?  Feel free to argue that the crypto in
Fortuna is better (although Ted is making some strong points that it
*isn't*), but is it necessary to throw the baby out with the bathwater?
Can't you get the best of both worlds?

> I "completly and totally" agree.  I'm pointing out that no added padding
> makes me, the new guy reading your code, work harder to decide if it's a
> weakness.  You shouldn't do that to people if you can avoid it.

Sorry, but if you know enough to know why the padding is necessary, you
should know when it isn't.  Feel free to say "isn't this a weakness?
I read in $BOOK that that padding was important to prevent some attacks"
and propose a comment patch.  But to say "this is crap because I don't
understand one little detail and you should replace it with my shiny
new 2005 model" when it's your ignorance and not a real problem is
unbelievably arrogant.

> Just like you shouldn't obfuscate code, even if it doesn't "weaken"
> its implementation.  It's just rude.  Take the performance penalty to
> avoid scaring people away from a very important peice of the kernel.

Tell it to the marines.  I'd say "tell it to Linus", because he'll laugh
louder, but his time is valuable to me.

Part of the Linux developer's credo, learned at Linus' knee, is that
Performance Matters.  If you don't worry about 5% all the time, after 15
revisions you've running at half speed and it's a lot of work to catch up.

The -mm guys have been doing backflips for years to try to get good
paging behaviour without high run-time overhead.  This is one of the
major reasons why the kernel refuses to promise a stable binary
interface to kernel modules.  Rearranging the order of fields in a
strucure for better cache performance is a minor revision.

In fact, large parts of /dev/random deliberately *don't* care about
performance.  The entire output mixing stage is not performance
critical, and is deliberately slow.

What *is* critical is the input mixing stage, because that happens at
interrupt time, and many many people care passionately about interrupt
latency.  And /dev/random wants to be non-optional, always there for
people to use so they don't have to invent their own half-assed
equivalent.

> The quantitaive aspects of the two RNGs in question are not being discussed.
> It's the qualitative aspects we do not see eye to eye on.  So I will no
> longer suggest replacing the status-quo.  I'd like to submit a patch to let
> users chose at compile-time under Cryptographic options weither to drop in
> Fortuna.
> 
> Ted, can we leave it at this?

You're welcome to write the patch.  But I have to warn you, if you
hope to get it into the standard kernel rather than just have a
separately maintained patch, you'll need to persuade Linus or someone
he trusts (who in theis case is probably Ted) that your patch is
a) better in some way or another than the existing code, and
b) important enough to warrant the maintenance burden that having
   two sets of equivalent code imposes.

You're being offered a lot of clues.  Please, take some.

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-26  6:46           ` linux
@ 2004-09-26 16:32             ` Jean-Luc Cooke
  0 siblings, 0 replies; 35+ messages in thread
From: Jean-Luc Cooke @ 2004-09-26 16:32 UTC (permalink / raw)
  To: linux; +Cc: jmorris, cryptoapi, tytso, linux-kernel

On Sun, Sep 26, 2004 at 06:46:17AM -0000, linux@horizon.com wrote:
> > And I showed that the SHA-1 in random.c now can produce collisions.
> 
> This, I do not recall.  I must have missed it.  Will you please show me
> two inputs that, when fed to the SHA-1 in random.c, will produce
> identical output?

SHA-1 without padding, sure.

hash("a") = hash("a\0") = hash("a\0\0") = ...
hash("b") = hash("b\0") = hash("b\0\0") = ...
hash("c") = hash("c\0") = hash("c\0\0") = ...

I've failed in my attempt to present a good argument for Fortuna.  Guess I'll
just sit on this patch.  Is this above a big issue?  No because as you two
pointed out the hash() uses full block sizes.

This is a trying thread for me to continue, by no fault of yours.  I thought
I made it very clear when I started that I saw *no* vulnerability in the
current /dev/random.  This did not prevent Ted and yourself to ignore this
statement as immediately assume when I say "you could have done this better"
to mean "ha!  I've hax0rd your silly code, I'm l33t." - an infuriating blow
to my professionalism.  Then I simply follow that up with insult to injury
by trying to clear up the whole mess and only making things worse.

> > Or did I misunderstand you?  Were you just mentioning the weakened algorithms
> > as a "what if they were more serious discoveries?  Wouldn't be be nice if we
> > didn't rely on them?" ?
> 
> Yes.  And Fortuna's *only* layer of armor is the block cipher.  Yes,
> it's a damn good layer of armor, but defense in depth sure helps.
> 
> That is NOT to say that lots of half-assed algorithms piled on top of
> each other makes good crypto, but if you can have a good primitive and
> *then* use it safely as well, that's better.
> 
> For example, AES is supposed to be resistant to adaptive chosen
> plaintext/ciphertext attacks.  Suppose you are given two ciphertexts
> and two corresponding plaintexts, but not which corresponds to which.
> And then you are given access to an oracle which will, using the same
> key as was used on the plaintext/ciphertext pairs, give you the plaintext
> for any ciphertet that's not one of the two, and the ciphertext for any
> plaintext that's not one of the two.  The orace can answer basically an
> infinite number of questions (well, 2^128-2) and you can look at one set
> of answers before posing the next.
> 
> AES is supposed to prevent you from figuring out, with all that help,
> which plaintext of the two goes with which ciphertext, with more than 50%
> certainty.  I.e. you are given an infinite series of such challenges and
> offered even-odds bets on your answer.  In the long run, you shouldn't
> be able to make money.
> 
> Yes, AES *should* be able to hold up even to that, but that's really
> placing all your eggs in one basket.  If you can give it more help
> without weakening other parts, that's Good Design.
> 
> If I'm designing a protocol, I'll try to design it so that an attacker
> *doesn't* have access to such an oracle, or the responses are too slow
> to make billions of them, or asking more than a few dozen questions will
> raise alarms, or some such.  I'll change keys so the time in which an
> attacker has to mount their attack is limited.  I'll do any of a number
> of things which let the German navy keep half of their U-boat traffic
> out of the hands of Bletchley park even through they didn't know there
> were vast gaping holes in the underlying cipher.

If say, the key for the AES256-CTR layer changed after every block-read from
/dev/random?

> > The decision to place trust in a entropy estimation scheme vs. a crypto
> > algorithm we have different views on.  I can live with that.
> 
> Better crypto is fine.  But why *throw out* the entropy estimation and
> rely *entirely* on the crypto?  Feel free to argue that the crypto in
> Fortuna is better (although Ted is making some strong points that it
> *isn't*), but is it necessary to throw the baby out with the bathwater?
> Can't you get the best of both worlds?

My past arguments for removing entropy estimation were hand-waving at best
(rate of /dev/random output ~= rate of event sources' activity like
keyboards, disks, etc).  This could (not likely) lead to information about
what the system is doing.  If an attacker could open and close tcp ports, or
ping an ethernet card to generate IRQs which are fed into the PRNG and
increasing the entropy count - would this be usable in an attack?  Not likely.
Would you want to close-off this avenue of attack?  Majority seems to say
"no", but I personally would like to.  And that is where my argument falls
apart.

> > I "completly and totally" agree.  I'm pointing out that no added padding
> > makes me, the new guy reading your code, work harder to decide if it's a
> > weakness.  You shouldn't do that to people if you can avoid it.
> 
> Sorry, but if you know enough to know why the padding is necessary, you
> should know when it isn't.  Feel free to say "isn't this a weakness?
> I read in $BOOK that that padding was important to prevent some attacks"
> and propose a comment patch.  But to say "this is crap because I don't
> understand one little detail and you should replace it with my shiny
> new 2005 model" when it's your ignorance and not a real problem is
> unbelievably arrogant.

Sigh.  Perhaps I need to be excruciatingly clear:
  - SHA1-nopadding() is less secure than SHA1-withpadding()
  - It doesn't apply to random.c

I though it was clear ... clearly I was delusional.

> > Just like you shouldn't obfuscate code, even if it doesn't "weaken"
> > its implementation.  It's just rude.  Take the performance penalty to
> > avoid scaring people away from a very important peice of the kernel.
> 
> Tell it to the marines.  I'd say "tell it to Linus", because he'll laugh
> louder, but his time is valuable to me.
> 
> Part of the Linux developer's credo, learned at Linus' knee, is that
> Performance Matters.  If you don't worry about 5% all the time, after 15
> revisions you've running at half speed and it's a lot of work to catch up.

I see.  And in the -mm examples, is the code easily readable for other
os-MemMgt types?  If no, then I guess random.c is not the exception and I
apologize.

> What *is* critical is the input mixing stage, because that happens at
> interrupt time, and many many people care passionately about interrupt
> latency.  And /dev/random wants to be non-optional, always there for
> people to use so they don't have to invent their own half-assed
> equivalent.

And the ring-buffer system which delays the expensive mixing stages untill a
a sort interrupt does a great job (current and my fortuna-patch).  Different
being, fortuna-patch appears to be 2x faster.

> > The quantitaive aspects of the two RNGs in question are not being discussed.
> > It's the qualitative aspects we do not see eye to eye on.  So I will no
> > longer suggest replacing the status-quo.  I'd like to submit a patch to let
> > users chose at compile-time under Cryptographic options weither to drop in
> > Fortuna.
> > 
> > Ted, can we leave it at this?
> 
> You're welcome to write the patch.  But I have to warn you, if you
> hope to get it into the standard kernel rather than just have a
> separately maintained patch, you'll need to persuade Linus or someone
> he trusts (who in theis case is probably Ted) that your patch is
> a) better in some way or another than the existing code, and
> b) important enough to warrant the maintenance burden that having
>    two sets of equivalent code imposes.
> 
> You're being offered a lot of clues.  Please, take some.

I appreciate the feedback for what it's worth.  Thanks.

JLC

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-26  5:23           ` Theodore Ts'o
@ 2004-09-27  0:50             ` linux
  2004-09-27 13:07               ` Jean-Luc Cooke
  2004-09-27 14:23               ` Theodore Ts'o
  0 siblings, 2 replies; 35+ messages in thread
From: linux @ 2004-09-27  0:50 UTC (permalink / raw)
  To: jlcooke; +Cc: cryptoapi, jmorris, linux-kernel, linux, tytso

>> This, I do not recall.  I must have missed it.  Will you please show me
>> two inputs that, when fed to the SHA-1 in random.c, will produce
>> identical output?

> SHA-1 without padding, sure.

> hash("a") = hash("a\0") = hash("a\0\0") = ...
> hash("b") = hash("b\0") = hash("b\0\0") = ...
> hash("c") = hash("c\0") = hash("c\0\0") = ...

And how do I hash one byte with SHA-1 *without padding*?  The only
hashing code I can find in random.c works 64 bytes at a time.
What are the other 63 bytes?

(I agree that that *naive* padding leads to collisions, but random.c
doesn't do ANY padding.)

> I see.  And in the -mm examples, is the code easily readable for other
> os-MemMgt types?  If no, then I guess random.c is not the exception and I
> apologize.

The Linux core -mm code is a fairly legendary piece of Heavy Wizardry.
To paraphrase, "do not meddle in the affairs of /usr/src/linux/mm/, for
it is subtle and quick to anger."  There *are* people who understand it,
and it *is* designed (not a decaying pile of old hacks that *nobody*
understands how it works like some software), but it's also a remarkably
steep learning curve.  A basic overview isn't so hard to acquire, but the
locking rules have subtle details.  There are places where someone very good
noticed that a given lock doesn't have to be taken on a fast path if you
avoid doing certain things anywhere else that you'd think would be legal.

And so if someone tries to add code to do the "obvious" thing, the
lock-free fast path develops a race condition.  And we all know what
fun race conditions are to debug.

Fortunately, some people see this as a challenge and Linux is blessed with
some extremely skilled VM hackers.  And some of them even write and publish
books on the subject.  But while a working VM system can be clear, making it
go fast leads to a certain amount of tension with the clarity goal.

> And the ring-buffer system which delays the expensive mixing stages untill a
> a sort interrupt does a great job (current and my fortuna-patch).  Difference
> being, fortuna-patch appears to be 2x faster.

Ooh, cool!  Must play with to steal the speed benefits.  Thank you!

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-23 23:43 [PROPOSAL/PATCH] Fortuna PRNG in /dev/random Jean-Luc Cooke
  2004-09-24  4:38 ` Theodore Ts'o
@ 2004-09-27  4:58 ` Theodore Ts'o
       [not found]   ` <20040927133203.GF28317@certainkey.com>
  1 sibling, 1 reply; 35+ messages in thread
From: Theodore Ts'o @ 2004-09-27  4:58 UTC (permalink / raw)
  To: Jean-Luc Cooke; +Cc: linux-kernel


I recently posted the following article on sci.crypt, which has a more
detailed analysis of the design of JLC's proposed patch to random.c

					- Ted


From: tytso@mit.edu (Theodore Y. Ts'o)
Subject: Re: new /dev/random
Newsgroups: sci.crypt
Date: 27 Sep 2004 00:05:32 -0400
Organization: Massachusetts Institute of Technology

Paul Rubin <http://phr.cx@NOSPAM.invalid> writes:
>Huh?  JLC's patch *is* Fortuna.  

Actually, it isn't Fortuna.  But more on that in a moment....

>However, IMO, JLC's patch (Fortuna)
>should not go into the kernel in its present form, and the kernel
>maintainers should reject it.  It should not be a configuration
>option.  It has too much potential of screwing the user, until the
>entropy accounting is restored.

The problem is that Fortuna's design isn't really particularly
compatible with entropy accounting.  Each pool only contains 256 bits,
and by definition, the pool can not possibly store more entropy than
that.  Once you extract 256 bits, you have to wait a second before you
can drain whatever entropy might be in pool #1, then two seconds
before you can drain whatever entropy might be in pool #2, then four
seconds before you can drain whataver might be in pool #3, and so on.
This means that even if all of the pools are completely filled, in
order to extract 2048 bits of entropy (for an long-term RSA key pair,
for example), this would require waiting for a little over 4 minutes
(255 seconds, to be precise).  To extract 4096 bits of entropy, we
would have to wait 18 hours, 12 minutes, and 15 seconds (65535 seconds).

Indeed, one of the complaints that I have about the whole Fortuna
design is that from the entropy perspective, 25% of the entropy is
stashed away in pools that will never be used for over six *months*,
with 50% of the pools never getting used until after 18 hours or more.
Of course in that time, those pools will get filled, refilled and
overfilled, many times over, uselessly wasting entropy.  Entropy is a
precious resource; it should not be so thoughtlessly squandered.

.... but of course, waiting over 18 hours to before sufficient amounts
of entropy cascades through the pool structure in order to generate a
4096-bit RSA key isn't a problem with JLC's patch, because it doesn't
implement the 2^k second delay for each pool, as specified by the
Fortuna design.  Instead, it reseeds at every call to extract_entropy,
and every 2^k reseeds, it uses a particular pool.  But in order to
provide resistance to the state-extension attack --- which is the only
justification for replacing /dev/random's current algorithm with
Fortuna, and Fortuna's raison de etre --- you have to wait until a
pool has a sufficient amount of entropy in order to provide for a
catastrophic reseeding.  Because the rate at which the pools are drawn
down is dependent on the extraction rate, not based on a time basis,
or based on some estimate of the amount of entropy collected in each
of the pools, JLC's proposed patch is vulnerable to state extension
attack.  In other words, the proposed patch doesn't even do what it
sets out to do!!

P.S.  Despite the fact that JLC's patch is vulnerable to the state
extension attack, because it does not faithfully implement the Fortuna
design, it still squanders entropy.  In fact, because under normal
operations, reads to /dev/random are happen even less frequently than
once a second, over 50% of the collected entropy could be stored for
**years** before it is ever used, with the net result that the
high-level entropy pools will get overfilled, and the entropy wasted.
This is despite the fact that in the attack scenario, the attacker can
still force the high-order pools to be used before sufficient entropy
can be stored.  So with respect to these two defects, it is the worst
of both worlds.

P.P.S.  Despite the fact that JLC's patch defines a #define
RANDOM_RESEED_INTERVAL, which might lead one to believe that it is
using a time-based cascading, in fact, that #define is never used in
his patch.  Despite the fact that a certain party has been seen
whining about "obfuscated" code being hard being "rude", I won't go
down that particular path.  Nevertheless, the JLC's patch, with a
profusion of unsued #define's, and dead code from the original
/dev/random that is incompletely removed, has obfuscation not from the
subtle design standpoint, but from the sloppy coding perspective,
which IMHO is far worse (although of course, this can be corrected).
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Theodore Ts'o			http://web.mit.edu/user/tytso/www/
   Everybody's playing the game, but nobody's rules are the same!

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-27  0:50             ` linux
@ 2004-09-27 13:07               ` Jean-Luc Cooke
  2004-09-27 14:23               ` Theodore Ts'o
  1 sibling, 0 replies; 35+ messages in thread
From: Jean-Luc Cooke @ 2004-09-27 13:07 UTC (permalink / raw)
  To: linux; +Cc: jmorris, cryptoapi, tytso, linux-kernel

On Mon, Sep 27, 2004 at 12:50:33AM -0000, linux@horizon.com wrote:
> > SHA-1 without padding, sure.
> 
> > hash("a") = hash("a\0") = hash("a\0\0") = ...
> > hash("b") = hash("b\0") = hash("b\0\0") = ...
> > hash("c") = hash("c\0") = hash("c\0\0") = ...
> 
> And how do I hash one byte with SHA-1 *without padding*?  The only
> hashing code I can find in random.c works 64 bytes at a time.
> What are the other 63 bytes?
> 
> (I agree that that *naive* padding leads to collisions, but random.c
> doesn't do ANY padding.)

And I guess it is my fault to assume "no padding" is naive padding.

> > I see.  And in the -mm examples, is the code easily readable for other
> > os-MemMgt types?  If no, then I guess random.c is not the exception and I
> > apologize.
> 
> The Linux core -mm code is a fairly legendary piece of Heavy Wizardry.
> To paraphrase, "do not meddle in the affairs of /usr/src/linux/mm/, for
> it is subtle and quick to anger."  There *are* people who understand it,
> and it *is* designed (not a decaying pile of old hacks that *nobody*
> understands how it works like some software), but it's also a remarkably
> steep learning curve.  A basic overview isn't so hard to acquire, but the
> locking rules have subtle details.  There are places where someone very good
> noticed that a given lock doesn't have to be taken on a fast path if you
> avoid doing certain things anywhere else that you'd think would be legal.
> 
> And so if someone tries to add code to do the "obvious" thing, the
> lock-free fast path develops a race condition.  And we all know what
> fun race conditions are to debug.
> 
> Fortunately, some people see this as a challenge and Linux is blessed with
> some extremely skilled VM hackers.  And some of them even write and publish
> books on the subject.  But while a working VM system can be clear, making it
> go fast leads to a certain amount of tension with the clarity goal.

Freightning ... but informative thank you.

> > And the ring-buffer system which delays the expensive mixing stages untill a
> > a sort interrupt does a great job (current and my fortuna-patch).  Difference
> > being, fortuna-patch appears to be 2x faster.
> 
> Ooh, cool!  Must play with to steal the speed benefits.  Thank you!

I'll have a patch for a "enable in crypto options" and "blocking with entropy
estimation" random-fortuna.c patch this week.  My fiance is out of town and
there should be time to hack one up.

JLC

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-27  0:50             ` linux
  2004-09-27 13:07               ` Jean-Luc Cooke
@ 2004-09-27 14:23               ` Theodore Ts'o
  2004-09-27 14:42                 ` Jean-Luc Cooke
  1 sibling, 1 reply; 35+ messages in thread
From: Theodore Ts'o @ 2004-09-27 14:23 UTC (permalink / raw)
  To: linux; +Cc: jlcooke, cryptoapi, jmorris, linux-kernel

On Mon, Sep 27, 2004 at 12:50:33AM -0000, linux@horizon.com wrote:
> > And the ring-buffer system which delays the expensive mixing stages untill a
> > a sort interrupt does a great job (current and my fortuna-patch).  Difference
> > being, fortuna-patch appears to be 2x faster.
> 
> Ooh, cool!  Must play with to steal the speed benefits.  Thank you!

The speed benefits come from the fact that /dev/random is currently
using a large pool to store entropy, and so we end up taking cache
line misses as we access the memory.  Worse yet, the cache lines are
scattered across the memory (due to the how the LFSR works), and we're
using/updating information from the pool 32 bits at a time.  In
contrast, in JLC's patch, each pool only has enough space for 256 bits
of entropy (assuming the use of SHA-256), and said 256 bits are stored
packed next to each other, so it can fetch the entire pool in one or
two cache lines.

This is somewhat fundamental to the philosophical question of whether
you store a large amount of entropy, taking advantage of the fact that
the kernel has easy access to hardware-generated entropy, or use tiny
pools and put a greater faith in crypto primitives.

So the bottom line is that while Fortuna's input mixing uses more CPU
(ALU) resources, /dev/random is slower because of memory latency
issue.  On processors with Hyperthreading / SMT enabled (which seems
to be the trend across all architectures --- PowerPC, AMD64, Intel,
etc.), the memory latency usage may be less important, since other
tasks will be able to use the other (virtual) half of the CPU while
the entropy mixing is waiting on the memory access to complete.  On
the other hand, it does mean that we're chewing up a slightly greater
amount of memory bandwidth during the entropy mixing process.  Whether
or not any of this is actually measurable during real-life mixing is
an interesting and non-obvious question.

						- Ted

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-27 14:23               ` Theodore Ts'o
@ 2004-09-27 14:42                 ` Jean-Luc Cooke
  0 siblings, 0 replies; 35+ messages in thread
From: Jean-Luc Cooke @ 2004-09-27 14:42 UTC (permalink / raw)
  To: Theodore Ts'o, linux, cryptoapi, jmorris, linux-kernel

On Mon, Sep 27, 2004 at 10:23:52AM -0400, Theodore Ts'o wrote:
> On Mon, Sep 27, 2004 at 12:50:33AM -0000, linux@horizon.com wrote:
> > > And the ring-buffer system which delays the expensive mixing stages untill a
> > > a sort interrupt does a great job (current and my fortuna-patch).  Difference
> > > being, fortuna-patch appears to be 2x faster.
> > 
> > Ooh, cool!  Must play with to steal the speed benefits.  Thank you!
> 
> This is somewhat fundamental to the philosophical question of whether
> you store a large amount of entropy, taking advantage of the fact that
> the kernel has easy access to hardware-generated entropy, or use tiny
> pools and put a greater faith in crypto primitives.

Tiny in that at most you can only pull out 256bits of entropy from one pool,
you are correct.  SHA-256 buffers 64 bytes at time.  The transform requires
512 bytes for its mixing.  The mixing of the 512 byte W[] array is done
serially.

random_state->pool is POOLBYTES in size.  Which is poolwords*4, which is
DEFAULT'd to 512 bytes.  The "5 tap" LFSR reaches all over that 512byte
memory for its mixing.

If page sizes get big enough and we page-align the pool[] member, the
standard RNG will get faster.

JLC

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
       [not found]   ` <20040927133203.GF28317@certainkey.com>
@ 2004-09-27 14:55     ` Theodore Ts'o
  2004-09-27 15:19       ` Jean-Luc Cooke
  0 siblings, 1 reply; 35+ messages in thread
From: Theodore Ts'o @ 2004-09-27 14:55 UTC (permalink / raw)
  To: Jean-Luc Cooke; +Cc: linux-kernel

On Mon, Sep 27, 2004 at 09:32:03AM -0400, Jean-Luc Cooke wrote:
> 
> I'll read over this once I finish re-writing my patch to use your entropy
> estimation.

While you're at it, please re-read RFC 793 and RFC 1185.  You still
don't have TCP sequence generation done right.  The global counter
is being increased for every TCP connection, and with only eight bits,
it can wrap very frequently.  Encrypting the source/destination
address/port tuple and using that as an offset to the global clock,
and then only bumping the counter when you rekey would be much more in
the spirit of RFC 1185, and would result in sequence numbers much less
likely to cause stale packets to get mistakenly accepted.

I'm still a bit concerned about whether doing AES is going to be a
speed issue.  Your comparisons against MD4 using openssl don't really
prove much, because (a) the original code used a cut-down MD4, and (b)
the openssl benchmark does a large number of encryptions and nothing
else, so all of the AES key schedule and tables will be in cache. 

The only real way to settle this would be to ask Jamal and some of the
other networking hackers to repeat their benchmarks and see if the AES
encryption for every TCP SYN is a problem or not.  CPU's have gotten
faster (but then again so have networks, and memory has *not* gotten
much faster), so only a real benchmark will tell us for sure.

					- Ted

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-27 14:55     ` Theodore Ts'o
@ 2004-09-27 15:19       ` Jean-Luc Cooke
  0 siblings, 0 replies; 35+ messages in thread
From: Jean-Luc Cooke @ 2004-09-27 15:19 UTC (permalink / raw)
  To: Theodore Ts'o, linux-kernel

Thanks.

My re-writing wilol appear as more like an editorial revision than a
re-write.

I will certainly talk to Jamal et al.  Thanks

JLC

On Mon, Sep 27, 2004 at 10:55:55AM -0400, Theodore Ts'o wrote:
> On Mon, Sep 27, 2004 at 09:32:03AM -0400, Jean-Luc Cooke wrote:
> > 
> > I'll read over this once I finish re-writing my patch to use your entropy
> > estimation.
> 
> While you're at it, please re-read RFC 793 and RFC 1185.  You still
> don't have TCP sequence generation done right.  The global counter
> is being increased for every TCP connection, and with only eight bits,
> it can wrap very frequently.  Encrypting the source/destination
> address/port tuple and using that as an offset to the global clock,
> and then only bumping the counter when you rekey would be much more in
> the spirit of RFC 1185, and would result in sequence numbers much less
> likely to cause stale packets to get mistakenly accepted.
> 
> I'm still a bit concerned about whether doing AES is going to be a
> speed issue.  Your comparisons against MD4 using openssl don't really
> prove much, because (a) the original code used a cut-down MD4, and (b)
> the openssl benchmark does a large number of encryptions and nothing
> else, so all of the AES key schedule and tables will be in cache. 
> 
> The only real way to settle this would be to ask Jamal and some of the
> other networking hackers to repeat their benchmarks and see if the AES
> encryption for every TCP SYN is a problem or not.  CPU's have gotten
> faster (but then again so have networks, and memory has *not* gotten
> much faster), so only a real benchmark will tell us for sure.
> 
> 					- Ted

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
@ 2004-09-27 18:53 Manfred Spraul
  2004-09-27 19:45 ` Jean-Luc Cooke
  0 siblings, 1 reply; 35+ messages in thread
From: Manfred Spraul @ 2004-09-27 18:53 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: Jean-Luc Cooke, Linux Kernel Mailing List

On Mon, Sep 27, 2004 at 10:55:55AM -0400, Theodore Ts'o wrote:
> 
> While you're at it, please re-read RFC 793 and RFC 1185.  You still
> don't have TCP sequence generation done right.

Actually trying to replace the partial MD4 might be worth an attempt: 
I'm certain that the partial MD4 is not the best/fastest way to generate 
sequence numbers.
 >
 >The only real way to settle this would be to ask Jamal and some of the
 >other networking hackers to repeat their benchmarks and see if the AES
 >encryption for every TCP SYN is a problem or not.
 >
It would be unfair: The proposed implementation is not optimized - e.g. 
the sequence number generation runs under a global spinlock. On large 
SMP systems this will kill the performance, regardless of the internal 
implementation.

For the Linux-variant of RFC 1948, the sequence number generation can be 
described as:
A hash function that generates 24 bit output from 96 bit input. Some of 
the input bits can be chosen by the attacker, all of these bits are 
known to the attacker. The attacker can query the output of the hash for 
some inputs - realistically less than 2^16 to 2^20 inputs. A successful 
attack means guessing the output of the hash function for one of the 
inputs that the attacker can't query.

Current implementation:
Set the MD4 initialization vector to the 96 bit input plus 32 secret, 
random bits.
Perform an MD4 hash over 256 secret, random bits.
Take the lowest 24 bits from one of the MD4 state words.
Every 5 minutes the secret bits are reset.

For IPV6, the requirements are similiar, except that the input is 288 
bits long.

--
    Manfred

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-27 18:53 Manfred Spraul
@ 2004-09-27 19:45 ` Jean-Luc Cooke
  2004-09-28  0:07   ` Theodore Ts'o
  0 siblings, 1 reply; 35+ messages in thread
From: Jean-Luc Cooke @ 2004-09-27 19:45 UTC (permalink / raw)
  To: Manfred Spraul; +Cc: Theodore Ts'o, Linux Kernel Mailing List

On Mon, Sep 27, 2004 at 08:53:56PM +0200, Manfred Spraul wrote:
> On Mon, Sep 27, 2004 at 10:55:55AM -0400, Theodore Ts'o wrote:
> >
> >While you're at it, please re-read RFC 793 and RFC 1185.  You still
> >don't have TCP sequence generation done right.
> 
> Actually trying to replace the partial MD4 might be worth an attempt: 
> I'm certain that the partial MD4 is not the best/fastest way to generate 
> sequence numbers.

It infact uses two full SHA1 hashs for tcp sequence numbers (endian and
padding issues aside).  my patch aims to do this in 1 AES256 Encrypt or 2
AES256 encrypts for ipv6.

> >The only real way to settle this would be to ask Jamal and some of the
> >other networking hackers to repeat their benchmarks and see if the AES
> >encryption for every TCP SYN is a problem or not.
> >
> It would be unfair: The proposed implementation is not optimized - e.g. 
> the sequence number generation runs under a global spinlock. On large 
> SMP systems this will kill the performance, regardless of the internal 
> implementation.

This would be nice to have in both RNG implementations.

> For the Linux-variant of RFC 1948, the sequence number generation can be 
> described as:
> A hash function that generates 24 bit output from 96 bit input. Some of 
> the input bits can be chosen by the attacker, all of these bits are 
> known to the attacker. The attacker can query the output of the hash for 
> some inputs - realistically less than 2^16 to 2^20 inputs. A successful 
> attack means guessing the output of the hash function for one of the 
> inputs that the attacker can't query.
> 
> Current implementation:
> Set the MD4 initialization vector to the 96 bit input plus 32 secret, 
> random bits.
> Perform an MD4 hash over 256 secret, random bits.
> Take the lowest 24 bits from one of the MD4 state words.
> Every 5 minutes the secret bits are reset.
> 
> For IPV6, the requirements are similiar, except that the input is 288 
> bits long.
> 
> --
>    Manfred

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-27 19:45 ` Jean-Luc Cooke
@ 2004-09-28  0:07   ` Theodore Ts'o
  2004-09-28  2:24     ` Jean-Luc Cooke
  0 siblings, 1 reply; 35+ messages in thread
From: Theodore Ts'o @ 2004-09-28  0:07 UTC (permalink / raw)
  To: Jean-Luc Cooke; +Cc: Manfred Spraul, Linux Kernel Mailing List

On Mon, Sep 27, 2004 at 03:45:02PM -0400, Jean-Luc Cooke wrote:
> > Actually trying to replace the partial MD4 might be worth an attempt: 
> > I'm certain that the partial MD4 is not the best/fastest way to generate 
> > sequence numbers.
> 
> It infact uses two full SHA1 hashs for tcp sequence numbers (endian and
> padding issues aside).  my patch aims to do this in 1 AES256 Encrypt or 2
> AES256 encrypts for ipv6.

No, that's not correct.  We rekey once at most every five minutes, and
that requires a SHA hash, but in the normal case, it's only a partial MD4.

An AES encrypt for every TCP connection *might* be faster, but I'd
want to time it to make sure, and doing a bulk test ala "openssl
speed" isn't necessarily going to be predictive, as I've discussed earlier.

						- Ted

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-28  0:07   ` Theodore Ts'o
@ 2004-09-28  2:24     ` Jean-Luc Cooke
  2004-09-28 13:46       ` Herbert Poetzl
  0 siblings, 1 reply; 35+ messages in thread
From: Jean-Luc Cooke @ 2004-09-28  2:24 UTC (permalink / raw)
  To: Theodore Ts'o, Manfred Spraul, Linux Kernel Mailing List

On Mon, Sep 27, 2004 at 08:07:19PM -0400, Theodore Ts'o wrote:
> On Mon, Sep 27, 2004 at 03:45:02PM -0400, Jean-Luc Cooke wrote:
> > > Actually trying to replace the partial MD4 might be worth an attempt: 
> > > I'm certain that the partial MD4 is not the best/fastest way to generate 
> > > sequence numbers.
> > 
> > It infact uses two full SHA1 hashs for tcp sequence numbers (endian and
> > padding issues aside).  my patch aims to do this in 1 AES256 Encrypt or 2
> > AES256 encrypts for ipv6.
> 
> No, that's not correct.  We rekey once at most every five minutes, and
> that requires a SHA hash, but in the normal case, it's only a partial MD4.

Pardon, the SYN cookies use two SHA1's, not the TCP sequence numbers.  Easy
to mistake to make with comments "Compute the secure sequence number." in the
secure_tcp_syn_cookie() function.  :)

> An AES encrypt for every TCP connection *might* be faster, but I'd
> want to time it to make sure, and doing a bulk test ala "openssl
> speed" isn't necessarily going to be predictive, as I've discussed earlier.

Agreed.

Was meaning to ask:
  add_timer_randomness()

There is a comment:
  /* if over the trickle threshold, use only 1 in 4096 samples */
  if ( random_state->entropy_count > trickle_thresh &&
	(__get_cpu_var(trickle_count)++ & 0xfff))
		return;

"if (x++ & 0xfff)" will return true 0xfff out of 0x1000 of the time.  Is this
the goal, because I don't think this will trickle control very well.

JLC

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PROPOSAL/PATCH] Fortuna PRNG in /dev/random
  2004-09-28  2:24     ` Jean-Luc Cooke
@ 2004-09-28 13:46       ` Herbert Poetzl
  0 siblings, 0 replies; 35+ messages in thread
From: Herbert Poetzl @ 2004-09-28 13:46 UTC (permalink / raw)
  To: Jean-Luc Cooke
  Cc: Theodore Ts'o, Manfred Spraul, Linux Kernel Mailing List

On Mon, Sep 27, 2004 at 10:24:09PM -0400, Jean-Luc Cooke wrote:
> On Mon, Sep 27, 2004 at 08:07:19PM -0400, Theodore Ts'o wrote:
> > On Mon, Sep 27, 2004 at 03:45:02PM -0400, Jean-Luc Cooke wrote:
> > > > Actually trying to replace the partial MD4 might be worth an attempt: 
> > > > I'm certain that the partial MD4 is not the best/fastest way to generate 
> > > > sequence numbers.
> > > 
> > > It infact uses two full SHA1 hashs for tcp sequence numbers (endian and
> > > padding issues aside).  my patch aims to do this in 1 AES256 Encrypt or 2
> > > AES256 encrypts for ipv6.
> > 
> > No, that's not correct.  We rekey once at most every five minutes, and
> > that requires a SHA hash, but in the normal case, it's only a partial MD4.
> 
> Pardon, the SYN cookies use two SHA1's, not the TCP sequence numbers.  Easy
> to mistake to make with comments "Compute the secure sequence number." in the
> secure_tcp_syn_cookie() function.  :)
> 
> > An AES encrypt for every TCP connection *might* be faster, but I'd
> > want to time it to make sure, and doing a bulk test ala "openssl
> > speed" isn't necessarily going to be predictive, as I've discussed earlier.
> 
> Agreed.
> 
> Was meaning to ask:
>   add_timer_randomness()
> 
> There is a comment:
>   /* if over the trickle threshold, use only 1 in 4096 samples */
>   if ( random_state->entropy_count > trickle_thresh &&
> 	(__get_cpu_var(trickle_count)++ & 0xfff))
> 		return;
> 
> "if (x++ & 0xfff)" will return true 0xfff out of 0x1000 of the time.  Is this
> the goal, because I don't think this will trickle control very well.

and it will 'return' 0xfff times of 0x1000 ...

(just one case (x == 0) will pass this check)

best,
Herbert

> JLC
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

^ permalink raw reply	[flat|nested] 35+ messages in thread

end of thread, other threads:[~2004-09-28 13:46 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2004-09-23 23:43 [PROPOSAL/PATCH] Fortuna PRNG in /dev/random Jean-Luc Cooke
2004-09-24  4:38 ` Theodore Ts'o
2004-09-24 12:54   ` Jean-Luc Cooke
2004-09-24 17:43     ` Theodore Ts'o
2004-09-24 17:59       ` Jean-Luc Cooke
2004-09-24 20:44         ` Scott Robert Ladd
2004-09-24 21:34         ` Theodore Ts'o
2004-09-25 14:51           ` Jean-Luc Cooke
2004-09-24 18:43       ` James Morris
2004-09-24 19:09         ` Matt Mackall
2004-09-24 20:03         ` Lee Revell
2004-09-24 13:44   ` Jean-Luc Cooke
2004-09-27  4:58 ` Theodore Ts'o
     [not found]   ` <20040927133203.GF28317@certainkey.com>
2004-09-27 14:55     ` Theodore Ts'o
2004-09-27 15:19       ` Jean-Luc Cooke
  -- strict thread matches above, loose matches on Subject: below --
2004-09-24  0:59 linux
2004-09-24  2:34 ` Jean-Luc Cooke
2004-09-24  6:19   ` linux
2004-09-24 21:42   ` linux
2004-09-25 14:54     ` Jean-Luc Cooke
2004-09-25 18:43       ` Theodore Ts'o
2004-09-26  1:42         ` Jean-Luc Cooke
2004-09-26  5:23           ` Theodore Ts'o
2004-09-27  0:50             ` linux
2004-09-27 13:07               ` Jean-Luc Cooke
2004-09-27 14:23               ` Theodore Ts'o
2004-09-27 14:42                 ` Jean-Luc Cooke
2004-09-26  6:46           ` linux
2004-09-26 16:32             ` Jean-Luc Cooke
2004-09-26  2:31       ` linux
2004-09-27 18:53 Manfred Spraul
2004-09-27 19:45 ` Jean-Luc Cooke
2004-09-28  0:07   ` Theodore Ts'o
2004-09-28  2:24     ` Jean-Luc Cooke
2004-09-28 13:46       ` Herbert Poetzl

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox