Uses too much entropy (Debian Bug #343085)

Matthias Urlichs smurf at smurf.noris.de
Fri Jan 4 12:38:47 CET 2008


Hi,

Nikos Mavrogiannopoulos:
> However, on the point at hand: I still believe it is a kernel
> issue. A process should not be able to deplete the randomness of a system.

There's a trade-off between "make /dev/urandom as random as possible"
(which means mixing freshly gathered hardware entropy into what it
returns), and "make /dev/random have as many unique, used-only-once,
truly-random bits as possible" (which obviously requires *not* spending
those hardware bits on /dev/urandom readers).

You can't have both.
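
To make the difference concrete, here is a minimal sketch (my own
illustration, error handling mostly omitted) of how the two devices
behave from a program's point of view:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    unsigned char buf[64];
    ssize_t n;

    /* /dev/urandom never blocks: the kernel stretches whatever
     * entropy it has with a cryptographic PRNG. */
    int ufd = open("/dev/urandom", O_RDONLY);
    n = read(ufd, buf, sizeof buf);
    printf("urandom: %zd bytes, immediately\n", n);
    close(ufd);

    /* /dev/random may block (or return a short read) until the
     * kernel's entropy estimate says enough fresh bits have been
     * gathered from hardware events. */
    int rfd = open("/dev/random", O_RDONLY);
    n = read(rfd, buf, sizeof buf);
    printf("random: %zd bytes, possibly after a long wait\n", n);
    close(rfd);

    return 0;
}

On an otherwise idle machine, the second read is the one that a
/dev/urandom-hungry job can starve.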

The kernel defaults to the former, because /dev/random should only be
used if you need to generate an ultra-secure long-term public key, or
something else along those lines.

/dev/urandom is more than sufficient for anything else, to the best of
our knowledge.

I'd hazard that nobody in their right mind would run *anything* on a
system used for generating a highly secure key (other than the key
generator, of course :-P). Thus, jobs which deplete the randomness pool
by reading /dev/urandom are not an issue on such a system.

> There are algorithms that can do that, and there were even rejected
> kernel patches that implemented them.
> 
There's nothing to implement. There already is /dev/urandom. Just use it
(exclusively).
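
For illustration, the whole of what a library needs is something like
the following (my own sketch, not the actual gnutls or libgcrypt code;
a real implementation would also retry on EINTR and keep the descriptor
open across calls):

#include <fcntl.h>
#include <stddef.h>
#include <unistd.h>

/* Fill buf with len bytes from /dev/urandom.
 * Returns 0 on success, -1 on failure. */
static int urandom_bytes(unsigned char *buf, size_t len)
{
    size_t done = 0;
    int fd = open("/dev/urandom", O_RDONLY);

    if (fd < 0)
        return -1;

    while (done < len) {
        ssize_t n = read(fd, buf + done, len - done);
        if (n <= 0) {           /* error or unexpected EOF: give up */
            close(fd);
            return -1;
        }
        done += (size_t)n;
    }
    close(fd);
    return 0;
}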

> A workaround might be to use libgcrypt's random number process
> feature, which uses a single server process to feed other processes
> with entropy (I've never worked with it, so I don't know whether it
> can be used in this case). This might solve the issue.
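
(For reference, an untested sketch of how that would look, assuming
libgcrypt's GCRYCTL_USE_RANDOM_DAEMON control command is the right
switch; I have not run the gcryptrnd daemon myself:

#include <gcrypt.h>

int main(void)
{
    if (!gcry_check_version(GCRYPT_VERSION))
        return 1;

    /* Route this process's future random requests through the random
     * daemon instead of opening /dev/random directly. */
    gcry_control(GCRYCTL_USE_RANDOM_DAEMON, 1);
    gcry_control(GCRYCTL_DISABLE_SECMEM, 0);
    gcry_control(GCRYCTL_INITIALIZATION_FINISHED, 0);

    unsigned char buf[32];
    gcry_randomize(buf, sizeof buf, GCRY_STRONG_RANDOM);
    return 0;
}
)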

Disagree.

* /dev/(u)random is more difficult to subvert than a daemon.

* The kernel has access to more sources of randomness.

-- 
Matthias Urlichs   |   {M:U} IT Design @ m-u-it.de   |  smurf at smurf.noris.de
Disclaimer: The quote was selected randomly. Really. | http://smurf.noris.de
 - -
Computers are useless.  They can only give you answers.
		-- Pablo Picasso




