Size of Libgcrypt (and other libraries) and subsequent performance

Simon Josefsson simon at josefsson.org
Fri Apr 25 16:59:34 CEST 2008


"Ashish Gupta" <ashishg2dec at gmail.com> writes:

> Hi Simon,
>
> Thanks for the update. I am currently not in office, however will conduct
> more experiments once I am back.
>
> Meanwhile, the figures related to the run-time overheads are most
> intriguing.  Any comparisons with the way OpenSSL handles its randomness?

I don't have OpenSSL libraries with debug symbols, but if you could do
the comparison for OpenSSL, that would help.

If the libgcrypt randomness code isn't improved, I think we should
start thinking about adding our own PRNG and using it by default.  Here
is how I think it should work (a rough code sketch follows the list):

1. On initialization, read 32 bytes from /dev/urandom and seed an
AES-based PRNG.

2. For the two lesser randomness levels, nonce and random, read data
from the PRNG.

3. For the highest randomness level (e.g., for long-lived RSA keys),
read bytes directly from /dev/random.  Possibly XOR them against the
/dev/urandom-based PRNG output?
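
To make this concrete, here is a rough, untested sketch of what I have
in mind, using libgcrypt's AES-256 in CTR mode as the PRNG (the prng_*
names are just for illustration, not a proposed API):

  #include <stdio.h>
  #include <string.h>
  #include <gcrypt.h>

  static gcry_cipher_hd_t prng_hd;

  /* Step 1: seed an AES-256 cipher in CTR mode with 32 bytes from
     /dev/urandom; the CTR keystream is then the PRNG output.  */
  static int
  prng_init (void)
  {
    unsigned char seed[32];
    unsigned char ctr[16] = { 0 };
    FILE *f = fopen ("/dev/urandom", "rb");

    if (f == NULL)
      return -1;
    if (fread (seed, 1, sizeof seed, f) != sizeof seed)
      {
        fclose (f);
        return -1;
      }
    fclose (f);

    if (gcry_cipher_open (&prng_hd, GCRY_CIPHER_AES256,
                          GCRY_CIPHER_MODE_CTR, 0)
        || gcry_cipher_setkey (prng_hd, seed, sizeof seed)
        || gcry_cipher_setctr (prng_hd, ctr, sizeof ctr))
      return -1;

    return 0;
  }

  /* Step 2: the nonce and random levels read from the PRNG, i.e.,
     encrypt zeros to extract the CTR keystream.  */
  static int
  prng_read (void *buffer, size_t length)
  {
    memset (buffer, 0, length);
    return gcry_cipher_encrypt (prng_hd, buffer, length, NULL, 0)
      ? -1 : 0;
  }

  /* Step 3: the strongest level reads /dev/random directly and XORs
     the result against PRNG output, so that a weakness in either
     source alone does not determine the result.  */
  static int
  prng_read_strong (void *buffer, size_t length)
  {
    unsigned char *out = buffer;
    unsigned char mask[64];
    FILE *f = fopen ("/dev/random", "rb");
    size_t i, n;

    if (f == NULL)
      return -1;
    if (fread (out, 1, length, f) != length)
      {
        fclose (f);
        return -1;
      }
    fclose (f);

    while (length > 0)
      {
        n = length < sizeof mask ? length : sizeof mask;
        if (prng_read (mask, n))
          return -1;
        for (i = 0; i < n; i++)
          *out++ ^= mask[i];
        length -= n;
      }

    return 0;
  }

Since the PRNG output is just the CTR keystream, steps 1 and 2 reduce
to a single 32-byte /dev/urandom read at startup plus cheap AES
operations afterwards.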

As far as I recall, no part of a TLS handshake will require the
strongest randomness level, so all typical GnuTLS applications will at
most read 32 bytes from /dev/urandom.  GnuTLS applications that generate
long-lived keys (normally only 'certtool'?) will read data from
/dev/random.
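
For illustration, an application would then only have to ask for the
right level; something like this (untested, assuming the three-level
gnutls_rnd() call from crypto.h):

  #include <gnutls/gnutls.h>
  #include <gnutls/crypto.h>

  int
  main (void)
  {
    unsigned char nonce[16];
    unsigned char key_material[32];

    gnutls_global_init ();

    /* Handshake-level randomness: served from the urandom-seeded
       PRNG under the proposal, so this stays cheap.  */
    if (gnutls_rnd (GNUTLS_RND_NONCE, nonce, sizeof nonce) < 0)
      return 1;

    /* Long-lived key material: the only level that would touch
       /dev/random directly.  */
    if (gnutls_rnd (GNUTLS_RND_KEY, key_material,
                    sizeof key_material) < 0)
      return 1;

    gnutls_global_deinit ();
    return 0;
  }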

The good thing is that we can experiment with how much performance
improvement this would yield relatively easily, once the crypto.h rnd
code works.
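
E.g., timing a loop of random requests before and after switching the
rnd backend should be enough to see the difference (illustrative only):

  #include <stdio.h>
  #include <time.h>
  #include <gnutls/gnutls.h>
  #include <gnutls/crypto.h>

  int
  main (void)
  {
    unsigned char buf[32];
    struct timespec start, stop;
    int i;

    gnutls_global_init ();

    clock_gettime (CLOCK_MONOTONIC, &start);
    for (i = 0; i < 100000; i++)
      gnutls_rnd (GNUTLS_RND_RANDOM, buf, sizeof buf);
    clock_gettime (CLOCK_MONOTONIC, &stop);

    printf ("%.0f ns per 32-byte request\n",
            ((stop.tv_sec - start.tv_sec) * 1e9
             + (stop.tv_nsec - start.tv_nsec)) / 100000.0);

    gnutls_global_deinit ();
    return 0;
  }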

Thoughts?

/Simon
