the "crime" attack on TLS
Phil Pennock
help-gnutls-phil at spodhuis.org
Sat Sep 15 02:30:30 CEST 2012
On 2012-09-14 at 11:41 +0200, Nikos Mavrogiannopoulos wrote:
> On Fri, Sep 14, 2012 at 6:03 AM, Phil Pennock
> <help-gnutls-phil at spodhuis.org> wrote:
> > One thing I noted is that the attack relies upon compression working,
> > while DEFLATE uses a new Huffman tree for each compression block. So if
> > you end a _compression_ block any time you switch sensitivity level
> > within the stream, you protect different parts of the cleartext from
> > each other and this attack shouldn't work.
>
> The issue is that you cannot easily determine when the sensitivity
> level changes. E.g. if TLS is used for a VPN, how are one user's data
> distinguished from another's? Is it enough to assume that data of
> different sensitivity are included in different records?
TLS has always been a "mostly works, kind of" solution for a VPN.
Mind, for a link to a network, IPsec uses one stream for all flows if
memory serves, so it would have the same problem if it used
compression. I've assumed that link-layer TCP-based VPN tools don't
use stream compression because of the latency hit, but use PPP header
compression instead; I've not actually investigated to find out.
Ultimately, unless every application is labelling every byte with
sensitivity levels on the way out, this doesn't seem soluble without
doing something like setting up 32 associations as substrate bearers and
non-deterministically multiplexing across them, bearer-hopping. And
even that is just making it harder, not fixing it.
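For what it's worth, the bearer-hopping idea amounts to something like
the untested sketch below; the pool setup, sequencing and reassembly on
the far side are all waved away:

#include <stdlib.h>
#include <sys/types.h>
#include <gnutls/gnutls.h>

#define N_BEARERS 32

/* Send each chunk over a randomly chosen one of N_BEARERS independent
 * TLS sessions, so that no single compression context ever sees all of
 * the plaintext. */
static ssize_t bearer_send(gnutls_session_t bearers[N_BEARERS],
                           const void *data, size_t len)
{
    int i = rand() % N_BEARERS;   /* a real design would use a CSPRNG */
    return gnutls_record_send(bearers[i], data, len);
}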
> GnuTLS operates very differently from openssl.
Similarly enough that we maintain bindings to both in Exim. :)
But yes, OpenSSL has the BIO abstraction, while GnuTLS just provides
gnutls_record_send() and friends. I know what you mean here, but where
OpenSSL lets you call BIO_flush(), GnuTLS could offer a similar flush
call; or a call which sets a flag in the gnutls_session_t, cleared by
the next send, to coerce the change: on that next send, Z_FINISH the
current compression block and start a new one. It's that, or add a
gnutls_record_send_flagged() which takes a flags value as a fourth
parameter.
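To make that concrete, the shape I'm picturing is roughly the
following; neither name nor signature below exists in GnuTLS today, so
treat it purely as a hypothetical sketch:

/* Hypothetical API sketch only: none of this exists in GnuTLS. */
#include <sys/types.h>
#include <gnutls/gnutls.h>

/* Variant 1: arm a one-shot flag on the session; the next
 * gnutls_record_send() would Z_FINISH the current DEFLATE block and
 * start a fresh one before compressing its payload. */
int gnutls_record_flush_compression(gnutls_session_t session);

/* Variant 2: an extended send which takes a flags word as a fourth
 * parameter. */
#define GNUTLS_SEND_FLUSH_COMPRESSION (1u << 0)
ssize_t gnutls_record_send_flagged(gnutls_session_t session,
                                   const void *data, size_t data_size,
                                   unsigned int flags);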
> Its operation is
> comparable to unix sockets. You provide data for sending in each
> record. That data is then compressed. I'm thinking whether it makes
> sense to use Z_FULL_FLUSH on each record boundary, or drop compression
> altogether.
Given the size of a record, a full flush on every send would probably
work out larger than no compression at all for most workloads, surely?
You still have to emit a fresh Huffman tree as overhead for each block.
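To put a rough number on the overhead, an untested zlib sketch along
these lines, with a small record-sized payload, should show the
fully-flushed output overtaking the raw size:

#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
    char record[] = "MAIL FROM:<sender@example.org>\r\n"; /* one small send */
    unsigned char out[4096];
    z_stream zs;
    size_t raw = 0, compressed = 0;

    memset(&zs, 0, sizeof zs);
    if (deflateInit(&zs, Z_DEFAULT_COMPRESSION) != Z_OK)
        return 1;

    for (int i = 0; i < 100; i++) {
        zs.next_in = (unsigned char *)record;
        zs.avail_in = sizeof record - 1;
        zs.next_out = out;
        zs.avail_out = sizeof out;
        /* Z_FULL_FLUSH ends the current block, emits a sync marker and
         * resets the dictionary, i.e. what a per-record flush would do. */
        if (deflate(&zs, Z_FULL_FLUSH) != Z_OK)
            return 1;
        raw += sizeof record - 1;
        compressed += sizeof out - zs.avail_out;
    }
    deflateEnd(&zs);
    printf("raw %zu bytes, fully-flushed deflate %zu bytes\n",
           raw, compressed);
    return 0;
}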
And unfortunately, where OpenSSH could defer compression start until
after authentication by adding a new protocol feature (its
delayed-compression method), in TLS we don't get that without a
re-handshake.
Which might be one approach: applications which know they have a
security-boundary change they want to protect against need to do a TLS
re-handshake. Expensive, but within the protocol as it stands, and
they could disable compression for the initial handshake and enable it
for the second, to avoid the short-lived initial compression block
wasting overhead.
Without the disable/enable-later compression, this can be done by
clients right now with gnutls_rehandshake(), right? So there's an
immediate fix _possible_ for those who want to keep using compression?
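(For completeness: as I read the manual, gnutls_rehandshake() is the
server-side request to renegotiate; a client simply runs the handshake
again on the live session. An untested sketch of what I mean:)

#include <gnutls/gnutls.h>

/* Renegotiate on an already-established session, e.g. at a security
 * boundary, so that a fresh compression context is used afterwards. */
static int renegotiate(gnutls_session_t session)
{
    int ret;

    do {
        ret = gnutls_handshake(session);
    } while (ret < 0 && gnutls_error_is_fatal(ret) == 0);

    return ret;  /* 0 on success, a negative GnuTLS error code otherwise */
}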
Also, please bear in mind that many uses of TLS do not exchange
authentication information inside the stream; for instance, the majority
of SMTP/TLS sessions today. Now, admittedly they're not doing
verification either, so they have bigger problems, but there's a draft out to
solve that problem and if I have time, I will be adding support for it
to Exim before the next release.
Emails tend to compress very well, so if you can find a way to keep
compression around, that would be appreciated. Making it default to off
and require explicit enabling might be a reasonable approach, since I
suspect that the majority of small programs doing crypto with TLS will
be using HTTP over it.
-Phil