Re: Rizzo claims implementation attach, should be interesting
Martin Rex <mrex <at> sap.com>
2011-09-23 20:02:44 GMT
David Wagner wrote:
> I've seen the claim made that ~"it is inherently unsafe to mix trusted
> and untrusted content over the same encrypted channel, you should never
> do that, crypto protocols aren't intended to be secure against that"~.
That is not exactly what I wrote or intended to say.
What I said is that SSLv3&TLSv1.0 were clearly and obviously
_NEVER_ designed to protect from adaptive chosen plaintext
attacks when you multiplex confidential information and
attacker-supplied chosen plaintext and have them encrypted
under the same TLS connection state.
I agree that it would be valuable if TLS provided such protection,
because folks on top of TLS would prefer if they could remain ignorant
about any consequence their usage might have on the resulting security.
> I do not think this is a fruitful or sensible way to think about things.
As it turns out, as unpleasant as it may be, that is the only sensible
way to approach the use of TLS.
Any protocol designed for use over TLS must be carefully designed to
deal with all possible attacks against it. As a practical matter,
this means that the protocol designer must be aware of what security
properties TLS does and does not provide, and cannot safely rely on
properties that TLS does not guarantee.
> Modern crypto protocols do aim to make it safe to mix trusted and
> untrusted content over the same channel.
What exactly do you want to say by that?
Does this mean that TLS is not modern, because it obviously never
had a property that its consumers silently relied on?
> They aim to be secure against chosen-plaintext attack;
> and the notion of chosen-plaintext attacks is motivated exactly by
> ensuring it is safe to mix trusted and untrusted data over the
> same channel.
The cbc-based cipher suites of SSLv3&TLSv1.0 *are* as secure
against chosen plaintext attacks as humanly possible.
But the specific usage scenario where the application caller of
TLS provides to the attacker a feedback loop plus SSL record creation
capabilities for an established TLS connection is what subverts
the security properties of SSLv3&TLSv1.0 for the cbc-ciphersuites.
> The designers of SSL and TLS 1.0 presumably intended for SSL to be
> secure against chosen-plaintext attacks, but we've since learned
> that there is a subtle flaw that enables a successful
> chosen-plaintext attack.
You're confusing things. SSLv3&TLSv1.0 are perfectly OK, any flaw
is entirely in the protocols on top of TLS being ignorant about
the attack facilities they open up. It is essential to
understand this difference.
The explicit random IVs of TLSv1.1 make the problem *much* harder
to exploit, but the root cause of the problem is in the application
and still present: letting an attacker perform a chosen plaintext attack.
> It is a surprise that they fell short of this natural goal and are
> vulnerable to chosen-plaintext attack
It may be a surprise to you, but as the TLS specification very clearly
and explicitly states, it MUST NOT be a surprise to any designer of
protocols that run on top of TLS.
> TLS 1.1 introduces revisions to fix this problem, and as
> far as we know, TLS 1.1 is secure against chosen-plaintext attack.
No, TLSv1.1 does *NOT* actually contain a fix!
TLSv1.1 contains _only_ a mitigation: it imposes a distortion on
the "oracle" (equal to the block size of the underlying block cipher).
This mitigation should provide a security margin that is sufficient
to stop those attacks that have been demonstrated, but the underlying
problem still exists.
> I don't think it makes sense to advocate "you should never send both
> trusted and untrusted content over the same crypto channel"; that advice
> leads to absurd results. Often dynamic web pages include both trusted
> and untrusted content; are you going to advocate that no dynamic web
> page should be sent over TLS? In my view, TLS had darn well better be
> secure in that threat model, and as far as we know, TLS 1.1 is.
You're missing the point. The protocol design MUST take into account
the side effects of mixing content from trusted and untrusted sources
and have it encrypted under the same encryption keys. What is of
particular interest is, whether a protocol provides to the attacker
a facility for automated guessing and how many attempts it allows
the attacker to make.
The security of a 4-digit PIN for an ATM card or for a mobile phone
GSM SIM card is ridiculously small. But when the "oracle" only
accepts 3 "guesses" and then locks the card, there still is
sufficient "security margin" for it to offer some amount of
practical security. Automated online usage scenarios and low-entropy secrets
do not go well together. And I would not be surprised if the
issue with PayPal was primarily due to a cookie with a
ridiculously low entropy, rather than a serious weakness in SSL.
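The lockout arithmetic is trivial but worth writing down (numbers are
my illustration, not from any PayPal specifics):

```python
# A 4-digit PIN has only 10**4 equally likely values -- ridiculously
# little entropy.  What saves it is the oracle's guess limit: a lockout
# after 3 wrong attempts bounds an online attacker's success probability.
pin_space = 10 ** 4
allowed_guesses = 3
p_success = allowed_guesses / pin_space
print(p_success)   # 0.0003, i.e. a 0.03% chance per card
```

An unthrottled automated oracle removes exactly this bound, which is
why low-entropy secrets and automated online guessing do not mix.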
> I've also seen claims on this list that ~"CBC mode is inherently
> insecure and should never be used"~. I think that's a little too broad.
I'm in full agreement!
> The problem with TLS 1.0 is not that it uses CBC mode and CBC mode
> is inherently insecure. Rather, the problem is that TLS 1.0 uses a
> special variant of CBC mode, one which turns out to be insecure against
> chosen-plaintext attack. In particular, TLS 1.0 uses "ciphertext block
> chaining", where the last ciphertext block is used as the next IV.
> This way of choosing the IV enables chosen-plaintext attacks.
Actually, letting the attacker know the IV that will be used for
his next plaintext, prior to him choosing that plaintext, is what
creates the problem. It removes the "distortion" that CBC
is meant to add at the first cipher block of every SSL record
(except the very first) in SSLv3&TLSv1.0.
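To make the mechanism concrete, here is a toy Python sketch of my own
(not TLS: the "block cipher" is a keyed-hash stand-in, which suffices
because the demonstration only ever encrypts). It shows how knowing
the next IV in advance turns CBC into a guess-confirmation oracle:

```python
import hashlib

BLOCK = 16
KEY = b"toy-demo-key"

def E(block: bytes) -> bytes:
    """Toy 16-byte 'block cipher': a keyed hash.  Not invertible, but
    we only need the encryption direction for this demonstration."""
    return hashlib.sha256(KEY + block).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(iv: bytes, plaintext: bytes) -> list:
    """CBC over whole 16-byte blocks; returns the ciphertext blocks."""
    blocks, prev = [], iv
    for i in range(0, len(plaintext), BLOCK):
        c = E(xor(prev, plaintext[i:i + BLOCK]))
        blocks.append(c)
        prev = c
    return blocks

# The victim encrypted one secret block; the attacker observed both the
# block C_prev that chained into it and the result C_secret.
c_prev = b"\x00" * BLOCK
secret = b"secret PIN 1234!"                  # exactly one 16-byte block
c_secret = cbc_encrypt(c_prev, secret)[0]     # = E(C_prev XOR secret)

# SSLv3/TLSv1.0 flaw: the IV of the attacker's next record is the last
# ciphertext block of the previous record -- so the attacker KNOWS it.
iv_next = c_secret

def check_guess(guess: bytes) -> bool:
    # Craft P' = IV_next XOR C_prev XOR guess.  CBC then computes
    # E(IV_next XOR P') = E(C_prev XOR guess), which matches the
    # observed C_secret exactly when guess == secret.
    crafted = xor(xor(iv_next, c_prev), guess)
    return cbc_encrypt(iv_next, crafted)[0] == c_secret

print(check_guess(b"secret PIN 1234!"))   # True  -- guess confirmed
print(check_guess(b"secret PIN 9999!"))   # False
```

The known IV cancels out of the chaining equation, which is precisely
the removal of the per-record "distortion" described above.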
> It is worth pointing out that the crypto-theory community has proven that,
> if CBC mode is used properly, it is secure against chosen-plaintext
> attacks. However, those proofs only apply to the standard form of
> CBC mode, where the IV is chosen randomly.
Again, you're missing the point. The IVs in SSLv3&TLSv1.0 _are_
random in exactly the fashion that CBC is designed to work.
The problem isn't the randomness, but that the first IV of
each new SSL record is _predictable_ by the attacker.
For TLS, processing is normally done in quantities of
SSL records, so creating a single new random IV for the
start of the SSL record is sufficient.
Other environments might be (ab-)using CBC in a directly-streaming
fashion (i.e. data is sent as soon as each cipher block is full), and
such a usage scenario would need a random IV for EVERY cipher block
(i.e. it must not use CBC at all).
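The per-record explicit IV point can be sketched the same way (again a
toy keyed-hash stand-in for a block cipher, my own construction): once
the IV is drawn at random after the attacker has committed to his
plaintext, his crafted block no longer collapses to E(C_prev XOR guess):

```python
import hashlib, os

BLOCK = 16
KEY = b"toy-demo-key"

def E(block: bytes) -> bytes:
    # Toy keyed-hash 'block cipher'; encryption direction only.
    return hashlib.sha256(KEY + block).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Victim's secret block, chained from a ciphertext block the attacker saw.
c_prev = b"\x00" * BLOCK
secret = b"secret PIN 1234!"
c_secret = E(xor(c_prev, secret))

# TLSv1.1 style: a fresh random IV per record, chosen AFTER the attacker
# commits to his plaintext, so he can only predict it blind.
predicted_iv = b"\x11" * BLOCK                # attacker's (wrong) prediction
crafted = xor(xor(predicted_iv, c_prev), b"secret PIN 1234!")
actual_iv = os.urandom(BLOCK)                 # what the record really uses
observed = E(xor(actual_iv, crafted))

# The confirmation only fires if predicted_iv == actual_iv: a 2**-128 shot.
print(observed == c_secret)                   # False
```

This is why the explicit IV raises the cost of each "guess" by a factor
on the order of the block size, without changing the application-level
feedback loop itself.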
> In summary, I'd recommend: don't throw out the baby with the bathwater.
> We don't need to throw out CBC mode; TLS 1.1's use of CBC mode is fine.
> There's nothing wrong with sending both trusted and untrusted content over
> the same channel, as long as the crypto protocol is designed properly; and
> as best as we know, TLS 1.1 is fine for that.
I do not feel comfortable with this assertion, because it is technically
untrue; the underlying problem remains. Rather, I would like to
see a quantification of the resulting security margin that this
mitigation provides (something like "if your cipher's block size is X,
you must change your crypto keys every Y MB/GB to maintain an adequate
security margin").
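As a rough back-of-the-envelope comparison of the two regimes (my own
illustrative numbers, not a worked-out bound):

```python
# With a predictable IV and attacker-controlled block boundaries, a
# secret can be confirmed one byte at a time: at most 256 guesses per
# byte position.  With unpredictable per-record IVs, the "oracle" only
# confirms whole blocks, so a blind guess must hit all 128 bits at once.
block_bits = 128                                       # e.g. AES block size
byte_at_a_time = 256 * (block_bits // 8)               # 4096 guesses/block
whole_block_blind = 2 ** block_bits                    # ~3.4e38 guesses/block
print(byte_at_a_time, whole_block_blind)
```

Quantifying how that margin decays with the volume of attacker-observed
records is exactly the kind of statement I would like to see.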