Koblitz and Menezes on Safety Margins in Cryptography

Posted by William Whyte on August 5, 2011 at 11:09 AM

Neal Koblitz and Alfred Menezes are two pioneers in the field of Elliptic Curve Cryptography. In recent years, they’ve teamed up to write a series of papers (available at http://anotherlook.ca/) questioning some current practices in academic cryptography. The papers are stimulating and worth a look, and I’ll be posting some more about them. For this post, I’m most interested in the section on safety margins in their most recent paper, “Another look at security definitions” (warning -- f-bomb on page 9).

There’s a school of thought in cryptographic research that says that when you’re designing a scheme or protocol, you should determine the security requirements, design a protocol that exactly meets those requirements, and then make sure you eliminate all elements of the protocol that aren’t necessary to meet those requirements. This gives you the simplest, easiest to implement correctly, and most efficient protocol.

Koblitz and Menezes argue for a different position: unless you are truly resource-constrained, you should be biased towards including techniques that you can see an argument for, even if those techniques seem to be unnecessary within your security model. The reason is simple: your security model may be wrong. (Or it may be incomplete, which amounts to the same thing.)

This attitude seems very wise to me. For a while we at Security Innovation have been arguing that there is one basic assumption underlying almost all Internet protocols: the assumption that it's okay to use a single public-key algorithm, because it won't get broken. But that assumption isn't necessarily right. It's been right up till now, but if quantum computers come along or if an unexpected mathematical breakthrough happens, RSA could be made insecure almost overnight. And if RSA goes, most current implementations of SSL go too, and all Internet activities that use SSL will be seriously disrupted.

We don’t have to operate with these narrow safety margins. It’s easy to design a variant “handshake” for SSL that uses both RSA and NTRU to exchange parts of the symmetric key, each part on its own as long and as secure as the whole key. This would remain secure even if either RSA or NTRU were broken, and the additional processing cost of doing NTRU alongside RSA is negligible. Menezes himself, speaking at the recent ECC conference in Toronto, described this approach as extremely sensible.
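To make the idea concrete, here's a minimal sketch (not the actual SSL handshake, and not a production KDF) of the combining step, assuming the two sides have already run an RSA exchange and an NTRU exchange and each produced a full-length secret. The function name and label are illustrative, not from any standard; the key point is that the session key depends on both secrets, so an attacker who breaks only one algorithm still faces a full-strength unknown input.

```python
import hashlib
import secrets

def combine_secrets(rsa_secret: bytes, ntru_secret: bytes) -> bytes:
    """Derive a 256-bit session key from two independently exchanged secrets.

    Hashing the concatenation means that knowing one input alone tells the
    attacker nothing useful about the output (modeling the hash as a random
    oracle) -- both exchanges must be broken to recover the key.
    """
    return hashlib.sha256(b"hybrid-kdf|" + rsa_secret + ntru_secret).digest()

# Stand-ins for the secrets transported by each algorithm; each is
# full-length (256 bits), so either one alone protects the key.
rsa_part = secrets.token_bytes(32)
ntru_part = secrets.token_bytes(32)

session_key = combine_secrets(rsa_part, ntru_part)
```

Both parties compute the same derivation over the same two secrets, so they arrive at the same session key; the extra cost over a single-algorithm handshake is one hash plus the second (cheap, in NTRU's case) key exchange.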

Yes, there are some places where efficiency really is paramount, and there, naturally, we’d recommend using the highest-performance crypto, which is NTRU. For most devices, however, there’s no reason to let pure efficiency stop you from doing something that makes perfect security sense. We’re encouraged that researchers of the stature of Koblitz and Menezes seem to agree with us, and we’re going to look for ways to spread the word further.

Topics: embedded security

Written by William Whyte

William Whyte is responsible for the strategy and research behind Security Innovation's activities in vehicular communications, security, and cryptographic research. He is chair of the IEEE 1363 Working Group for new standards in public key cryptography and has served as technical editor of two published IEEE standards, IEEE Std 1363.1-2008 and IEEE Std 1609.2-2006, as well as the ASC X9 standard X9.98.