Underscoring just how hard it is to design secure cryptographic software, academic researchers recently uncovered a potentially serious weakness in an early version of the code library protecting Amazon Web Services.
Ironically, s2n, as Amazon’s transport layer security implementation is called, was intended to be a simpler, more secure way to encrypt and authenticate Web sessions. Where the OpenSSL library requires more than 70,000 lines of code to implement the highly complex TLS standard, s2n—short for signal to noise—has just 6,000 lines. Amazon hailed the brevity as a key security feature when unveiling s2n in June. What’s more, Amazon said the new code had already passed three external security evaluations and penetration tests.
Amazon’s June 30 announcement was only a few hours old when Royal Holloway, University of London Professor Kenny Paterson and his colleagues met for lunch at a nearby pub to discuss the security of s2n. Five days later, they presented Amazon engineers with a report showing that the newly unveiled s2n was vulnerable to “Lucky 13,” a TLS attack unveiled in 2013 that made it possible to recover encrypted browser cookies used to access restricted parts of a website. Amazon engineers promptly fixed the errors. In a blog post, Amazon officials said the vulnerable version of s2n was never used in production and that the proof-of-concept attacks “did not impact Amazon, AWS, or our customers, and are not the kind that could be exploited in the real world.”
By Paterson’s estimates, attackers would need to observe about 2²³, or about 8.39 million, encrypted sessions to recover one byte of plaintext. To a layperson, that may sound like an insurmountable challenge. To cryptographers safeguarding the Web, it was an unacceptable risk. Notably, in addition to keeping s2n compact and completing three external reviews, engineers had also put in custom-designed safeguards to harden the TLS implementation against Lucky 13 exploits.
“Our work highlights the challenges of protecting implementations against sophisticated timing attacks,” Paterson and colleague Martin Albrecht wrote in a research paper published Monday. “It also illustrates that standard code audits are insufficient to uncover all cryptographic attack vectors.”
Lucky 13 exploits a subtle timing bug in a TLS mode known as cipher block chaining. Paterson and other researchers behind the Lucky 13 attack discovered CBC-based streams could be manipulated in a way that reveals a limited amount of the plaintext in an encrypted data stream. The plaintext leaked through the timing of error messages that were produced when the modified ciphertexts were processed at a server. The exploit required attackers to receive thousands of different encryptions of the same message. With careful statistical processing, noise in the different timing samples could be eliminated, revealing the target plaintext bytes.
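The statistical step described above—averaging away noise across many samples to expose a tiny timing difference—can be illustrated with a toy simulation. This is a hypothetical sketch, not attack code: the timing values, jitter level, and function names are all invented for illustration, and the real attack measures network responses from a TLS server rather than a local simulation.

```python
import random
import statistics

# Invented numbers for illustration: two code paths whose true processing
# times differ by a fraction of a microsecond, buried in jitter that is
# orders of magnitude larger than the difference itself.
TRUE_DELTA = 0.5   # extra microseconds of MAC work on one path (hypothetical)
JITTER = 20.0      # standard deviation of measurement noise, in microseconds

def sample_time(slow_path: bool) -> float:
    """One noisy timing measurement of a simulated record rejection."""
    base = 1000.0 + (TRUE_DELTA if slow_path else 0.0)
    return base + random.gauss(0.0, JITTER)

def mean_time(slow_path: bool, trials: int) -> float:
    """Average many samples: the noise cancels, the systematic bias remains."""
    return statistics.fmean(sample_time(slow_path) for _ in range(trials))

random.seed(13)
few = mean_time(True, 10) - mean_time(False, 10)            # swamped by noise
many = mean_time(True, 200_000) - mean_time(False, 200_000) # bias emerges
print(f"10 trials per path:      measured delta = {few:+.2f} us")
print(f"200,000 trials per path: measured delta = {many:+.2f} us")
```

With only a handful of samples the measured difference is dominated by jitter, but as the sample count grows the estimate converges on the true half-microsecond gap—the same principle that lets Lucky 13 turn thousands of error-message timings into recovered plaintext bytes.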
In Monday’s paper, Paterson and Albrecht wrote:
We show that s2n—as initially released—was vulnerable to a timing attack on its implementation of CBC-mode ciphersuites. Specifically, we show that the two levels of protection offered against the Lucky 13 attack in s2n at the time of first release were imperfect, and that a novel variant of the Lucky 13 attack could be mounted against s2n.
We stress that the problem we identify in s2n does not arise from reusing OpenSSL’s crypto code, but rather from s2n’s own attempt to protect itself against the Lucky 13 attack when processing incoming TLS records. It does this in two steps: (1) using additional cryptographic operations, to equalise the running time of the record processing; and (2) introducing random waiting periods in case of an error such as a MAC failure.
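The two defensive steps the paper describes can be sketched as follows. This is a hypothetical Python illustration of the general technique—time equalization via dummy cryptographic work, plus a randomized delay on error—and not s2n’s actual C implementation; all function names and constants here are invented for the example.

```python
import hashlib
import hmac
import secrets
import time

BLOCK = 64  # input block size of SHA-256's compression function, in bytes

def equalized_mac_check(key: bytes, data: bytes, tag: bytes,
                        max_len: int) -> bool:
    """Step 1 (sketch): verify the record's MAC, then hash dummy blocks so
    every record length costs roughly the same number of compression calls,
    regardless of how much real data was processed."""
    computed = hmac.new(key, data, hashlib.sha256).digest()
    dummy_blocks = (max_len - len(data)) // BLOCK + 1
    filler = hashlib.sha256()
    for _ in range(dummy_blocks):
        filler.update(b"\x00" * BLOCK)  # burned work; result is discarded
    return hmac.compare_digest(computed, tag)

def reject_with_random_delay() -> None:
    """Step 2 (sketch): on a MAC failure, wait a random duration so the
    error path's timing carries less information to an observer."""
    time.sleep(secrets.randbelow(10_000) / 1e9)  # up to ~10 microseconds
```

The paper’s point is that layers like these can still be imperfect: randomized delays drawn from a known distribution can be averaged out with enough samples, and equalization only works if the dummy work exactly matches the timing profile of the real work it is standing in for.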
by Dan Goodin – Nov 24, 2015 2:40pm UTC