For the last installment of this week for The Information Age I decided to post one last talk and paper featured at CCS 2016 – the 23rd ACM Conference on Computer and Communications Security (Hofburg Palace, Vienna, Austria / October 24-28, 2016). The collection of talks and papers from this conference is a must-see for the Computer Science community and all related fields.
Today's post will be a short one. The paper supporting the talk is again a worthwhile recommendation on the topics of Cybersecurity and Cryptanalysis. But it requires a somewhat deeper knowledge of specific issues in computer and internet security, such as Block Ciphers, ISO standards in cybersecurity and internet communication protocols. I am still in the process of solidifying the knowledge base required to write properly about these issues. Nevertheless, the knowledge I already have encourages me to publish this post, which features the paper and its abstract together with the talk by the researchers Karthikeyan Bhargavan and Gaëtan Leurent from the French research institute Inria.
The heart of the matter of the paper and talk is the security issues surrounding Block Ciphers and the way modern protocols interact with older legacy ones. The issue is a very important one within the cybersecurity community, and some of the challenges presented in this talk remain open:
Internet protocols such as TLS, SSH, and IPsec are agile, in the sense that they are designed to support a wide variety of ciphersuites: combinations of key exchange protocols, encryption schemes, authentication modes, etc. Each protocol implementation may choose to support a different subset of ciphersuites, but two implementations can still interoperate if they can negotiate a common ciphersuite. When it works well, protocol agility can enable a graceful transition from old cryptographic algorithms to new ones. A server can, for example, offer AES-GCM to modern clients while still supporting legacy ciphers like 3DES for older clients that have not yet been upgraded. However, a negative consequence of agility is that old ciphers may never be removed, resulting in implementations that support a few strong modern ciphers, followed by a long tail of obsolete ciphers that are still supported for backwards compatibility, but are known to be cryptographically weak. For example, the OpenSSL library supports five versions of TLS and hundreds of ciphersuites, even though many of these ciphersuites include weak algorithms like RC4 and MD5.
Indeed, this mixing of modern and legacy ciphers still has a rational basis overall:
There are several reasons why practitioners do not consider theoretical weaknesses in cryptographic primitives to be sufficient for their removal from mainstream protocols. First, even if an obsolete primitive is supported, it will typically only be negotiated if one of the parties does not support a modern alternative, in which case, the obsolete cipher is still better than nothing. Second, the attack may not be applicable to the way the primitive is used in the protocol, or it may require too much computation to be considered practical. Third, the attack may require special knowledge about the structure or the content of the plaintext stream which may be difficult to obtain in general. Consequently, protocol implementations tend to support legacy ciphers until a concrete attack is demonstrated in a common usage of the protocol.
In this work, we demonstrate two concrete attacks that exploit collisions on short block ciphers. First, we present an attack on the use of 3DES in HTTPS that can be used to recover a secret session cookie. Second, we show how a similar attack on Blowfish can be used to recover HTTP BasicAuth credentials sent over OpenVPN connections. In our proof-of-concept demos, the attacker needs to capture about 785GB of data, which takes between 19-38 hours in our setting. This complexity is comparable to the recent RC4 attacks on TLS: the only fully implemented attack takes 75 hours. We evaluate the impact of our attacks by measuring the use of 64-bit block ciphers in real-world protocols. We discuss mitigations, such as disabling all 64-bit block ciphers, and report on the response of various software vendors to our responsible disclosure of these attacks.
Birthday paradox attacks and computational complexity
Two strands of inquiry in the research literature on computer science, probability theory and computer security are the Birthday Paradox (problem) and computational complexity theory. Both of these strands form part of the argument structure of this paper. I will briefly sketch the parts of the paper that deal with them, and afterwards share the video of the talk together with some final thoughts, remarks and conclusions from the paper.
Block ciphers like AES and 3DES are widely used for symmetric encryption in security protocols. Mathematically, a block cipher with a κ-bit key and n-bit blocks defines a family of permutations of n-bit strings, indexed by the key. The main security notion is that a block cipher should behave like a pseudo-random permutation (PRP) family: the block cipher E(k) with a random key k should be hard to distinguish from a random permutation.
While block ciphers are required to resist any attack with up to 2ˆn data and 2ˆκ time, most modes of operation are only proven secure up to 2ˆ(n/2) blocks of plaintext, a limit that is commonly called the birthday bound. Indeed, there are attacks matching this limit. In CBC mode, the probability of collisions between two n-bit ciphertext blocks becomes significant after 2ˆ(n/2) blocks because of the birthday paradox.
In general, unless special efforts are taken, almost any mode of operation based on an n-bit block cipher will admit attacks that employ σ = 2ˆ(n/2) blocks. […] Do birthday-bound attacks on CBC, CFB, and OFB actually matter? They are of relatively little concern when the blockcipher has a blocksize of n = 128 bits, but the attacks can be a serious concern when employing a blockcipher of n = 64 bits, requiring relatively frequent rekeying to keep σ << 2ˆ32.
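To make the CBC birthday-bound attack concrete, here is a toy sketch of the collision mechanism. It is not the paper's code: it replaces the 64-bit block cipher with a random permutation over 16-bit blocks, so that collisions appear after a few hundred blocks instead of after gigabytes of traffic. The key observation is that a ciphertext collision C_i = C_j in CBC mode leaks the XOR of two plaintext blocks.

```python
import random

BLOCK_BITS = 16                       # toy block size; 3DES/Blowfish use 64 bits
N = 1 << BLOCK_BITS

rng = random.Random(32)
perm = list(range(N))                 # a random permutation stands in for E(k)
rng.shuffle(perm)

def cbc_encrypt(iv, blocks):
    """CBC mode: C_i = E(P_i XOR C_{i-1}), with C_0 = IV."""
    out, prev = [], iv
    for p in blocks:
        prev = perm[p ^ prev]
        out.append(prev)
    return out

# Encrypt well past the birthday bound: 2^12 blocks >> 2^(n/2) = 2^8.
plaintext = [rng.randrange(N) for _ in range(4096)]
iv = rng.randrange(N)
ct = cbc_encrypt(iv, plaintext)

# Birthday search: since E is a permutation, C_i == C_j implies
# P_i XOR C_{i-1} == P_j XOR C_{j-1}, i.e. P_i XOR P_j == C_{i-1} XOR C_{j-1}.
chain = [iv] + ct                     # chain[i] is the block XORed into P_i
seen, leaks = {}, 0
for i, c in enumerate(ct):
    if c in seen:
        j = seen[c]
        # The leaked relation holds: an attacker who knows P_j learns P_i.
        assert plaintext[i] ^ plaintext[j] == chain[i] ^ chain[j]
        leaks += 1
    seen[c] = i

print(f"{leaks} collisions observed, each leaking the XOR of two plaintext blocks")
```

With these toy parameters roughly a hundred collisions are expected; an attacker who knows one plaintext block in a colliding pair recovers the other, which is exactly how the session-cookie recovery in the paper works at 64-bit scale.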
The other aspect worth a mention in this brief comment is the complexity of the attack problem facing an attacker. This is of interest both from the specific cryptanalysis point of view and from the point of view of the computational complexity of the calculations involved.
Attack Complexity. Let us denote the known fraction of the data as α and the secret and valuable fraction of the traffic as β (with a fraction 1 − α − β that is neither known nor valuable to the attacker). In order to recover some secret information, an attacker must collect roughly 1/(2αβ) collisions, so that one collision is between a valuable secret block and a known block. Following the analysis of Section 2.2, this requires about √(1/αβ) · 2ˆ(n/2) blocks of data.
For instance, with n = 64, in an optimal case for the adversary half of the traffic is known, and half of the traffic is highly valuable (α = β = 1/2). In this case, an attack requires about:
√(1/αβ) · 2ˆ(n/2) = 2 · 2ˆ(n/2) = 2ˆ33
blocks of data, which corresponds to just 64 GB.
For a more concrete scenario, let us assume that the messages are HTTP queries of 512 bytes (64 blocks), with a secret 8-byte cookie (1 block), and that the remainder of the message is known, i.e. α = 63/64 ≈ 1, β = 1/64. The number of blocks needed by an attacker is roughly:
√(1/αβ) · 2ˆ(n/2) ≈ 8 · 2ˆ(n/2) = 2ˆ35
which corresponds to 256 GB.
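As a quick sanity check, the two worked examples above can be reproduced numerically. This is only a sketch of the paper's estimate √(1/αβ) · 2ˆ(n/2), with n = 64 and 8-byte blocks:

```python
import math

def attack_data_bytes(alpha, beta, n=64, block_bytes=8):
    """Approximate data the attacker must observe:
    sqrt(1/(alpha*beta)) * 2**(n/2) blocks, converted to bytes."""
    blocks = math.sqrt(1 / (alpha * beta)) * 2 ** (n / 2)
    return blocks * block_bytes

# Optimal case: half the traffic known, half valuable (alpha = beta = 1/2).
print(attack_data_bytes(0.5, 0.5) / 2**30)         # 64.0 GB

# HTTP scenario: 64-block queries, 1 secret block, the rest known.
# sqrt(4096/63) is about 8.06, so ~258 GB; the paper rounds to 8, i.e. 256 GB.
print(attack_data_bytes(63 / 64, 1 / 64) / 2**30)
```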
The resulting calculations yield data complexities of the following orders:
Summary of Attack Scenario. With a fixed message of size 2ˆu repeatedly encrypted, an attacker that knows a fraction α of the message can recover the k missing blocks of plaintext (k = (1 − α) · 2ˆu) by observing about log(k) · 2ˆ(u−1) collisions. This gives an attack with a data complexity of

√(log(k)) · 2ˆ((n+u)/2)

encrypted blocks. With rekeying every 2ˆr blocks (r < n/2), the complexity becomes:

log(k) · 2ˆ(n+u−r)
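To get a feel for how rekeying raises the attacker's cost, here is a small numerical sketch. The expressions it uses follow from the collision count quoted above (about log(k) · 2ˆ(u−1) collisions needed, collisions accumulating quadratically within each keying period): roughly √(log(k)) · 2ˆ((n+u)/2) blocks without rekeying, and log(k) · 2ˆ(n+u−r) blocks when rekeying every 2ˆr blocks. The scenario parameters (k = 16 unknown blocks in a 2ˆ10-block message) are purely illustrative, not from the paper.

```python
import math

def data_no_rekey(k, u, n=64):
    """Blocks needed without rekeying: sqrt(log2(k)) * 2**((n+u)/2)."""
    return math.sqrt(math.log2(k)) * 2 ** ((n + u) / 2)

def data_with_rekey(k, u, r, n=64):
    """Blocks needed when rekeying every 2**r blocks: log2(k) * 2**(n+u-r)."""
    return math.log2(k) * 2 ** (n + u - r)

# Hypothetical scenario: 64-bit cipher, 2**10-block message, k = 16 unknown blocks.
k, u = 16, 10
print(math.log2(data_no_rekey(k, u)))              # 38.0 -> about 2**38 blocks
for r in (34, 30, 20):                             # smaller r = more frequent rekeying
    print(r, math.log2(data_with_rekey(k, u, r)))  # cost climbs to 2**42, 2**46, 2**56
```

Rekeying well below the birthday bound (small r) pushes the data requirement far beyond what the attacker could plausibly capture, which is why the mitigation section below recommends per-connection data limits.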
The video of the talk and the conclusion
The paper supporting the talk above had a non-conventional conclusion: it finished with two paragraphs of mitigation recommendations and a list of responsible disclosure points. I will briefly transcribe some of those for the reader:
Concretely, we recommend the following measures to prevent our attack:
- Web servers and VPNs should be configured to prefer 128-bit ciphers. According to our scans, about 1.1% of the top 100k web servers from Alexa, and 0.5% of the top 1 million, support AES but prefer to use 3DES.
- Web browsers should offer 3DES as a fallback-only cipher, to avoid using it with servers that support AES but prefer 3DES.
- TLS libraries and web browsers and servers should limit the length of TLS sessions with a 64-bit cipher. This could be done in TLS renegotiation, or in some cases by closing the connection and starting a new one (i.e. limiting HTTP/1.1 Keep-Alive, SPDY, and HTTP/2 with 3DES ciphersuites).
We have communicated our results and concerns to the OpenVPN team, and to various website owners, browser vendors, and TLS libraries. They all acknowledged the issue, and are working on implementing countermeasures. The TLS vulnerability received CVE number CVE-2016-2183.
OpenSSL has moved 3DES ciphersuites from the HIGH category to MEDIUM in the 1.0.2 branch, and will disable it by default in the upcoming 1.1.0 release.
Akamai will offer an option for web server administrators to drop 3DES from the offered ciphers.
Apple has disabled 3DES on icloud.com and is recommending that all its customers disable 3DES on their websites.
Currently, most browsers see about 1% of their connections using 3DES, and vendors consider this number too high to simply disable 3DES on the client side, since too many websites would be broken. So, they are instead considering implementing data limits per connection to force rekeying, or offering 3DES ciphersuites only in a fallback negotiation if no AES ciphersuite is acceptable to the server.
Thank you. See you next week…!