SSL & TLS Bad Practices for the Technical Layman

Posted by Arvind Doraiswamy on December 2, 2016 at 9:48 AM

With all the SSL/TLS bugs that seem to come out every month nowadays, as a security penetration tester it's hard for me to remember which bug causes what, how hard the exploit is, and what needs to be done to fix it. Over time, I built up a nice little inventory that I've decided to share with the security community. Note, this is not a one-stop page for all TLS and SSL security issues, just knowledge of some real-world issues I've encountered. Over the course of this blog series, I will discuss some of the attacks that have exploited various features of the SSL/TLS mechanism.

Weak SSL/TLS ciphers (Strength less than 128 bit)

The way SSL/TLS works is that the client and server negotiate what key size to use to bi-directionally encrypt communication. If both the client and the server have their first choice as a 40-bit key, it means that until negotiation happens again, they're going to encrypt traffic using a shared 40-bit key. If someone gets hold of an encrypted traffic dump of this communication, they could try all 2^40 keys, each time decrypting the traffic. When the output starts making sense, they'd stop: that's the key... for THAT session. Not for every session between that client and server.

This is a bad practice. Ciphers that use anything less than a 128-bit key (which today is considered safe) should not be supported. As security professionals, we should start thinking of supporting ciphers that use keys which are at least 256 bits long.
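As a quick way to check this on your own endpoints, here's a minimal sketch using Python's ssl module (which wraps OpenSSL). "HIGH" is an OpenSSL cipher-list keyword for suites with at least 128-bit keys; !aNULL/!eNULL drop unauthenticated and null-encryption suites:

```python
import ssl

# Restrict a client context to high-strength ciphers only.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("HIGH:!aNULL:!eNULL")

# Confirm nothing below 128 bits is left in the offered cipher list.
weak = [c["name"] for c in ctx.get_ciphers() if c["strength_bits"] < 128]
print(weak)  # an empty list means no sub-128-bit cipher is offered
```

The same cipher-list string works in most OpenSSL-backed server configurations, so it's a handy way to express and verify the policy in one place.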


The RC4 Cipher

The reason I call this one out separately is because it was a preferred cipher for all client-server communication for quite some time (I believe this was the case for Google - I'm guessing because it was a really fast symmetric key cipher). However, numerous vulnerabilities have been discovered in RC4, so it should no longer be a preferred cipher, and it should be disabled on every host and service that supports SSL. Thankfully, browser vendors are also moving away from it.
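Disabling RC4 is a one-line cipher-list change. A minimal sketch with Python's ssl module (modern OpenSSL builds already exclude RC4 from DEFAULT, but being explicit documents the intent and survives looser builds):

```python
import ssl

# Explicitly exclude RC4 from the cipher list offered by this context.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("DEFAULT:!RC4")

# Verify no RC4 suite survives in the offered list.
rc4_offered = [c["name"] for c in ctx.get_ciphers() if "RC4" in c["name"]]
print(rc4_offered)  # an empty list means RC4 is gone
```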

Really, Really Old Protocols (SSLv1, SSLv2)

I haven’t encountered the use of either of these protocols for a while, even on test servers; however, that doesn’t mean they’re not out there. If they are, be sure to disable support for both. SSLv1 was apparently never released publicly, which explains why I've only ever seen SSLv2 enabled, even on poorly configured servers. SSLv2 had several vulnerabilities (it even has its own RFC, RFC 6176, prohibiting its use) that affect Confidentiality, Integrity and Availability for any client-server traffic. This is dangerous - I strongly suggest never using it.
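Rather than toggling individual protocol flags, the cleanest fix is to set a protocol floor. A minimal sketch using Python's ssl module (the `minimum_version` attribute needs Python 3.7+); TLS 1.2 is a sensible modern floor, which rules out SSLv2, SSLv3, and early TLS in one go:

```python
import ssl

# Refuse to negotiate anything older than TLS 1.2.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version)  # the floor now sits at TLS 1.2
```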

MD5/SHA1 Signed Things

A signature for any digital content (like certificates) proves that the content has not been modified since the creator made and signed it. A property of this signing process is that no two messages with different content should ever have the same signature. If they do, it means the person reading the message can’t know whether one of the messages was tampered with. The receiver can't know which message is the real one, since both appear to be validly signed by the creator.

Any hash function where it is practically possible to find two messages that have the same hash (a hash collision) should be avoided. There is a well-known attack against MD5 that does exactly this, and I believe it’s theoretically possible, but extremely difficult, to do this with SHA1 too. However, if there's a potential crack in something and you have a feasible alternative... why wait? Migrate to SHA256 or SHA512 for all of your signing needs.
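The migration is usually just a different hash choice at signing time. A tiny sketch with Python's hashlib, comparing digest sizes (the message here is made up for illustration):

```python
import hashlib

message = b"certificate-to-be-signed"

# MD5 (128-bit digest): practical collisions exist - never sign with it.
# SHA-256 (256-bit digest): no known collisions - a sound default today.
md5_digest = hashlib.md5(message).hexdigest()
sha256_digest = hashlib.sha256(message).hexdigest()

print(len(md5_digest) * 4, "bit digest (MD5)")
print(len(sha256_digest) * 4, "bit digest (SHA-256)")
```

The larger digest is necessary but not sufficient; the real point is that SHA-256's design has no known collision attack, while MD5's does.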

Insecure Renegotiation

Renegotiation in an SSL/TLS context effectively says, "Can we please decide again... what protocols and ciphers we're going to use to communicate?" If a client can do this and the server supports it, it means that a man-in-the-middle can inject traffic and force a client to renegotiate.

By getting the timing right, an attacker could piggyback on a client's request, use the client's credentials, and perform functions that the client can perform, all without the client's knowledge. This is similar to how a Cross-Site Request Forgery attack works in a web application.

There's also another exploit that lets attackers DoS the server, along with a tool by THC that automates it. Sure it's a bug, but fixing that bug doesn't fix the overall DoS problem; it defends against one technique, but that's it. There are even arguments that this behavior is by design. The best fix is to not let the client renegotiate at all. If a client does try, the server should reject the request. The configuration that enables this defensive behavior differs from server to server.
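For software that builds its TLS layer directly on OpenSSL, there is a flag for exactly this. A minimal sketch using Python's ssl module (requires Python 3.7+ with OpenSSL 1.1.0h or newer; note that TLS 1.3 removed renegotiation from the protocol entirely):

```python
import ssl

# Configure a server-side context to reject any client-initiated
# renegotiation attempt outright.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.options |= ssl.OP_NO_RENEGOTIATION

print(bool(ctx.options & ssl.OP_NO_RENEGOTIATION))
```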

Lack of OCSP Stapling

Clients need a way to decide whether a server certificate is valid or not. If it isn't, the client shouldn't allow a user to visit that site. Initially there were certificate revocation lists (CRLs), but those were updated only once every X days, so there was always a chance of a user being owned in between updates. Hence OCSP was introduced, where the user's browser asks the certificate authority in real time whether the certificate is still valid before trusting it.

However, the CA's OCSP responder, especially for a busy site, could get overwhelmed if clients keep hitting it to check the validity of the certificate all the time; not to mention that it consumes client resources as well. Hence OCSP stapling, where the server itself periodically queries the CA for its certificate's validity and returns ("staples") the signed response to the client during the handshake. There are no runtime queries from the client directly to the servers belonging to the certificate authorities. There really isn't a security issue here unless you count a DoS against the certificate authorities. If you have a low-capacity internal CA server, it is good to enable OCSP stapling.
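For a concrete picture, this is roughly what enabling stapling looks like in an nginx server block (the directives are real nginx directives; the certificate path and resolver address are placeholders for your own environment):

```nginx
ssl_stapling on;
ssl_stapling_verify on;
# CA chain used to verify the OCSP response that gets stapled
ssl_trusted_certificate /etc/nginx/ssl/ca-chain.pem;
# DNS resolver nginx uses to reach the CA's OCSP responder
resolver 203.0.113.53;
```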

Lack of HSTS

HSTS tells the browser to always send traffic over a secure HTTPS connection. It’s only relevant if the server supports both HTTP (for some legacy reason) as well as HTTPS. If implemented correctly, when a user types http://site, the browser will remember the HSTS header it received earlier and force the traffic over HTTPS.

One thing to note is that the very first request (before any HSTS response has reached the browser) could still go over HTTP, leaving it vulnerable to an attacker man-in-the-middling the connection and owning a user. This naturally isn’t favorable, so it’s beneficial to submit your site to the browsers' HSTS preload lists prior to or immediately after implementing HSTS, to reduce the exposure to this attack. If you only support HTTPS and port 80 isn't open on your web server, you can still configure HSTS, but it doesn't improve your security in any way.
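The header itself is tiny. A minimal sketch as a WSGI app (the app and its response body are made up for illustration; a one-year max-age is a common choice, and the preload directive is what the browser preload lists require):

```python
# Policy: one year in seconds, applied to subdomains, preload-eligible.
HSTS = "max-age=31536000; includeSubDomains; preload"

def app(environ, start_response):
    # Emit the HSTS header on every HTTPS response.
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Strict-Transport-Security", HSTS),
    ])
    return [b"served over HTTPS\n"]
```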

No Forward Secrecy Cipher Support

Let's say an encrypted traffic dump is captured. The captured communication was encrypted using a specific TLS session key, which was exchanged under the protection of the server's long-term asymmetric private key during the initial TLS handshake. If the server's asymmetric private key gets stolen, an attacker could use it to recover those session keys and decrypt all the traffic they captured earlier.

Let's now say that a client-server communication is encrypted using a cipher that supports perfect forward secrecy, such as an ephemeral Diffie-Hellman suite. Even if the private key is stolen, the attacker can't use it to compute the session key and decrypt any of the encrypted traffic they captured. So make sure that you support some strong ciphers that provide forward secrecy, and make one of them your preferred cipher of communication.
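Using Python's ssl module again as a sketch: restrict the pre-TLS-1.3 cipher list to ephemeral-ECDH (ECDHE) suites, which all provide forward secrecy. TLS 1.3 suites are managed separately by OpenSSL and are forward-secret by design:

```python
import ssl

# Offer only ECDHE key exchange with AES-GCM for TLS 1.2 and below.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("ECDHE+AESGCM")

# Every remaining pre-1.3 suite should use ephemeral key exchange.
legacy = [c["name"] for c in ctx.get_ciphers() if c["protocol"] != "TLSv1.3"]
print(legacy)
```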

Topics: developer guidance, crypto

Written by Arvind Doraiswamy

Arvind is a Senior Security Engineer who focuses on conducting security assessments for clients, contributing articles to our secure coding knowledgebase, and writing tools to improve our company's security testing efficiency for clients.