A confident deployment guide for TLS and PKI

Performance

Everybody worries about security, but they worry about performance even more. What use is a secure service that people can’t or don’t want to use? Properly configured, TLS is plenty fast. With little effort, the performance will be good enough. In some cases, it’s even possible to deploy TLS with virtually no performance overhead. In this section, we’ll look at how you can achieve best-in-class performance with some additional effort.

Don’t use too much security

We all like security, but it’s possible to have too much of it. If you go overboard and choose cryptographic primitives that are too strong, your security won’t be better in any meaningful way, but your services will nevertheless be slower, and sometimes significantly so. Most sites should aim to use elements that provide 128 bits of security. We make an exception for DHE, which, at 2,048 bits, provides 112 bits of security. That’s close enough. You will virtually always use ECDHE anyway, which provides a full 128 bits of security.

The next step up is to use primitives that offer 256 bits of security. This is something you might decide to do if you think quantum computing is a realistic threat, either today or in the attack window relevant to you.
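As a concrete illustration, here is what a 128-bit security level might look like in an nginx configuration. This is a sketch, not a universal recommendation; the cipher list, curve choice, and file path are illustrative assumptions:

```nginx
# Illustrative nginx TLS settings at the 128-bit security level.
ssl_protocols TLSv1.2 TLSv1.3;

# TLS 1.2 suites: ECDHE key exchange with AES-128-GCM.
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;

# Prefer X25519 and P-256 for key exchange (~128 bits of security).
ssl_ecdh_curve X25519:prime256v1;

# If DHE must be supported, use 2,048-bit parameters (112 bits).
# ssl_dhparam /etc/nginx/dhparam-2048.pem;  # hypothetical path
```

TLS 1.3 suites are configured separately and already default to 128- and 256-bit AEAD ciphers, so they usually need no tuning.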

Enable session resumption

In TLS terminology, when a client and server complete a successful handshake, they establish a session. Handshakes involve a fair amount of computation, which is why TLS makes it possible to reuse the results of one handshake across subsequent connections over a period of time, typically up to a day. This feature is known as session resumption (or session caching).

Session resumption is an essential performance optimization that ensures smooth operation, even for web sites that don’t need to scale. Servers that don’t use it, or don’t use it well, will perform significantly worse.
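On nginx, for example, resumption via a server-side session cache can be enabled with two directives. The sizes and timeout below are illustrative assumptions:

```nginx
# Illustrative session resumption settings for nginx.
# A cache shared across all worker processes; 10 MB stores
# roughly 40,000 sessions.
ssl_session_cache shared:SSL:10m;

# Allow sessions to be resumed for up to one day.
ssl_session_timeout 1d;
```

If you also rely on session tickets, remember that the ticket keys protect resumed sessions and should be rotated regularly.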

Optimize connection management

In the early days, slow cryptographic operations were the main bottleneck introduced by encryption. Since then, CPU speed has improved greatly, so much so that most sites don’t worry about its overhead. The main overhead of TLS now lies with the increased latency of the handshake. The best way to improve TLS performance is to find ways to avoid handshakes.

Keep connections open

The best approach to avoiding TLS handshakes is to keep existing connections open for an extended period of time and reuse them for subsequent user requests. In HTTP, this feature is known as keep-alives, and using it is a simple matter of web server reconfiguration.
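In nginx, for instance, keep-alives are on by default and are tuned with two directives; the values below are illustrative assumptions rather than recommendations:

```nginx
# Illustrative HTTP keep-alive tuning for nginx.
keepalive_timeout 300s;     # keep idle connections open for 5 minutes
keepalive_requests 1000;    # allow many requests per connection
```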

Use TLS 1.3

The complete redesign of TLS in version 1.3 to improve security was also a good opportunity to improve its performance. As a result, this protocol version reduces full handshake latency by half, compared to the standard handshake in earlier protocol revisions. TLS 1.3 also introduces a special 0-RTT (zero round-trip time) mode, in which TLS adds no additional latency over TCP. Your servers will fly with this mode enabled, but with the caveat that using it opens you up to replay attacks. Because of this, this mode is not appropriate for use with all applications.
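As a sketch, enabling TLS 1.3 and 0-RTT on nginx might look like the following. Because of the replay risk, the example also forwards nginx’s built-in `$ssl_early_data` variable so a backend can reject unsafe early-data requests:

```nginx
ssl_protocols TLSv1.2 TLSv1.3;

# 0-RTT (early data); enable only if the application tolerates replays.
ssl_early_data on;

# When proxying, tell the backend which requests arrived as early data
# so it can refuse non-idempotent ones.
proxy_set_header Early-Data $ssl_early_data;
```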

Use modern HTTP protocols

HTTP has evolved at a rapid pace in recent years. After HTTP/1.1, there was a long period of inactivity, but then we got HTTP/2 and, soon thereafter, HTTP/3. These two releases didn’t really change HTTP itself; instead, they focused on connection management and the underlying transport, with HTTP/3 running over QUIC, which has encryption built in.
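Assuming an nginx build with HTTP/3 support (1.25.x syntax), enabling both modern protocols might look like this sketch; certificate and `server_name` directives are omitted:

```nginx
server {
    # HTTP/2 over TLS and HTTP/3 over QUIC.
    listen 443 ssl;
    listen 443 quic reuseport;
    http2 on;

    # Advertise HTTP/3 support to returning clients.
    add_header Alt-Svc 'h3=":443"; ma=86400';
}
```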

Use content delivery networks

Content delivery networks (CDNs) can be very effective at improving network performance, chiefly because they reduce the network latency of new connections. Normally, when you open a connection to a remote server, you need to exchange some packets with it for the handshake to complete. At best, you need to send your handshake request and receive the server’s response before you can start sending application data. The farther away the server, the worse the latency. CDN nodes, by definition, are going to be close to you, which means that the latency between you and them is going to be small. CDNs that keep connections to origin servers open won’t have to open new connections all the time, leading to potentially substantial performance improvements.
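The same connection-reuse idea can be sketched with nginx acting as an edge proxy in front of an origin; the hostname and pool size here are hypothetical:

```nginx
# Illustrative: reuse connections from an edge node to the origin.
upstream origin {
    server origin.example.com:443;   # hypothetical origin host
    keepalive 32;                    # idle connections kept per worker
}

server {
    location / {
        proxy_pass https://origin;
        proxy_http_version 1.1;
        proxy_set_header Connection "";  # don't force connection close
    }
}
```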

Enable caching of nonsensitive content

An earlier section in this guide recommended that you disable HTTP caching by default. Although that’s the most secure option, not all properties require the same level of security. HTTPS is commonly used for all web sites today, even when the content on them is not sensitive. In that case, you want to selectively enable caching in order to improve performance.

The first step might be to enable caching at the browser level by indicating that the content is private:

Cache-Control: private

If you have a content delivery network in place and want to utilize its caching abilities, indicate that the content is public:

Cache-Control: public

In both situations, you can use other HTTP caching configuration options to control how the caching is to be done.
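Putting it together, a selective policy on nginx might keep the safe default everywhere and open up caching only for a known-nonsensitive path; the path and lifetime below are illustrative assumptions:

```nginx
# Illustrative selective caching: static assets are public and cacheable,
# everything else keeps the safe no-caching default.
location /static/ {                  # hypothetical path
    add_header Cache-Control "public, max-age=86400";
}

location / {
    add_header Cache-Control "no-store";
}
```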

Use fast cryptographic primitives

Measured server-side, the overhead of cryptography tends to be very low, and there aren’t many opportunities for performance improvements. In fact, my recommended configuration is also the fastest, as it prefers ECDSA, ECDHE, and AES with reasonable key sizes. These days, to deploy fast TLS, you generally (1) deploy with ECDSA keys and (2) ensure that your servers support hardware-accelerated AES (e.g., via AES-NI).
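If you still need to support old clients that can’t handle ECDSA, nginx (1.11.0 and later) can serve two certificates from the same server block and let each client negotiate the best one. The file paths are hypothetical:

```nginx
# Illustrative dual-certificate deployment: modern clients negotiate the
# faster ECDSA certificate; legacy clients fall back to RSA.
ssl_certificate     /etc/nginx/example-ecdsa.crt;  # hypothetical paths
ssl_certificate_key /etc/nginx/example-ecdsa.key;
ssl_certificate     /etc/nginx/example-rsa.crt;
ssl_certificate_key /etc/nginx/example-rsa.key;
```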

However, things change somewhat if we look at the performance from the client perspective. Recently there’s been an explosion in the adoption of mobile devices, which use different processors and have different performance characteristics. What’s good for your servers may not be good for mobile phones. So which do you want to optimize?

In practice, some organizations choose to take a performance hit on their servers in order to provide a better end-user experience. This means negotiating ChaCha20 suites with mobile devices, for which ChaCha20 is not only several times faster (in the absence of hardware AES acceleration) but also easier on the battery. But how do we know which clients are mobile devices?

The trick is to use something called an equal-preference cipher suite configuration. Normally, we want our servers to select the best possible cipher suite, and that would naturally be something with AES-GCM. But with mobile devices we want ChaCha20. BoringSSL was the first to introduce a feature in which the client’s list of suites is analyzed to determine whether it prefers ChaCha20 over AES. Only if it does will the server select a ChaCha20 suite over an AES-GCM one. This feature is now also available in OpenSSL.
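With OpenSSL 1.1.1 or later, this behavior is switched on with the PrioritizeChaCha option; on nginx (1.19.4 and later) it can be passed through as a sketch like this:

```nginx
# Illustrative: enforce the server's suite order, but honor a client's
# stated preference for ChaCha20 (typically mobile devices without
# hardware AES).
ssl_prefer_server_ciphers on;
ssl_conf_command Options PrioritizeChaCha;
```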
