Introduction
In an era where milliseconds can define the user experience, understanding networking protocols is not just a matter of technical curiosity but a necessity for developers, businesses, and anyone invested in the digital landscape. HyperText Transfer Protocol (HTTP) is the foundation protocol of data exchange on the web. It facilitates fetching resources, such as HTML documents, that allow web pages to load in your browser, making online communication and content delivery possible.
The Internet has witnessed a remarkable transformation from the slow-loading pages of the early days to the lightning-fast web experiences of today. The limitations of HTTP/1.1 became evident quickly: although it supports persistent connections, each connection can serve only one request at a time, so browsers open many parallel connections, leading to inefficiencies and slower page loads. HTTP/2 addressed these limitations by allowing multiple requests and responses over a single connection to improve speed and efficiency.
HTTP/3 is the next phase of web communication. It overcomes some significant shortcomings of HTTP/2 to better meet the requirements of real-time communication use cases. It brings benefits like reduced connection establishment time, improved security measures, and enhanced performance.
This article explores key differences between HTTP/2 and HTTP/3 and their impact on application performance and user experience.
{{banner-15="/design/banners"}}
Summary of HTTP/3 vs HTTP/2
| | HTTP/2 | HTTP/3 |
|---|---|---|
| Transport | TCP | QUIC (over UDP) |
| Multiplexing | Multiple streams over one TCP connection, subject to transport-level head-of-line blocking | Independent streams; a lost packet stalls only its own stream |
| Connection establishment | TCP three-way handshake plus a separate TLS handshake | Combined transport and TLS 1.3 handshake, with 0-RTT resumption |
| Security | TLS optional in the specification but ubiquitous in practice | TLS 1.3 built into the protocol |
| Error recovery | Ordered TCP retransmission | Per-stream loss recovery |
| Network changes | Connection tied to IP addresses; breaks on network switch | Connection IDs allow migration across networks |

The rest of this article explores these differences in detail.
Protocol
The main distinction between HTTP/2 and HTTP/3 lies in their underlying transport protocols: TCP and QUIC, respectively. HTTP/2 runs over TCP, which has been the backbone of Internet communication for decades and offers reliable, ordered delivery of a byte stream between applications. However, TCP has particular challenges.
HTTP/2 TCP limitations
TCP ensures data integrity by delivering bytes to the application strictly in order: regardless of the TCP implementation, the application layer cannot read data until everything before it in the stream has arrived. If a packet is lost, subsequent packets must wait until the missing packet is retransmitted and received. This causes delays and head-of-line blocking in multiplexed connections.
Another shortcoming is that establishing a TCP connection requires a three-way handshake. This introduces latency that is easily noticeable when each round trip to a distant server is expensive.
Benefits of HTTP/3 QUIC
HTTP/3 runs over QUIC, which in turn runs over UDP. Unlike plain UDP, QUIC provides reliability and security while addressing TCP's limitations:
- QUIC resolves the head-of-line blocking issue by providing independent data streams that do not block each other, making it more resilient to packet loss and varying network conditions.
- QUIC incorporates a cryptographic handshake (using TLS 1.3) into its connection establishment process, reducing the number of round trips required to establish a secure connection.
The net effect is better network performance: fewer round trips before the first byte of application data and no transport-level head-of-line blocking.
Tools like Catchpoint can measure and compare web application latency using HTTP/2 and HTTP/3. By monitoring the time data travels from the server to the client, developers can see the tangible benefits.
Not all web browsers fully support QUIC. Safari, for example, has treated QUIC support as an experimental feature, meaning it might not be enabled by default for all users. Developers and users looking to test or experience HTTP/3 can check whether their browser negotiates QUIC using Cloudflare's test page.
Multiplexing
Multiplexing is a technique that allows multiple messages or requests and responses to be transmitted simultaneously over a single connection.
HTTP/2 introduced multiplexing over TCP, allowing multiple streams of data to be interleaved and sent over a single connection without interference from one another. This minimized latency and optimized connection utilization. However, multiplexing over TCP faces limitations due to TCP's inherent characteristics.
HTTP/2 TCP limitations
Despite multiplexing, TCP's sequential nature causes head-of-line blocking. Head-of-line blocking occurs when the first packet of data (the "head") for a particular stream (a single request/response cycle within the multiplexed connection) is delayed or lost. Subsequent packets cannot be delivered to the application until the missing packet arrives. This means that even though HTTP/2 can issue multiple requests in parallel, a single lost packet stalls delivery on every stream until it is retransmitted.
Another issue is that while HTTP/2 theoretically allows stream prioritization, the underlying TCP connection cannot distinguish between streams, potentially leading to suboptimal resource allocation.
HTTP/3 multiplexing & stream prioritization
In HTTP/3, each stream is independent. If a packet is lost, only the affected stream waits for retransmission, while others continue unaffected, significantly improving the web experience under packet loss conditions. HTTP/3 thus does not suffer from TCP head-of-line blocking.
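To see the difference concretely, here is a toy JavaScript model (illustrative only; the packet sequence, stream names, and helper functions are made up, not real protocol code) of which data the application can read after one packet is lost:

```javascript
// Toy model: packets tagged with a stream id; packet 2 is lost in transit.
// With a single ordered byte stream (TCP), nothing after the gap is readable;
// with independent streams (QUIC), only the affected stream stalls.
const packets = [
  { seq: 1, stream: 'css' },
  { seq: 2, stream: 'js' },   // lost in transit
  { seq: 3, stream: 'img' },
  { seq: 4, stream: 'css' },
];
const lost = new Set([2]);

function readableTcp(packets, lost) {
  const readable = [];
  for (const p of packets) {
    if (lost.has(p.seq)) break; // a gap blocks everything behind it
    readable.push(p.stream);
  }
  return readable;
}

function readableQuic(packets, lost) {
  // Only packets on a stream that itself lost data are blocked.
  const blockedStreams = new Set(
    packets.filter(p => lost.has(p.seq)).map(p => p.stream)
  );
  return packets
    .filter(p => !lost.has(p.seq) && !blockedStreams.has(p.stream))
    .map(p => p.stream);
}

console.log(readableTcp(packets, lost));  // ['css']
console.log(readableQuic(packets, lost)); // ['css', 'img', 'css']
```

In the TCP case, the CSS and image data behind the lost JavaScript packet are stuck even though they arrived intact; in the QUIC case, only the JavaScript stream waits for retransmission.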
HTTP/3 also manages stream prioritization more effectively because it is built to understand and handle streams independently. Stream prioritization lets more important data skip the line and get to you faster. This is useful because not all data is equally important when browsing the web or using apps: fetching the text and main images before less critical items means you can start reading without waiting for every little detail to load.
In HTTP/2, prioritization directives are sent to the server, but their effect can be undermined by head-of-line blocking: TCP treats all data as a single queue, so a hiccup with one item makes everything behind it wait.
In contrast, HTTP/3 allows for more granular control and reliability in prioritization. The QUIC connection can handle multiple independent streams for different data types, utilizing critical network resources more efficiently.
For example, consider a web page that loads a mixture of critical CSS, JavaScript, and image files. Effective stream prioritization ensures that the essential resources are loaded first, enhancing the user experience.
HTML/JavaScript snippet for prioritizing resource loading
<!-- We don't want a high priority for this in-viewport but unimportant image -->
<img src="/images/in_viewport_but_not_important.svg" fetchpriority="low" alt="I'm an unimportant image!">
<!-- We want to initiate an early fetch for a resource and also prioritize it -->
<link rel="preload" href="/js/script.js" as="script" fetchpriority="high">
<script>
// Trigger a low-priority fetch; valid priority values are 'high', 'low', and 'auto'
fetch('https://example.com/', {priority: 'low'})
  .then(response => {
    // Process the response
  });
</script>
Connection establishment
TCP, which HTTP/2 runs over, uses a three-way handshake to establish a connection. Here's a simplified overview:
- SYN: The client sends a synchronization packet to the server.
- SYN-ACK: The server acknowledges and responds with its synchronization packet.
- ACK: The client acknowledges the server response and establishes the connection.
This method, while reliable, introduces latency right from the start. Setting up a secure connection (TLS) then adds more round trips on top: a full TLS 1.2 handshake needs two, and TLS 1.3 needs one, further compounding the latency and affecting the overall user experience.
HTTP/3 uses QUIC over UDP. UDP itself requires no handshake before data can be sent, but QUIC introduces its own handshake mechanism for security. It combines the transport and cryptographic handshakes into one exchange, reducing the steps needed to establish a secure connection, and it supports 0-RTT (zero round-trip time) resumption for servers the client has communicated with before, letting data transmission begin with the first packet. Despite the speedup, security is not compromised, as QUIC incorporates the latest encryption standard (TLS 1.3) from the outset.
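As a rough illustration, connection setup cost can be modeled as the number of round trips needed before the first HTTP request, multiplied by the network round-trip time. The counts below are the textbook values (1 RTT for the TCP handshake, 2 for a full TLS 1.2 handshake, 1 for TLS 1.3, and one combined round trip, or zero on resumption, for QUIC); the function and key names are ours, for illustration only:

```javascript
// Round trips required before the first HTTP request can be sent.
const SETUP_ROUND_TRIPS = {
  'tcp+tls1.2': 3, // 1 RTT TCP handshake + 2 RTT TLS 1.2 handshake
  'tcp+tls1.3': 2, // 1 RTT TCP handshake + 1 RTT TLS 1.3 handshake
  'quic': 1,       // combined transport + TLS 1.3 handshake
  'quic-0rtt': 0,  // resumed connection: request rides in the first flight
};

function setupLatencyMs(protocol, rttMs) {
  return SETUP_ROUND_TRIPS[protocol] * rttMs;
}

// On a 100 ms RTT path (e.g., a transatlantic link):
console.log(setupLatencyMs('tcp+tls1.2', 100)); // 300
console.log(setupLatencyMs('quic', 100));       // 100
console.log(setupLatencyMs('quic-0rtt', 100));  // 0
```

The longer the round trip to the server, the larger QUIC's head start before the first byte of application data.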
Imagine a user visiting an eCommerce site for the first time using a browser and network that support HTTP/3: the secure connection is established quickly, and the user starts browsing with noticeably reduced loading times compared to HTTP/2. For repeat visitors, the 0-RTT feature can make connection establishment almost instantaneous. Users get quicker access to content, and you get an improved user experience and potentially higher search rankings.
{{banner-2="/design/banners"}}
Security
Although the HTTP/2 specification does not mandate TLS, browsers only implement HTTP/2 over TLS, so its real-world deployment enhanced web traffic security. This move towards encryption protected data in transit and set the stage for more secure web interactions. However, the reliance on TCP and TLS 1.2 introduced certain complexities and limitations.
HTTP/2 TLS 1.2 limitations
While TLS 1.2 and later versions offer robust security features, they also require careful configuration to avoid vulnerabilities and ensure compatibility. Mismatches in supported cipher suites between clients and servers, the use of deprecated cryptographic algorithms, and the overhead of managing certificates can complicate deployments and affect performance and security.
The combined overhead of TCP and TLS can also be resource-intensive on servers. This can lead to scalability challenges, as each connection requires its own encryption and state management resources.
New or "cold" TCP connections, especially those negotiating TLS, can experience a "warm-up" time before reaching optimal performance. This is due to TCP's slow start mechanism and the time it takes to negotiate and cache TLS session keys.
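The slow-start portion of that warm-up can be sketched with a toy model of how the congestion window (cwnd) doubles each round trip until the slow-start threshold is reached. The numbers and function name below are illustrative, not taken from any specific TCP stack:

```javascript
// TCP slow start: the congestion window doubles each RTT until it hits
// the slow-start threshold, so cold connections under-use the link for
// the first several round trips.
function cwndAfterRtts(initialCwnd, rtts, ssthresh) {
  let cwnd = initialCwnd;
  for (let i = 0; i < rtts; i++) {
    cwnd = Math.min(cwnd * 2, ssthresh);
  }
  return cwnd;
}

// With a common initial window of 10 segments:
console.log(cwndAfterRtts(10, 0, 640)); // 10 segments in the first flight
console.log(cwndAfterRtts(10, 3, 640)); // 80 segments after three RTTs
```

A connection needs several round trips before it can fill a fast link, which is why connection reuse and fewer handshakes matter so much for perceived performance.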
HTTP/3 TLS 1.3 benefits
HTTP/3 integrates TLS 1.3, the latest version of the encryption protocol, which simplifies and strengthens the security mechanisms and removes the outdated cipher suites and vulnerabilities associated with older versions. It also provides forward secrecy: each connection uses unique encryption keys, so even if long-term keys are compromised, past communications remain secure.
The nginx configuration below (for nginx 1.25 or later, built with HTTP/3 support) shows how a server can serve HTTP/3 alongside HTTP/2 using TLS 1.3. Note that TLS 1.3 cipher suites are enabled by default and are not selected via the ssl_ciphers directive.
server {
    # QUIC/HTTP 3 and TCP listeners on the same port
    listen 443 quic reuseport;
    listen 443 ssl;
    http2 on;
    http3 on;
    ssl_certificate /etc/ssl/certs/your_domain.crt;
    ssl_certificate_key /etc/ssl/private/your_domain.key;
    ssl_protocols TLSv1.3;
    # Allow QUIC 0-RTT resumption
    ssl_early_data on;
    # Advertise HTTP/3 availability to clients connecting over TCP
    add_header Alt-Svc 'h3=":443"; ma=86400';
}
Performance
Performance on the Web refers to the efficiency and speed with which Web resources (like HTML pages, images, and videos) are transferred from a server to a client's browser. HTTP/2 allows multiple data streams to share a single TCP connection.
In contrast, HTTP/3 redesigns this approach by providing separate, independent streams for each data flow. This eliminates transport-level head-of-line blocking, since issues in one stream do not affect the delivery of others, and it enhances web performance by reducing latency, improving throughput, and increasing reliability in diverse network conditions.
- The zero round-trip time (0-RTT) feature allows clients to send data to a server during the initial handshake if they have communicated before.
- Page load times improve.
- Throughput improves.
Tools like Catchpoint, with its real-user monitoring capabilities, enable businesses to gather data on how real users experience their websites under both protocols. You can monitor if users experience improvements in metrics such as page load times and interaction readiness.
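You can also verify which protocol was actually negotiated from the browser itself: the standard Resource Timing API exposes a nextHopProtocol field that reports values like 'h2' or 'h3' per resource. The helper below is a minimal sketch that runs on mock data so it works anywhere; in a real page you would pass it performance.getEntriesByType('resource'):

```javascript
// Count resource-timing entries by negotiated protocol.
// In a browser: countByProtocol(performance.getEntriesByType('resource'))
function countByProtocol(entries) {
  const counts = {};
  for (const e of entries) {
    const proto = e.nextHopProtocol || 'unknown';
    counts[proto] = (counts[proto] || 0) + 1;
  }
  return counts;
}

// Mock entries standing in for real PerformanceResourceTiming objects
const mockEntries = [
  { name: '/app.js', nextHopProtocol: 'h3' },
  { name: '/style.css', nextHopProtocol: 'h3' },
  { name: '/legacy.png', nextHopProtocol: 'h2' },
];
console.log(countByProtocol(mockEntries)); // { h3: 2, h2: 1 }
```

A quick tally like this shows whether your assets are really being served over HTTP/3 or silently falling back to HTTP/2.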
Error recovery
Network congestion, faulty infrastructure, and other issues cause data packet loss over the Internet. HTTP/2 relies on TCP to detect the loss and retransmit. TCP does a good job of ensuring that data arrives complete and in order, but the process can be slow.
Early experimental versions of Google's QUIC included a Forward Error Correction (FEC) mechanism, which is like having a backup plan for your data packets: if some get lost in transit, enough information is included with the others to reconstruct what's missing at the receiver's end without resending anything. FEC was dropped from the standardized QUIC (RFC 9000), which instead recovers losses per stream, so a retransmission delays only the stream that lost a packet rather than the whole connection.
Server push
HTTP/2 introduced server push as a way for servers to preemptively send resources to the client, like CSS or JavaScript files needed to render a webpage, without waiting for the browser to request them. The idea was to save time and load web pages faster by eliminating round trips between the browser and the server. However, there were challenges:
- Sometimes servers might push resources that the browser already has cached, leading to wasted bandwidth.
- Browsers could not tell the server what they needed, making unwanted server pushes more common.
Despite its good intentions, server push in HTTP/2 wasn’t always used to its full potential due to these inefficiencies.
In contrast, HTTP/3 gives clients better control over what gets pushed, reducing wasted data transfers. A client explicitly grants the server room to push via MAX_PUSH_ID frames, capping how many pushes are allowed, and can effectively disable server push by never granting any. Such dynamic interaction between the client and server allows pushing to align more closely with the client's current state and requirements.
Also, improved multiplexing and stream prioritization in HTTP/3 mean server pushes are less likely to clog the network if something else is being sent simultaneously.
Accessibility
Accessibility is about ensuring websites and online services work well for everyone, regardless of how fast their internet connection is or where they are in the world.
Network changes
Mobility can be challenging as HTTP/2 and TCP connections are tightly linked to IP addresses. So, when your device switches from one network to another (say, from your home Wi-Fi to your mobile network), it usually means your IP address changes. TCP doesn't handle this change gracefully; it requires the connection to be re-established, which can interrupt whatever you are doing online.
HTTP/3 is designed to handle these scenarios much more smoothly because it does not rely on IP addresses to maintain a connection. Instead, QUIC uses connection IDs: each connection is assigned a unique ID that remains constant even if the underlying network changes.
This means that if your device switches networks, it can keep the connection alive and prevent interruptions. Ongoing activities (like video calls or file transfers) continue without interruption.
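A toy sketch of why this works: a TCP connection is identified by its source/destination address 4-tuple, so a new client IP means a new key and an orphaned connection, while QUIC looks connections up by connection ID. The data and helper names below are made up for illustration:

```javascript
// TCP identifies a connection by its 4-tuple; QUIC by a connection ID.
function tcpKey(conn) {
  return `${conn.clientIp}:${conn.clientPort}->${conn.serverIp}:${conn.serverPort}`;
}

const tcpConns = new Map();
const quicConns = new Map();

const conn = {
  clientIp: '203.0.113.5', clientPort: 51000,
  serverIp: '198.51.100.7', serverPort: 443,
  connectionId: '7f3c9a',
};
tcpConns.set(tcpKey(conn), conn);
quicConns.set(conn.connectionId, conn);

// The device roams from Wi-Fi to cellular: the client IP changes.
const roamed = { ...conn, clientIp: '192.0.2.44' };

console.log(tcpConns.has(tcpKey(roamed)));       // false: connection lost
console.log(quicConns.has(roamed.connectionId)); // true: connection survives
```

After the address change, the server still recognizes the QUIC connection by its ID, so in-flight transfers continue without re-establishing anything.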
Distance from server
HTTP/2 over TCP has limitations when serving users who are geographically distant from servers, where every round trip is costly. HTTP/3 over QUIC mitigates this through improved handshake efficiency, congestion control, and loss recovery.
Tools like Catchpoint's network monitoring capabilities allow you to analyze how effectively QUIC handles packet loss compared to TCP. This is crucial for applications requiring high reliability across varied network conditions.
Conclusion
HTTP/3's use of QUIC over UDP addresses HTTP/2's limitations by eliminating transport-level head-of-line blocking, reducing latency, and enhancing overall web performance and security. It ensures a faster and more secure online experience with uninterrupted service regardless of changes in the user's network environment. Integrating tools like Catchpoint can help you measure, understand, and optimize the impact of these features.