HTTP/2: The Future of the Internet
HTTP/2[1] is about to get very real. The standard has just been finalized, leading browsers are beginning to support it, and so
is Akamai. But why is this important?

The Web has dramatically evolved over the last 20+ years, yet HTTP - the
workhorse of the Web - has not. Web developers have worked around HTTP's limitations, but:
- Performance still falls short of full bandwidth utilization
- Web design and maintenance are more complex
- Resource consumption increases for client and server
- Cacheability of resources suffers
HTTP/2 attempts to solve many of the shortcomings and inflexibilities of HTTP/1.1. Its many benefits include:
- Multiplexing and concurrency: Several requests can be sent in rapid succession on the same TCP connection, and responses can be received out of order -
eliminating the need for multiple connections between the client and the server
- Stream dependencies: The client can indicate to the server which resources are more important than others
- Header compression: HTTP header size is drastically reduced
- Server push: The server can send resources the client has not yet requested (a minimal server-side sketch follows this list)
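To make server push concrete, here is a minimal sketch in Go, whose standard library negotiates HTTP/2 automatically when serving over TLS. The certificate paths, port, and asset names are placeholders, and this is only an illustration of the protocol feature, not a description of any particular CDN or framework implementation.

```go
// Minimal HTTP/2 server push sketch. Go's net/http speaks HTTP/2 over TLS,
// and http.Pusher is only available when the client has negotiated HTTP/2.
package main

import (
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// If the connection is HTTP/2, push the stylesheet before the browser
	// parses the HTML and discovers that it needs it.
	if pusher, ok := w.(http.Pusher); ok {
		if err := pusher.Push("/static/app.css", nil); err != nil {
			log.Printf("push failed: %v", err)
		}
	}
	w.Header().Set("Content-Type", "text/html")
	w.Write([]byte(`<html><head><link rel="stylesheet" href="/static/app.css"></head><body>Hello, HTTP/2</body></html>`))
}

func main() {
	http.HandleFunc("/static/app.css", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/css")
		w.Write([]byte("body { font-family: sans-serif; }"))
	})
	http.HandleFunc("/", handler)
	// ListenAndServeTLS negotiates HTTP/2 via ALPN with capable clients and
	// falls back to HTTP/1.1 for everyone else.
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```

Clients that connect over HTTP/1.1 simply skip the push branch, which is one reason existing applications keep working unchanged.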
You will not need to change your websites or applications to ensure they continue to work properly. Not only will your application code and HTTP APIs continue to work
uninterrupted, but your application will also likely perform better and consume fewer resources on both client and server.
As HTTP/2 becomes more prevalent, organizations looking to benefit from its performance and security features should start thinking about how to prepare their sites and applications to fully capitalize on these new capabilities. Such considerations include:
- Encryption: Applications running over HTTP/2 are likely to see improved performance over secure connections. This is an important consideration for companies contemplating the move to TLS.
- Optimizing the TCP layer: Applications should be deployed with a TCP layer tuned for the switch from multiple short-lived TCP connections to a single long-lived one, especially in how the congestion window is adjusted in response to packet loss.
- Undoing HTTP/1.1 best practices: Many “best practices” associated with applications delivered over HTTP/1.1 (such as domain sharding, image spriting, resource in-lining and concatenation) are not only unnecessary when delivering over HTTP/2, but in some cases may actually degrade performance (see the client-side sketch after this list).
- Deciding what and when to push: Applications that take advantage of HTTP/2's new server push capabilities must be carefully designed to balance performance and utility.
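The client-side sketch below, again in Go, illustrates why workarounds such as domain sharding become unnecessary: the default HTTP client multiplexes concurrent requests to the same host over a single TLS connection when the server negotiates HTTP/2. The host name, asset paths, and request count are placeholders.

```go
// Sketch of HTTP/2 multiplexing from the client side: many concurrent
// requests share one connection, so spreading assets across shard domains
// no longer buys parallelism.
package main

import (
	"fmt"
	"net/http"
	"sync"
)

func main() {
	// The default transport negotiates HTTP/2 over TLS via ALPN when the
	// server supports it.
	client := &http.Client{}

	var wg sync.WaitGroup
	for i := 0; i < 20; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			// Under HTTP/1.1 these requests would queue behind the
			// browser's per-host connection limit or force sharding.
			resp, err := client.Get(fmt.Sprintf("https://example.com/assets/img-%d.png", i))
			if err != nil {
				fmt.Println("request failed:", err)
				return
			}
			resp.Body.Close()
			fmt.Printf("img-%d: %s via %s\n", i, resp.Status, resp.Proto)
		}(i)
	}
	wg.Wait()
}
```

When the server speaks HTTP/2, resp.Proto reports "HTTP/2.0" and all twenty requests can be in flight at once on a single long-lived connection, which is also why TCP congestion-window tuning for that one connection matters more than before.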
Akamai can help and is working hard to address these and additional challenges, including possibly the toughest one: optimizing differently for HTTP/1.1 vs. HTTP/2 connections as
browsers and other clients gradually transition over the next several years.
Akamai supports HTTP/2 in limited beta and is working to broaden availability to additional customers. Akamai's Mark Nottingham chairs the IETF working group that defined the
new standard - demonstrating our commitment to benefiting users, content providers, service providers, developers, and the Internet community at large.
For a preview of how your website or application is likely to perform on HTTP/2 once it is widely supported, you can experiment on the Akamai platform with SPDY/3.1. It is
important to note that Akamai and leading browsers intend to stop supporting SPDY when HTTP/2 is ready to take its place.
For questions, please reach out to your Akamai representative, visit the Akamai Community or click here to contact us.
1. Formerly known as HTTP/2.0 or HTTP 2.0